
Sunday, May 18, 2025

Unsupervised AI Children

[An image of a construction vehicle operated by a robot. There is a scooper attachment on the front of the vehicle that has scooped up several children. The vehicle is at the edge of a cliff and seems at risk of the robot accidentally or intentionally dropping the children over the edge.]

Recent “AI” hype

Since the introduction of the Large Language Model (LLM), the pace of new tools and technologies has been breathtaking. Those who are not producing such tech are scrambling to figure out how to use it. Literally every day there's something new.

Against this backdrop, Google has recently announced a technology it calls AlphaEvolve, which it summarizes as “a Gemini-powered coding agent for designing advanced algorithms.” According to one of its marketing pages:

“Today, we’re announcing AlphaEvolve, an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas.”

Early Analysis

The effects of such new technologies are hard to predict, but let's start with what's already been written.

In an article in Ars Technica, tech reporter Ryan Whitwam says of the tech:

“When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.”

It's interesting to note that I found this commentary by Whitwam via AlphaEvolve's Wikipedia page, which had already re-summarized what he said as this (bold mine, to establish a specific focus):

“its architecture allows it to evaluate code programmatically, reducing reliance on human input and mitigating risks such as hallucinations common in standard LLM outputs.”

Whitwam hadn't actually said “mitigating risks,” though he may have meant it. His more precise language, “increase its accuracy,” speaks to the much narrower goal of optimizing specific modeled algorithms, and not to the broader area of risk. These might seem the same, but I don't think they are.

To me—and I'm not a formal expert, just someone who's spent a lifetime thinking informally about computer tech ethics—risk modeling has to include a lot of other things, most especially the question of how well the chosen model really captures the real problem to be solved. LLMs give the stagecraft illusion of speaking fluidly about the world itself in natural language terms, and that creates all kinds of risks: simple misunderstandings between people because of the chosen language, as well as failures to capture all parts of the world in the model.

Old ideas dressed up in a new suit

In a post about this tech on LinkedIn, my very thoughtful and rigorously meticulous friend David Reed writes:

“30 years ago, there was a craze in computing about Evolutionary Algorithms. That is, codes that were generated by random modification of the source code structure and tested against an “environment” which was a validation test. It was a heuristic search over source code variations against a “quality” or “performance” measure. Nothing new here at all, IMO, except it is called “AI” now.”
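To make Reed's description concrete, here is a minimal sketch, in Python, of the kind of evolutionary loop he's talking about. Everything in it is made up for illustration: the “program” is just a string, and the “environment” is a fixed validation target. It shows the general technique, not AlphaEvolve's actual machinery.

import random

# Toy evolutionary search: random modification of a candidate's "source,"
# tested against an "environment" (here, a fixed validation target).
# All names and numbers are hypothetical; real systems mutate source code
# and validate against test suites.

TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
POP_SIZE = 50
MUTATION_RATE = 0.05

def fitness(candidate):
    # The "quality" or "performance" measure: characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Random modification of the candidate's structure.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in candidate)

def evolve(max_generations=2000):
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    for generation in range(max_generations):
        if best == TARGET:
            break
        # Breed mutated copies of the best candidate; keep the most promising.
        population = [best] + [mutate(best) for _ in range(POP_SIZE)]
        best = max(population, key=fitness)
    return generation, best

print(evolve())

What Google describes appears to be the same shape of loop, with the mutation step replaced by an LLM proposing code changes and the fitness function replaced by the “automated evaluators that verify answers” from its announcement.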

I admit I haven't looked at the tech in detail, but I trust Reed's assertion that the current iteration of the tech is less grandiose than Google's hype suggests—at least for now.

But that doesn't mean more isn't coming. And by more, I don't necessarily mean smarter. But I do mean that it will be irresistible for technologists to turn this tech upon itself and try exactly what Google sounds like it wants to claim here: that unsupervised evolutionary learning will soon mean “AI”—in the ‘person’ of LLMs—can think and evolve on its own.

Personally, I'm confused about why people even see this as a good goal, as I discussed in my essay Sentience Structure. You can read that essay if you want the detail, so I won't belabor the point here. I guess it comes down to some combination of the euphoria some people feel over just doing something new and a serious commercial pressure to be the one who invents the next killer app.

I just hope it's not literally that—an app that's a killer.

Bootstrapping analysis by analogy

In areas of new thought, I reason by analogy to situations of similar structure in order to derive some sense of what to expect: I observe what happens in the analogy space and then project back into the real world to anticipate what might happen with the analogously situated artifacts. Incidentally, it's a technique I learned from a paper (MIT AIM-520) written by Pat Winston, head of the MIT AI Lab back when I was studying and working there long ago — when what we called “AI” was something entirely different.

Survey of potential analogy spaces

Capitalism

I see capitalism as an optimization engine. But any optimization engine requires boundary conditions in order not to crank out nonsensical solutions. Optimization engines are not “smart,” but they do a thing that can be a useful tool in achieving smart behavior.

Adam Smith, whom some call the father of modern capitalism, suggested that if you want morality in capitalism, you must encode it in law; the engine of capitalism will not find it on its own. He predicted that, absent such encoding, capitalists would tend toward being tyrants.
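Smith's point maps neatly onto the optimization framing above. Here's a tiny Python illustration, with invented options and numbers: the same engine picks the tyrannical choice or the decent one depending entirely on whether the boundary condition has been encoded into its score.

# Purely illustrative options and numbers.
options = {
    "dump waste in river": {"profit": 100, "harm": 90},
    "treat waste":         {"profit": 60,  "harm": 5},
}

def best_choice(score):
    # The "engine": maximize whatever the score function measures.
    return max(options, key=lambda name: score(options[name]))

print(best_choice(lambda o: o["profit"]))
# -> "dump waste in river" (morality is not in the model)

print(best_choice(lambda o: o["profit"] - 2 * o["harm"]))
# -> "treat waste" (a legal penalty encodes the boundary condition)

The engine itself never changed; only the score did. That is Smith's warning in miniature.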

Raising Children

Children are much smarter than some people give them credit for. We sometimes think of kids getting smarter with age or education, but really they gain knowledge and context and, eventually, we hope, empathy. Young children can do brilliant but horrifying things, things that might hurt themselves or others, things we might call sociopathic in adults, all for lack of understanding of context and consequence. We try to watch over them as they grow up, helping them grow out of this.

It's why we sometimes try kids differently than adults in court: they may fail to understand the consequences of their actions.

Presuppositions

We in the general public, the existing and future customers of “AI,” are being trained by tools like ChatGPT to think of an “AI” as something civil, because the conversations we have with them are civil. But with this new tech, all bets are off: it's just going to want to find the shortest path to its goal.

LLM technology has no model of the world at all. It is able to parrot things, to summarize things, to recombine and reformat things, and to do a few other interesting tricks that combine to give some truly dazzling effects. But it does not know things. Still, for this discussion, let's suspend disbelief and assume that there is some degree of modeling going on in this new chapter of “AI,” at least enough that the system can pursue what it thinks will improve its score.

Raising “AI” Children

Capitalism is an example of something that vaguely models the world by assigning dollar values to a great many things. But many of us find ourselves routinely frustrated by capitalism because it seems to behave sociopathically. Capitalists want to keep drilling for oil when it's clear that doing so is going to drive our species extinct, for example. But it's profitable. In other words, the model says this is a better score because the model is monetary. It doesn't measure safety, happiness (or cruelty), sustainability, or a host of other factors unless a dollar score is put on them. The outcome is brutal.

My 2009 essay Fiduciary Duty vs The Three Laws of Robotics discusses in detail why this behavior by corporations is not accidental. But the essence of it is that businesses do the same thing that sociopaths do: they operate without empathy, focusing single-mindedly on themselves and their profit. In people, we call that sociopathy. Since corporations are sometimes called “legal people,” I make the case in the essay that corporations are also “legal sociopaths.”


Young children growing up tend to be very self-focused, too. They can be cruel to one another in play, and grownups need to watch over them to make sure that appropriate boundaries are placed on them. A sense of ethics and personal responsibility does not come overnight, but a huge amount of energy goes into supervising kids before turning them loose on the world.

And so we come to AIs. There is no reason to suspect that they will perform any differently. They need these boundary conditions, these rules of manners and ethics, a sense of personal stake in the world, a sense of relation to others, a reason not to behave cruelly to people. The plan I'm hearing described, however, falls short of that. And that scares me.

I imagine they think this can come later. But this is part of the dance I have come to refer to as Technology's Ethical Two-Step. It has two parts. In the first part, ethics is seen as premature and gets delayed. In the second part, ethics is seen as too late to add retroactively. Some nations have done better than others at regulating emerging technology. The US is not a good example of that. Ethics is something that's seen as spoiling people's fun. Sadly, though, an absence of ethics can spoil more than that.

Intelligence vs Empathy

More intelligence does not imply more empathy. It doesn't even imply empathy at all.

Empathy is something you're either wired for or taught. But “AI” is not wired for it and not taught it. As Adam Smith warned, we must build it in. We should not expect it to be discovered. We need to require it in law and then proactively enforce that law, or else we should not give “AI” the benefit of the doubt.

Intelligence without empathy ends up just being oblivious, callous, cruel, sociopathic, evil. We need to build “AI” differently, or we need to be far more nervous and defensive about what we expect “AI” that is a product of self-directed learning to do.

Unsupervised AI Children—what could possibly go wrong?

The “AI” technologies we are making right now are children, and the suggestion we're now seeing is that they be left unsupervised. That doesn't work for kids, but at least we don't give kids control of our critical systems. The urgency here is far greater because of the accelerated way these things are finding their way into mission-critical situations.

 


Author's Notes:

If you got value from this post, please “Share” it.


The graphic was created at abacus.ai using RouteLLM (which referred me to GPT-4.1) and rendered by GPT Image. I did post-processing in Gimp to add color and adjust brightness in places.

Saturday, March 21, 2009

Intuition and Knowledge

I wrote a post recently, Knowledge and Intuition, in which I created some minor confusion.

[Yang and Yin: Intuition and Knowledge]

This follow-up post doesn't make any major new points; it just clarifies my previous intent so that, hopefully, I can build upon it another day.

It's ok to stop reading now if that's not your cup of tea. But if you haven't read the other one, you should read this first and then the other. It will make more sense that way.

My previous remarks were intended not to capture the mechanism of intuition as much as to attach a name to the goals, attitudes, expectations, hopes and even fears we have starting out in life.

And when I speak of children, I mean it in the most general and encompassing sense: people who have not yet been tested by life. People who have lived in the protective shell provided by their parents and society, and who have never had to fend for themselves in the world as it really is, with the responsibilities that society is prepared to place on them as first-class individuals.

For want of a better word, I refer to our starting image of the world as our “intuition.” It is a guided intuition, but it is an intuition nonetheless. Unlike the minds of the other animals, ours are equipped by nature to allow some of our intuitions to be downloaded from our parents. But what we teach children about civics is only intuition compared to the reality of what it is to try to get what you need from a real-world government. What we teach people about having a job (or not having one), or about having a family, is just an intuition compared to the experience of actually doing it. For purposes of this discussion, that quality which cannot be downloaded in advance, and which is the tangible texture of life played out, I call “knowledge.”

Those words have other meanings in other contexts, and I'm not trying to co-opt or limit their meanings. I'm just trying to establish a window into my mind so you can see how I think about these things using the words I prefer.

And so, having been educated in our youth, we develop an expectation of how the world will play out. We imagine what the world will be. We have our intuitions. But the world is not, in fact, what we imagine. It cannot play out simply in the ways we imagine. What we come to know of the world will be at odds with those intuitions.

For some, life is a struggle between people and the world around them: how much can a person affect the world, and how much does it affect them? It's easy for knowledge to wear down intuition; it's important to remember to constantly refresh one's intuitions so that we can make the world more like we'd like it to be, not just make ourselves more like the world wants us to be.

I may use these terms again, so I wanted to at least clarify my intent. And it may also make some of my meaning in the original post clearer.


Author's Notes:

If you got value from this post, please “Share” it.

Public domain yin/yang symbol obtained from Wikipedia.
Text and composed artwork copyright © 2009 by Kent M. Pitman.

This post is a sequel to my earlier post:
Knowledge and Intuition

Originally published March 21, 2009 at Open Salon, where I wrote under my own name, Kent Pitman.

Tags (from Open Salon): politics, knowledge, intuition, balance, understanding, contentment, war, peace, hunger, suffering, child, children, adults, adulthood, yin, yang, life, life lesson, philosophy, teacher, student, talk, listen

Thursday, March 19, 2009

Knowledge and Intuition

Note to the Reader: When first published, this article confused some readers by my use of the term “intuition.” I am not attempting here to define any general concept of intuition. Rather, I am noting that knowledge fills a gap that represents a preconception about what we expect to learn, or what we think about the world before we are given more mature or better-accepted knowledge. We assume our preconceptions, what I here call “intuitions,” are not as good as the “knowledge” that seeks to replace them, but the reality is occasionally more complex. This article is about that surprising complexity.

If this post oversimplifies things, that's probably good. It will make my point clearer. The point is not technical anyway; it is intuitional. It can only be injured by adding technical clarity.

[Yin and Yang: Knowledge and Intuition]

Knowledge and intuition are Yin and Yang, complementary opposites.

We begin our lives with intuitions about what we expect the world to be. Through our growth, we acquire knowledge, often at the expense of our early intuitions. We spend a lot of our time learning why the world cannot be what we hoped it would be.

It is the rare person who succeeds in acquiring knowledge without losing his vision of why he wanted that knowledge, of what justified the expense of acquiring that knowledge.

Our early instruction of children emphasizes simple truths, sometimes oversimplifying, but offering echoes of what we wish the world really were. Or sometimes even what we used to wish the world was, before we forgot that wishes were of value.

Children know how they want the world to be. They want it free of guns, of violence, of war. They want no one denied health care or left starving.

We explain why these are not realistic goals, why they never could be. Soon enough, they forget they even wanted them. Then we smile approvingly and call them adults.

Computer novices ask for computers to be smart. But we explain to them how to articulate their problems well enough that they can Google for workarounds. Soon enough, they are so proud of their own ability to overcome computer stupidity that they've forgotten it would be better if they didn't have to. Then we smile again approvingly and call them computer literate.

Knowledge wears down intuition.

We bring children into the world in part to remind ourselves as a society of what we started out to be. Not having yet become jaded, they ask anew the hard questions we'd forgotten we used to ask. All too quickly, the reflexive temptation is to answer them, rather than to hear their inquiries as an opportunity for reflection: Are we going in the direction we set out to? Are we sure there was no other way?

They often try to find that better way. Sometimes they learn, as we did, that it's elusive. But sometimes they do better than those who came before them. In many ways, the virtue is in the trying.

If you're an expert who speaks routinely with others who know less about your area of expertise, always remember that they may have something you have lost—that in offering your knowledge, perhaps, if you also listen, you'll be lucky enough to recover some of the intuition you lost in acquiring it.


Author's Notes:

If you got value from this post, please “Share” it.

There is a sequel to this post, Intuition and Knowledge, which adds some clarifications and additional thoughts.

Public domain yin/yang symbol obtained from Wikipedia.
Text and composed artwork copyright © 2009 by Kent M. Pitman.

Originally published March 19, 2009 at Open Salon, where I wrote under my own name, Kent Pitman.

Tags (from Open Salon): politics, knowledge, intuition, balance, understanding, contentment, war, peace, hunger, suffering, child, children, adults, adulthood, yin, yang, life, life lesson, philosophy, teacher, student, talk, listen