Showing posts with label risk. Show all posts

Sunday, May 18, 2025

Unsupervised AI Children

[An image of a construction vehicle operated by a robot. There is a scooper attachment on the front of the vehicle that has scooped up several children. The vehicle is at the edge of a cliff and seems at risk of the robot accidentally or intentionally dropping the children over the edge.]

Recent “AI” hype

Since the introduction of the Large Language Model (LLM), the pace of new tools and technologies has been breathtaking. Those who are not producing such tech are scrambling to figure out how to use it. Literally every day there's something new.

Against this backdrop, Google has recently announced a technology it calls AlphaEvolve, which it summarizes as “a Gemini-powered coding agent for designing advanced algorithms.” According to one of its marketing pages:

“Today, we’re announcing AlphaEvolve, an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas.”

Early Analysis

The effects of such new technologies are hard to predict, but let's start with what's already been written.

In an article in Ars Technica, tech reporter Ryan Whitwam says of the tech:

“When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.”

It's interesting to note that I found this commentary by Whitwam from AlphaEvolve's Wikipedia page, which had already re-summarized what he said as this (bold mine to establish a specific focus):

“its architecture allows it to evaluate code programmatically, reducing reliance on human input and mitigating risks such as hallucinations common in standard LLM outputs.”

Whitwam hadn't actually said “mitigating risks,” though he may have meant it. His more precise language, “improving accuracy,” speaks to a much narrower goal of specific optimization of modeled algorithms, and not to the broader area of risk. These might seem the same, but I don't think they are.

To me—and I'm not a formal expert, just someone who's spent a lifetime thinking about computer tech ethics informally—risk modeling has to include a lot of other things, but most specifically questions of how well the chosen model really captures the real problem to be solved. LLMs give the stagecraft illusion of speaking fluidly about the world itself in natural language terms, and that creates all kinds of risks of simple misunderstanding between people because of the chosen language, as well as failures to capture all parts of the world in the model.

Old ideas dressed up in a new suit

In a post about this tech on LinkedIn, my very thoughtful and rigorously meticulous friend David Reed writes:

“30 years ago, there was a craze in computing about Evolutionary Algorithms. That is, codes that were generated by random modification of the source code structure and tested against an ‘environment’ which was a validation test. It was a heuristic search over source code variations against a ‘quality’ or ‘performance’ measure. Nothing new here at all, IMO, except it is called ‘AI’ now.”
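For readers who haven't met this older idea, the loop Reed describes can be sketched in a few lines. This toy example (evolving a string toward a fixed target, with names and numbers of my own invention, nothing from AlphaEvolve itself) shows the shape: random mutation, a validation test serving as the “environment,” and selection of the most promising candidates:

```python
import random

TARGET = "algorithm"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    # The "environment": a validation test scoring each candidate.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str) -> str:
    # Random modification of the candidate's structure.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

def evolve(pop_size: int = 20, generations: int = 500) -> str:
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the most promising half, discard the rest...
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        if fitness(survivors[0]) == len(TARGET):
            return survivors[0]
        # ...and refill the population with mutated copies.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

random.seed(0)
print(evolve())
```

Nothing in this loop understands what an algorithm is; it only climbs a score. That, as I read Reed, is the heart of both the old technique and the new one.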

I admit I haven't looked at the tech in detail, but I trust Reed's assertion that the current iteration of the tech is less grandiose than Google's hype suggests—at least for now.

But that doesn't mean more isn't coming. And by more, I don't necessarily mean smarter. But I do mean that it will be irresistible for technologists to turn this tech upon itself and try exactly what Google sounds like it's wanting to claim here: that unsupervised evolutionary learning will soon mean “AI”—in the ‘person’ of LLMs—can think and evolve on their own.

Personally, I'm confused by why people even see it as a good goal, as I discussed in my essay Sentience Structure. You can read that essay if you want the detail, so I won't belabor that point here. I guess it comes down to some combination of a kind of euphoria that some people have over just doing something new combined with a serious commercial pressure to be the one who invents the next killer app.

I just hope it's not literally that—an app that's a killer.

Bootstrapping analysis by analogy

In areas of new thought, I reason by analogy to situations of similar structure in order to derive some sense of what to expect, by observing what happens in analogy space and then projecting back into the real world to what might happen with the analogously situated artifacts. Coincidentally, it's a technique I learned from a paper (MIT AIM-520) written by Pat Winston, head of the MIT AI lab back when I was studying and working there long ago — when what we called “AI” was something different entirely.

Survey of potential analogy spaces

Capitalism

I see capitalism as an optimization engine. But any optimization engine requires boundary conditions in order to not crank out nonsensical solutions. Optimization engines are not “smart” but they do a thing that can be a useful tool in achieving smart behavior.

Adam Smith, who some call the father of modern capitalism, suggested that if you want morality in capitalism, you must encode it in law, that the engine of capitalism will not find it on its own. He predicted that absent such encoding, capitalists would tend toward being tyrants.

Raising Children

Children are much smarter than some people give them credit for. We sometimes think of kids getting smarter with age or education, but really they gain knowledge and context and, eventually, we hope, empathy. Young children can do brilliant but horrifying things, things that might hurt themselves or others, things we might call sociopathic in adults, for lack of understanding of context and consequence. We try to watch over them as they grow up, helping them grow out of this.

It's why courts sometimes try kids differently than adults: they may fail to understand the consequences of their actions.

Presuppositions

We in the general public, the existing and future customers of “AI,” are being trained by use of tools like ChatGPT to think of an “AI” as something civil because the conversations we have with them are civil. But with this new tech, all bets are off. It's just going to want to find a shorter path to the goal.

LLM technology has no model of the world at all. It is able to parrot things, to summarize things, to recombine and reformat things, and a few other interesting tricks that combine to give some truly dazzling effects. But it does not know things. Still, for this discussion, let's suspend disbelief and assume that there is some degree of modeling going on in this new chapter of “AI”: that the system can, in some sense, tell when it thinks it can improve its score.

Raising “AI” Children

Capitalism is an example of something that vaguely models the world by assigning dollar values to a great many things. But many of us find ourselves routinely frustrated by capitalism because it seems to behave sociopathically. Capitalists want to keep extracting oil when it's clear that it is going to drive our species extinct, for example. But it's profitable. In other words, the model says this is a better score because the model is monetary. It doesn't measure safety, happiness (or cruelty), sustainability, or a host of other factors unless a dollar score is put on those. The outcome is brutal.
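The scoring point can be made concrete with a toy sketch. The projects and numbers below are invented purely for illustration; the only point is that the very same optimizer picks differently depending on what its model measures:

```python
# Each hypothetical project: (name, profit, external harm) — invented numbers.
projects = [
    ("drill more oil",  100, 90),
    ("renewable plant",  60, 5),
    ("do nothing",        0, 0),
]

def monetary_score(project):
    # The model measures only dollars.
    _, profit, _ = project
    return profit

def broader_score(project, harm_price=1.0):
    # The same optimizer, but the model now puts a price on harm.
    _, profit, harm = project
    return profit - harm_price * harm

best_by_money = max(projects, key=monetary_score)
best_by_broader = max(projects, key=broader_score)
print(best_by_money[0])    # the purely monetary model prefers oil
print(best_by_broader[0])  # pricing in harm flips the choice
```

The optimizer never became smarter or kinder between the two runs. Only the model changed, which is exactly why what the model leaves out matters so much.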

My 2009 essay Fiduciary Duty vs The Three Laws of Robotics discusses in detail why this behavior by corporations is not accidental. But the essence of it is that businesses do the same thing that sociopaths do: they operate without empathy, focusing single-mindedly on themselves and their profit. In people, we call that sociopathy. Since corporations are sometimes called “legal people,” I make the case in the essay that corporations are also “legal sociopaths.”


Young children growing up tend to be very self-focused, too. They can be cruel to one another in play, and grownups need to watch over them to make sure that appropriate boundaries are placed on them. A sense of ethics and personal responsibility does not come overnight, but a huge amount of energy goes into supervising kids before turning them loose on the world.

And so we come to AIs. There is no reason to suspect that they will perform any differently. They need these boundary conditions, these rules of manners and ethics, a sense of personal stake in the world, a sense of relation to others, a reason not to behave cruelly to people. The plan I'm hearing described, however, falls short of that. And that scares me.

I imagine they think this can come later. But this is part of the dance I have come to refer to as Technology's Ethical Two-Step. It has two parts. In the first part, ethics is seen as premature and gets delayed. In the second part, ethics is seen as too late to add retroactively. Some nations have done better than others at regulating emerging technology. The US is not a good example of that. Ethics is something that's seen as spoiling people's fun. Sadly, though, an absence of ethics can spoil more than that.

Intelligence vs Empathy

More intelligence does not imply more empathy. It doesn't even imply empathy at all.

Empathy is something you're wired for, or that you're taught. But “AI” is not wired for it and not taught it. As Adam Smith warned, we must build it in. We should not expect it to be discovered. We need to require it in law and then actively enforce that law, or we should not give it the benefit of the doubt.

Intelligence without empathy ends up just being oblivious, callous, cruel, sociopathic, evil. We need to build “AI” differently, or we need to be far more nervous and defensive about what we expect “AI” that is a product of self-directed learning to do.

Unsupervised AI Children—what could possibly go wrong?

The “AI” technologies we are making right now are children, and the suggestion we're now seeing is that they be left unsupervised. That doesn't work for kids, but at least we don't give kids control of our critical systems. The urgency here is far greater because of the accelerated way these things are finding themselves in mission-critical situations.

 


Author's Notes:

If you got value from this post, please “Share” it.

You may also enjoy these other essays by me on related topics:

The graphic was created at abacus.ai using RouteLLM (which referred me to GPT-4.1) and rendered by GPT Image. I did post-processing in Gimp to add color and adjust brightness in places.

Saturday, July 4, 2020

Death by Smugness

Just Getting Started

The numbers are going up. In round numbers, it's now about 2.5 million cases and 125,000 deaths. So about 5%.

So one in twenty of us who get it can expect to die unless we find an effective vaccine or a cure. Meanwhile, our job isn't just to avoid spreading something, but to avoid spreading something we cannot see and don't know is there.

By nature, we prefer to react to visible threats. As a species we invented science as a kind of superpower to help us with invisible threats, to let us see ahead to coming things that might matter but are beyond our senses. But as individual members of our species, we struggle with accepting the things science tells us.

2.5 million infected. It sounds like a lot. But given how easily this virus is transmitted, and given the sense of extreme urgency to “return to normal” we see played out on the news every day, it could soon enough be 250 million infected and 12.5 million dead. With one in twenty of those who get it dying, we may be just getting started.
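For the skeptical reader, the arithmetic behind these round numbers is easy to check:

```python
cases = 2_500_000
deaths = 125_000

cfr = deaths / cases               # naive deaths-to-cases ratio
print(f"{cfr:.1%}")                # prints: 5.0%

# Projecting the same ratio onto a hundredfold spread:
projected_cases = 250_000_000
projected_deaths = projected_cases * cfr
print(f"{projected_deaths:,.0f}")  # prints: 12,500,000
```

A raw deaths-to-cases ratio is a crude measure, distorted by testing gaps and reporting lag, but it's the back-of-envelope figure being used here.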

Invisibility Plays Tricks on Us

The difficulty of fighting something invisible is that you don't know if you are fighting it. You might be. You must convince yourself to behave as if every encounter mattered. Just in case.

And yet the paradox is that you become adept at thinking, "I am good at this. I am daily fighting this thing, and winning. I am an expert." It's a natural feeling. But deadly wrong.

The truth is that every experience might matter. Things we do or things we have previously done might have saved our lives. But then again, maybe not. With an invisible threat, we have no proof that anything we have done is working. The virus might simply not have reached us yet. It might be we haven't yet faced it.

It's tedious to keep taking precautions. But, unlike us, the virus is not bored with how things are going. It's patiently looking for a way in. We mustn't give it that opening.

The Avoidable Danger

Yes, some people are being stupid, and that will cost. Maybe they will get sick or die. Maybe nothing will happen directly to them but they will pass things on to others. There is probably nothing we can do to keep people who are bent on doing stupid things from actually doing them. It's not a perfect world.

But some of us are trying to do the right thing, and even we can get tricked because invisibility is hard to reason about. That is the danger I see. That is the avoidable danger. We have to make sure we're thinking right.

We've been doing this awhile now, and our urge is to declare ourselves experts. We think we've seen it. We think we're good at it. We think we can streamline it. A few people go back to work, and no one has died, so we figure we're doing it right and maybe a few more can come back. That's faulty reasoning.

We can take a test, but as soon as we're out of the room where we took it, we're contacting things again. We do not go through the day with an aura of testedness protecting us. We can contract the virus from the doorknob as we leave the testing room.

The one thing we know, as there are more cases, is that there will be more chances to find out that what we are doing is insufficient. But we do not know if we are being daily stressed and our defenses are good, or if we're just lucky our neighbors have been careful, and so the virus hasn't reached us at all.

A Deadly, Paradoxical Conclusion

With more and more virus out there, we're tempted to conclude we are surviving more and more onslaught. But we cannot know. For now there is only one thing to do: Be relentlessly safe.

No, let me put that in even stronger terms. Be more safe. Don't think yourself practiced. Think of yourself as still new, still learning, still all too able to make mistakes if you fail to pay attention. Rather than try to streamline what you're doing, find ways to bolster your protections, because what you're doing so far may not be enough as numbers rise and the invisible enemy is ever more likely to be really making contact.

Some of the risk can't be avoided. The existence of people too lazy or indifferent to care may be an inevitability. But getting too smug about that can kill us, too. We need to all stay humble in the face of this, so we don't fail to address the issues that are within our control simply for not having taken the time to look for them.


If you got value from this post, please “share” it.

Sunday, August 4, 2013

Breaching the Social Contract

Part 1 | Part 2 | Part 3

When discussing the really obscene amounts of money some rich folks have amassed in the world, the justification I always hear is: “It's their due. They are the ones taking the risk, so they should get the spoils.” I understand why they say that, but I still don't buy it. It's at best an oversimplification, and at worst just a clever lie designed to put a nice face on institutionalized inequity.

Of course, lie or not, I don't doubt that the rich believe the excuse. They need to believe this wealth is rightly earned through the risks they took. It soothes their conscience to believe that. So they repeat the excuse a lot, and they find after a while that they do believe it.

“They risked everything to get there,” say others, trying to vary the wording just a little. In this form, the parallel between “risk everything” and “get everything” makes it seem ever so fair, like a carefully balanced scale, with the same quantity on both sides: “risk everything, win everything.” What could be more obvious justice? Except the “everything” risked was only one's private fortune, if one even had one at the time—they may have had little to lose—and the “everything” to be won is a lot more.

The net worth of the Walton family, who inherited the Wal-Mart retail chain, was pegged earlier this year at $115.7 billion. I somehow doubt that they risked that much to get that much. Even the late brothers Bud and Sam Walton, who founded the company and I'm sure worked quite hard, still had human limits on what they could contribute. In the present day, I'll bet the combined Walton family wealth exceeds the combined wealth of every one of their employees, probably every employee past and present. Did the family work harder than all of the other employees put together? That seems an unlikely truth.

If the rich really did have a way to take unbounded risk in exchange for unbounded reward, that might be a different matter. But bankruptcy laws generally place a bound on risk, preventing folks from having to pay back more than a certain amount in really extreme cases. This allows them to start over, sometimes even more than once. We coddle our rich in ways we don't our poor.

By “rich” here, of course I mean the class of folks that feel entitled to be rich, because even when they are without money, they are rarely without a whole social network who sees them as differently poor than those who were always poor, and who understand that these particular poor need to be made rich again before all is right.

Likewise, when I speak of the “poor” in this context, I mean those who are not similarly entitled to riches by virtue of birth or connection. Odd that these should be called the “entitlement class.” The reverse would seem more apt.

Poor folks often don't actually go bankrupt. They may just get stuck in a cycle of poverty that restricts their lives, but they may not have the luxury of time, money or knowledge to do a proper bankruptcy, or even know it's possible, so they never clear their debt.

In fact, we've done to this class of people something the rich would never tolerate: We've taken education, the primary longshot investment they could make that might raise them out of their poverty, and written it into law that if this longshot fails, they may not clear their education debt. We would never do that to a rich person's longshot. And why? Because we want to encourage entrepreneurs, I'm told. But we don't want to encourage people to invest in education?

Closer to the truth, I fear, is that those people buying the lobbyists that write our policies are comfortable that their own kids are going to end up educated, and they really just can't find it within themselves to care about anyone else being educated. In fact, they'd probably rather we have a broad underclass of exploitable poor ready to work at junk jobs, since that offers a direct profit advantage to them. Right now they have to get their ultra-cheap labor from abroad, which means managing at a distance, dealing with foreign governments, and lots of transportation costs.

Just think of the Utopia the US would be for them if only they could achieve real poverty here at home. Once minimum wage is eliminated and overtime regulations are repealed, pay could drop to a level where the rich could afford to offer tons of jobs and get everyone to shut up about unemployment. A job for everyone—maybe two or three, actually, since the pay for any one of them would never be enough. Isn't that what the liberals have wanted? Jobs? Imagine the joy the conservatives would feel in being able to satisfy that request if allowed to do it on their own terms.

And when the rich do need a few educated folks to work for them, they can always import them from other countries. There are plenty who would love to come here, and they'll take lower wages than those in the US because they grew up in a part of the world where the cost of living and of getting an education was lower. In effect, we're now outsourcing education because many of the heirs apparent to our educated jobs have gotten their degrees elsewhere. The fact that the people who are thus educated may not be American citizens is a mere detail, irrelevant to the business. And anyway, a less-talked-about aspect of modern immigration reform is the desire to ease the path to citizenship for these people, so they'll be citizens soon enough.

And, hey, I'm not xenophobic. I don't mind people coming from the outside, especially if they're going to become citizens, commit to living here and invest in our society. But I do mind a great deal using that trend as an excuse not to educate those who are already citizens. Our first responsibility is to them. If education is too expensive or ineffective here, our priority should be to make it cheaper and more effective. We can't treat our existing citizens as expendable just because it's cheaper or easier to fill STEM jobs from the outside.

Now let me come back to risk, because we were talking about education as the big risk. An education is needed for a good job, but it's not a guarantee of a good job. There are lots of people with college degrees working at retail outlets and fast food places. So getting an education is actually a huge risk, and we've allowed Congress to eliminate bankruptcy protection for those whose investment utterly fails. When the money coming in from those low-paid jobs doesn't pay back the loans, there is no escape for them as there would be for the rich when their investments fail.

Nor is that the only risk. There's the day-to-day risk of not having enough food, health care, housing, heat or air conditioning, and so on. The sad truth is that the minimum wage, which many of these people make, is not a living wage. That's also true, by the way, for some making over the minimum wage—say, minimum wage plus a buck. They're still not breaking even either, but they just don't have a catchy title like “minimum wage worker” to describe their plight. They may even be made to feel guilty for not speaking appreciatively about being above the minimum. But really, they're all in the same boat until they're at the level of a living wage.

After all, the minimum wage doesn't measure anything related to anyone's ability to survive, so being above it doesn't really mean one is somehow surviving. It just measures, through its distance from a living wage, how much we as a public are willing to stand by and watch people sink before we finally decide to care. And whether a person works at minimum wage or barely above, if they're not making a living wage, they're still running a daily deficit. Yes, deficit. The “D” word. And although the Republican Congress worries a lot about deficits, they really only worry about public deficits, and only because they themselves might have to pay. They imagine these private deficits are the result of private choices, and they're well-practiced at chiding people about the need to take responsibility for their own actions.

Never mind that these others have taken responsibility. Many got an education. Most work every day. By and large, most folks do their part of what should be our social contract: Be a good citizen, improve yourself, contribute the skills and strength you have to the general good. That should be enough that society should treat you as one of its own without insulting you by suggesting in the end that you're asking for a handout or not taking responsibility. If anyone is not taking responsibility, it's Society. We asked these people to do these things. They did what they were asked and are now beaten up for it and told they must suffer.

Implicit in our request that people work full-time should be that they be given work that will support them. Implicit in our request that people educate themselves should be that we'll find something to do with that education. And if some jobs don't require education, let's not treat the people who go that path as if they've disappointed us. Society asks different things of different people, and we need to treat everyone who does their fair share with a certain baseline respect. We've got a ways to go on that.

Meanwhile, back in the real world, the poor are stuck in situations they didn't freely choose. As Adlai Stevenson once aptly summed it up, “A hungry man is not a free man.” In bargaining for a way to survive, there is huge inequality of bargaining power. That, in turn, makes a mockery of any notion that the poor really elect their fate, and calls into question whether it's their responsibility to fix problems they didn't create.

As Adam Smith put it in his book The Wealth of Nations:

“It is not, however, difficult to foresee which of the two parties must, upon all ordinary occasions, have the advantage in the dispute, and force the other into a compliance with their terms. The masters, being fewer in number, can combine much more easily; and the law, besides, authorizes, or at least does not prohibit their combinations, while it prohibits those of the workmen. We have no acts of parliament against combining to lower the price of work; but many against combining to raise it. In all such disputes the masters can hold out much longer. A landlord, a farmer, a master manufacturer, a merchant, though they did not employ a single workman, could generally live a year or two upon the stocks which they have already acquired. Many workmen could not subsist a week, few could subsist a month, and scarce any a year without employment. In the long run the workman may be as necessary to his master as his master is to him; but the necessity is not so immediate.”

It therefore falls to those of us who are not economically disempowered to speak in support of those who are, to acknowledge the legitimacy of their plight, and to stop insulting them by saying they should take responsibility for their actions. Of necessity, they take responsibility each and every day. They're not failing us. We're failing them. And it's time we took some responsibility.


Author's Note: If you got value from this post, please “Share” it.

This first part of a 3-part series was originally published August 4, 2013 at Open Salon, where I wrote under my own name, Kent Pitman.

The other articles in this series were:
Lien Times for Startups (part 2)
The Overtime Loophole (part 3)

The Adam Smith quote was borrowed from the Wikipedia entry, “Inequality of Bargaining Power.” It's quite a fascinating entry full of very instructive and powerfully-expressed quotations. If you have the time, I recommend that article as important reading.

Tags (from Open Salon): politics, social contract, bankruptcy, risk, reward, responsibility, entitlement, minimum wage, living wage, education, investment, inequity, walton, wal-mart, rich, poor, wealthy, class, entrepreneur, entrepreneurship, failure, success, reward, punishment, penalty, poverty, cycle of poverty, immigration, xenophobe, xenophobia, xenophobic, jobs, employment, unemployment, stem, college, degree, tuition, cost of education, deficit, congress, bargaining, inequality of bargaining, duress, adam smith, adlai stevenson