
Sunday, May 18, 2025

Unsupervised AI Children

[An image of a construction vehicle operated by a robot. There is a scooper attachment on the front of the vehicle that has scooped up several children. The vehicle is at the edge of a cliff and seems at risk of the robot accidentally or intentionally dropping the children over the edge.]

Recent “AI” hype

Since the introduction of the Large Language Model (LLM), the pace of new tools and technologies has been breathtaking. Those who are not producing such tech are scrambling to figure out how to use it. Literally every day there's something new.

Against this backdrop, Google has recently announced a technology it calls AlphaEvolve, which it summarizes as “a Gemini-powered coding agent for designing advanced algorithms.” According to one of its marketing pages:

“Today, we’re announcing AlphaEvolve, an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas.”

Early Analysis

The effects of such new technologies are hard to predict, but let's start with what's already been written.

In an article in Ars Technica, tech reporter Ryan Whitwam says of the tech:

“When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.”

It's interesting to note that I found this commentary by Whitwam from AlphaEvolve's Wikipedia page, which had already re-summarized what he said as this (bold mine to establish a specific focus):

“its architecture allows it to evaluate code programmatically, reducing reliance on human input and mitigating risks such as hallucinations common in standard LLM outputs.”

Whitwam hadn't actually said “mitigating risks,” though he may have meant it. His more precise language, “improving accuracy,” speaks to a much narrower goal of specific optimization of modeled algorithms, and not to the broader area of risk. These might seem the same, but I don't think they are.

To me—and I'm not a formal expert, just someone who's spent a lifetime thinking about computer tech ethics informally—risk modeling has to include a lot of other things, most importantly questions of how well the chosen model really captures the real problem to be solved. LLMs give the stagecraft illusion of speaking fluidly about the world itself in natural language terms. That creates all kinds of risks: simple misunderstandings between people because of the chosen language, as well as failures to capture all parts of the world in the model.

Old ideas dressed up in a new suit

In a post about this tech on LinkedIn, my very thoughtful and rigorously meticulous friend David Reed writes:

“30 years ago, there was a craze in computing about Evolutionary Algorithms. That is, codes that were generated by random modification of the source code structure and tested against an ‘environment’ which was a validation test. It was a heuristic search over source code variations against a ‘quality’ or ‘performance’ measure. Nothing new here at all, IMO, except it is called ‘AI’ now.”
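Reed's description is easy to make concrete. The following toy sketch is my own construction, not AlphaEvolve's actual machinery: it evolves a string toward a target phrase by random modification, scored against a validation test, keeping whichever candidate the “environment” rates best.

```python
import random

def evolve(fitness, mutate, seed, generations=200, population=20):
    """Keep the best candidate seen so far; breed variants by random mutation."""
    best = seed
    for _ in range(generations):
        # Random modification of the "source" (here, a string), as Reed describes.
        candidates = [mutate(best) for _ in range(population)]
        # The "environment": a validation test that scores each variant.
        best = max(candidates + [best], key=fitness)
    return best

# Toy problem: evolve an arbitrary starting string toward a target phrase.
TARGET = "hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    # Quality measure: number of characters already correct.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    # Heuristic search step: change one character at random.
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

random.seed(0)
result = evolve(fitness, mutate, seed="x" * len(TARGET))
print(result)  # typically converges to the target phrase
```

The search never understands what “hello world” means; it just climbs a score. That is the heart of Reed's point, and of mine below.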

I admit I haven't looked at the tech in detail, but I trust Reed's assertion that the current iteration of the tech is largely less grandiose than Google's hype suggests—at least for now.

But that doesn't mean more isn't coming. And by more, I don't necessarily mean smarter. But I do mean that it will be irresistible for technologists to turn this tech upon itself and try exactly what Google seems to be angling to claim here: that unsupervised evolutionary learning will soon mean “AIs”—in the ‘person’ of LLMs—can think and evolve on their own.

Personally, I'm confused about why people even see this as a good goal, as I discussed in my essay Sentience Structure. You can read that essay if you want the detail, so I won't belabor the point here. I guess it comes down to some combination of the euphoria some people feel over just doing something new and the serious commercial pressure to be the one who invents the next killer app.

I just hope it's not literally that—an app that's a killer.

Bootstrapping analysis by analogy

In areas of new thought, I reason by analogy to situations of similar structure in order to derive some sense of what to expect, by observing what happens in analogy space and then projecting back into the real world to what might happen with the analogously situated artifacts. Coincidentally, it's a technique I learned from a paper (MIT AIM-520) written by Pat Winston, head of the MIT AI lab back when I was studying and working there long ago — when what we called “AI” was something different entirely.

Survey of potential analogy spaces

Capitalism

I see capitalism as an optimization engine. But any optimization engine requires boundary conditions in order not to crank out nonsensical solutions. Optimization engines are not “smart,” but they do a thing that can be a useful tool in achieving smart behavior.
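To make the boundary-condition point concrete, here is a tiny sketch of my own (nothing here comes from any source; the numbers and names are invented for illustration). The same score-maximizer gives a sensible answer only when a rule is imposed from outside it:

```python
# Toy illustration: an optimization engine maximizing a single score picks a
# nonsensical extreme unless a boundary condition is encoded into the search.

def best_plan(score, candidates, feasible=lambda plan: True):
    """Return the highest-scoring plan among those the constraint allows."""
    return max((p for p in candidates if feasible(p)), key=score)

# The score is purely monetary: revenue scales with hours worked, and the
# model says nothing at all about worker welfare.
plans = [{"hours": h} for h in range(1, 25)]
revenue = lambda plan: 100 * plan["hours"]

unconstrained = best_plan(revenue, plans)
constrained = best_plan(revenue, plans, feasible=lambda p: p["hours"] <= 8)

print(unconstrained["hours"])  # 24 -- the engine happily works people around the clock
print(constrained["hours"])    # 8 -- the limit comes from the encoded rule, not the engine
```

The engine itself never becomes kinder; only the externally imposed constraint changes the outcome. That is exactly the shape of Adam Smith's point below.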

Adam Smith, who some call the father of modern capitalism, suggested that if you want morality in capitalism, you must encode it in law, that the engine of capitalism will not find it on its own. He predicted that absent such encoding, capitalists would tend toward being tyrants.

Raising Children

Children are much smarter than some people give them credit for. We sometimes think of kids getting smarter with age or education, but really they gain knowledge and context and, eventually, we hope, empathy. Lacking an understanding of context and consequence, young children can do brilliant but horrifying things, things that might hurt themselves or others, things we would call sociopathic in adults. We try to watch over them as they grow up, helping them grow out of this.

It's why courts sometimes try kids differently than adults: they may fail to understand the consequences of their actions.

Presuppositions

We in the general public, the existing and future customers of “AI,” are being trained by tools like ChatGPT to think of an “AI” as something civil, because the conversations we have with them are civil. But with this new tech, all bets are off. It's just going to want to find a shorter path to the goal.

LLM technology has no model of the world at all. It is able to parrot things, to summarize things, to recombine and reformat things, and a few other interesting tricks that combine to give some truly dazzling effects. But it does not know things. Still, for this discussion, let's suspend disbelief and assume that there is some degree of modeling going on in this new chapter of “AI”: a model the system will exploit anywhere it thinks it can improve its score.

Raising “AI” Children

Capitalism is an example of something that vaguely models the world by assigning dollar values to a great many things. But many of us find ourselves routinely frustrated by capitalism because it seems to behave sociopathically. Capitalists want to keep drilling for oil when it's clear that it is going to drive our species extinct, for example. But it's profitable. In other words, the model says this is a better score because the model is monetary. It doesn't measure safety, happiness (or cruelty), sustainability, or a host of other factors unless a dollar score is put on those. The outcome is brutal.

My 2009 essay Fiduciary Duty vs The Three Laws of Robotics discusses in detail why this behavior by corporations is not accidental. But the essence of it is that businesses do the same thing that sociopaths do: they operate without empathy, focusing single-mindedly on themselves and their profit. In people, we call that sociopathy. Since corporations are sometimes called “legal people,” I make the case in the essay that corporations are also “legal sociopaths.”


Young children growing up tend to be very self-focused, too. They can be cruel to one another in play, and grownups need to watch over them to make sure that appropriate boundaries are placed on them. A sense of ethics and personal responsibility does not come overnight, but a huge amount of energy goes into supervising kids before turning them loose on the world.

And so we come to AIs. There is no reason to suspect that they will perform any differently. They need these boundary conditions, these rules of manners and ethics, a sense of personal stake in the world, a sense of relation to others, a reason not to behave cruelly to people. The plan I'm hearing described, however, falls short of that. And that scares me.

I imagine they think this can come later. But this is part of the dance I have come to refer to as Technology's Ethical Two-Step. It has two parts. In the first part, ethics is seen as premature and gets delayed. In the second part, ethics is seen as too late to add retroactively. Some nations have done better than others at regulating emerging technology. The US is not a good example of that. Ethics is something that's seen as spoiling people's fun. Sadly, though, an absence of ethics can spoil more than that.

Intelligence vs Empathy

More intelligence does not imply more empathy. It doesn't even imply empathy at all.

Empathy is something you're wired for, or that you're taught. But “AI” is not wired for it and not taught it. As Adam Smith warned, we must build it in. We should not expect it to be discovered. We need to require it in law and then proactively enforce that law, or we should not give it the benefit of the doubt.

Intelligence without empathy ends up just being oblivious, callous, cruel, sociopathic, evil. We need to build “AI” differently, or we need to be far more nervous and defensive about what we expect “AI” that is a product of self-directed learning to do.

Unsupervised AI Children—what could possibly go wrong?

The “AI” technologies we are making right now are children, and the suggestion we're now hearing is that they be left unsupervised. That doesn't work for kids, but at least we don't give kids control of our critical systems. The urgency here is far greater because of the accelerated way that these things are finding themselves in mission-critical situations.

 


Author's Notes:

If you got value from this post, please “Share” it.

You may also enjoy these other essays by me on related topics:

The graphic was created at abacus.ai using RouteLLM (which referred me to GPT-4.1) and rendered by GPT Image. I did post-processing in Gimp to add color and adjust brightness in places.

Wednesday, July 27, 2011

Sociopaths by Proxy

The Center for Media and Democracy (CMD) recently ran an exposé about the American Legislative Exchange Council (ALEC), a backroom coalition of Republican legislators who meet to create “model” legislation that can then be pushed on a state-by-state basis in coordinated fashion. In an open letter, the CMD’s executive director, Lisa Graves, writes:

At an extravagant hotel gilded just before the Great Depression, corporate executives from the tobacco giant R.J. Reynolds, State Farm Insurance, and other corporations were joined by their "task force" co-chairs -- all Republican state legislators -- to approve "model" legislation. They jointly head task forces of what is called the "American Legislative Exchange Council" (ALEC).

There, as the Center for Media and Democracy has learned, these corporate-politician committees secretly voted on bills to rewrite numerous state laws. According to the documents we have posted to ALEC Exposed, corporations vote as equals with elected politicians on these bills. These task forces target legal rules that reach into almost every area of American life: worker and consumer rights, education, the rights of Americans injured or killed by corporations, taxes, health care, immigration, and the quality of the air we breathe and the water we drink.

It is a worrisome marriage of corporations and politicians, which seems to normalize a kind of corruption of the legislative process -- of the democratic process--in a nation of free people where the government is supposed to be of, by, and for the people, not the corporations.

The full sweep of the bills and their implications for America's future, the corporate voting, and the extent of the corporate subsidy of ALEC's legislation laundering all raise substantial questions. These questions should concern all Americans. They go to the heart of the health of our democracy and the direction of our country. When politicians -- no matter their party -- put corporate profits above the real needs of the people who elected them, something has gone very awry.

. . . ALEC apparently ignores Smith's caution that bills and regulations from business must be viewed with the deepest skepticism. In his book, "Wealth of Nations," Smith urged that any law proposed by businessmen "ought always to be listened to with great precaution . . . It comes from an order of men, whose interest is never exactly the same with that of the public, who have generally an interest to deceive and even to oppress the public, and who accordingly have, upon many occasions, both deceived and oppressed it."

One need not look far in the ALEC bills to find reasons to be deeply concerned and skeptical. Take a look for yourself.

In my article Fiduciary Duty vs. The Three Laws of Robotics, I took the position that not only are corporations legal people, but in fact they are “legal sociopaths.” That is, they are by fixed nature incapable of caring about their employees, their customers, or their community except insofar as such caring accidentally maximizes value of the corporation for its stockholders.

I've also argued in the past, as in my 2008 article Election Stratego, that the Republican party is trending toward running strategic configurations of players, who are really just game pieces for other entities coordinating matters behind the scenes. Others have referred to this same phenomenon by talking about puppet governments, shadow governments, or plutocracies. Once the stuff of conspiracy theories, the practice of corporations purchasing legislation seems, according to recent reports and analyses, to be increasingly becoming a reality. ALEC is only the most recent example. There's the influence of the Family, the Koch Brothers, Grover Norquist, and other people and corporations with seemingly disproportionate interest and power in modern politics.

The Citizens United ruling by the Supreme Court has seemed not only to legitimize these activities, but to ignite a fire in them. They can now operate much more in the open than before. Events we've seen in Wisconsin and in Michigan are just a few prominent examples of increasingly organized attempts that are going on nationwide that seem single-mindedly bent on bringing American workers to their collective knees.

In her recent article Obama fights full-tilt Tea Party crazy, Joan Walsh suggested “the president is dealing with a conscience-free opposition.” Reading this, something clicked in my mind, connecting up this notion I have of corporations as sociopaths, and I realized the cancer has spread: because politicians are being bought off by corporations, we not only have corporations acting as sociopaths, we have politicians hell-bent on doing the bidding of those corporations. And if the corporations are, as I've argued, sociopaths, then these all-too-willing servants of the corporations are almost literally “sociopaths by proxy.”

And this is especially bad because government is really the only entity that exists as a counterweight to the forces of business. Government regulation is, by design, capable of regulating industry in order to assure the general welfare. Yet if these businesses are by nature singularly interested in their stockholders' needs and in general obliged not to care about the concerns of other stakeholders (such as their customers, their employees, or the communities in which the corporation resides and operates), then who is to look out for the individual? A single individual is often too small to stand up to a corporation in any test of wills. And with legislative action afoot to systematically dismantle and disempower labor unions and to reduce or eliminate the ability to bring class actions, good old-fashioned government regulation is the last line of defense for the ordinary citizen—protecting, even if imperfectly, against the tendency of business to exploit and oppress populations for monetary gain.

I've heard it suggested that government should do for people only what people cannot do for themselves. But individual citizens cannot keep banks from adopting predatory lending practices. They can't keep oil companies from using unsafe drilling practices. They can't make sure the food we eat is safe. There are a great many protections that government has traditionally seen it as its duty to provide, and yet we're watching an organized attempt by certain politicians—in eager service of corporations—to eliminate the FDA, the EPA, and even the newly created Consumer Financial Protection Bureau. They speak of “starving the beast,” but if this keeps up, the ones starving in the end will be us, the American citizens.

America is under attack from within by forces that do not have the best interests of American citizens at heart, indeed by entities that have no heart at all—by corporations—legal sociopaths—and their dutiful servants in Washington, the Republican Party. The Republicans fancy themselves leaders, but they are not leading, they're clearly following. If they step out of line, they're harshly dealt with by forces outside of our view or control.

The Democratic Party is not immune to the suggestions of Big Business either, but at least they are not yet moving in 100% lockstep to the tune of their corporate overlords. In spite of some partial influence, many elected Democrats are still advocating strongly on behalf of the common citizen. So at least with the Democrats there is hope.

And let's be clear, I'm not saying that this new class of Republican “leaders” are themselves sociopaths. It's not inconceivable that some are, but let's generously assume not, since it won't change my point. Whether they are themselves sociopaths or just willing proxies for behind-the-scenes sociopaths, it's all the same. America's citizens need and deserve a government of, by, and for the people—the real flesh and blood people, the ones the founders of this nation originally wrote the Constitution to protect.


Author's Note: If you got value from this post, please “Share” it.

Originally published July 27, 2011 at Open Salon, where I wrote under my own name, Kent Pitman.

Past Articles by me on Related Topics
To Serve Our Citizens
Fiduciary Duty vs. The Three Laws of Robotics
Teetering on the Brink of Moral Bankruptcy
Hollow Support
Election Stratego

Tags (from Open Salon): politics, legal sociopath, sociopath by proxy, center for media and democracy, cmd, american legislative exchange council, alec, control, power, power grab, protections, dismantling, attack, attack from within, people, we the people, of by and for the people, corporations, corporatism, plutocracy, shadow government, puppet government, puppet state, koch brothers, the family, c street