
Sunday, May 18, 2025

Unsupervised AI Children

[An image of a construction vehicle operated by a robot. There is a scooper attachment on the front of the vehicle that has scooped up several children. The vehicle is at the edge of a cliff and seems at risk of the robot accidentally or intentionally dropping the children over the edge.]

Recent “AI” hype

Since the introduction of the Large Language Model (LLM), the pace of new tools and technologies has been breathtaking. Those who are not producing such tech are scrambling to figure out how to use it. Literally every day there's something new.

Against this backdrop, Google has recently announced a technology it calls AlphaEvolve, which it summarizes as “a Gemini-powered coding agent for designing advanced algorithms.” According to one of its marketing pages:

“Today, we’re announcing AlphaEvolve, an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas.”

Early Analysis

The effects of such new technologies are hard to predict, but let's start with what's already been written.

In an article in Ars Technica, tech reporter Ryan Whitwam says of the tech:

“When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.”

It's interesting to note that I found this commentary by Whitwam from AlphaEvolve's Wikipedia page, which had already re-summarized what he said as this (bold mine to establish a specific focus):

“its architecture allows it to evaluate code programmatically, reducing reliance on human input and mitigating risks such as hallucinations common in standard LLM outputs.”

Whitwam hadn't actually said “mitigating risks,” though he may have meant it. His more precise language, “improving accuracy,” speaks to a much narrower goal of specific optimization of modeled algorithms, and not to the broader area of risk. These might seem the same, but I don't think they are.

To me—and I'm not a formal expert, just someone who's spent a lifetime thinking about computer tech ethics informally—risk modeling has to include a lot of other things, but most specifically questions of how well the chosen model really captures the real problem to be solved. LLMs give the stagecraft illusion of speaking fluidly about the world itself in natural language terms, and that creates all kinds of risks of simple misunderstanding between people because of the chosen language, as well as failures to capture all parts of the world in the model.

Old ideas dressed up in a new suit

In a post about this tech on LinkedIn, my very thoughtful and rigorously meticulous friend David Reed writes:

“30 years ago, there was a craze in computing about Evolutionary Algorithms. That is, codes that were generated by random modification of the source code structure and tested against an “environment” which was a validation test. It was a heuristic search over source code variations against a “quality” or “performance” measure. Nothing new here at all, IMO, except it is called “AI” now.”
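Reed's description of the classic technique can be sketched concretely. Below is a toy version in Python. It is purely illustrative: the target vector, mutation scheme, and population sizes are my own invented stand-ins, and this shows the generic decades-old evolutionary-search loop he describes, not AlphaEvolve's actual architecture. Candidates are randomly mutated, scored against a fixed “validation test,” and the most promising survive to seed the next generation:

```python
import random

random.seed(0)  # deterministic run for illustration

# The "environment": a validation test the candidates are scored against.
TARGET = [3, 1, 4, 1, 5, 9, 2, 6]

def fitness(candidate):
    # Lower is better: distance from passing the validation test.
    return sum(abs(c - t) for c, t in zip(candidate, TARGET))

def mutate(candidate):
    # Random modification of the candidate's structure.
    out = list(candidate)
    i = random.randrange(len(out))
    out[i] += random.choice([-1, 1])
    return out

def evolve(pop_size=20, generations=500):
    population = [[random.randint(0, 9) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)             # best candidates first
        survivors = population[:pop_size // 2]   # keep the most promising ideas
        children = [mutate(random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
        if fitness(population[0]) == 0:          # a candidate passes the test
            break
    population.sort(key=fitness)
    return population[0]

best = evolve()
print(best, fitness(best))
```

Nothing in this loop requires an LLM; as I read the announcement, what AlphaEvolve adds is replacing blind random mutation with model-proposed code edits, while the evaluate-select-vary cycle stays the same.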

I admit I haven't looked at the tech in detail, but I trust Reed's assertion that the current iteration of the tech is less grandiose than Google's hype suggests—at least for now.

But that doesn't mean more isn't coming. And by more, I don't necessarily mean smarter. But I do mean that it will be irresistible for technologists to turn this tech upon itself and attempt exactly what Google seems to be claiming here: that unsupervised evolutionary learning will soon mean “AI”—in the ‘person’ of LLMs—can think and evolve on its own.

Personally, I'm confused by why people even see it as a good goal, as I discussed in my essay Sentience Structure. You can read that essay if you want the detail, so I won't belabor that point here. I guess it comes down to some combination of a kind of euphoria that some people have over just doing something new combined with a serious commercial pressure to be the one who invents the next killer app.

I just hope it's not literally that—an app that's a killer.

Bootstrapping analysis by analogy

In areas of new thought, I reason by analogy to situations of similar structure in order to derive some sense of what to expect, by observing what happens in analogy space and then projecting back into the real world to what might happen with the analogously situated artifacts. Coincidentally, it's a technique I learned from a paper (MIT AIM-520) written by Pat Winston, head of the MIT AI lab back when I was studying and working there long ago — when what we called “AI” was something different entirely.

Survey of potential analogy spaces

Capitalism

I see capitalism as an optimization engine. But any optimization engine requires boundary conditions in order to not crank out nonsensical solutions. Optimization engines are not "smart" but they do a thing that can be a useful tool in achieving smart behavior.

Adam Smith, who some call the father of modern capitalism, suggested that if you want morality in capitalism, you must encode it in law, that the engine of capitalism will not find it on its own. He predicted that absent such encoding, capitalists would tend toward being tyrants.
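The claim that an optimization engine without boundary conditions cranks out nonsensical solutions can be shown in a few lines. This is a deliberately toy sketch, with all the numbers invented for illustration: the engine maximizes a purely monetary score, and only behaves sensibly once the harm is priced into the objective it actually sees.

```python
# Toy optimization engine: pick the option with the best score.
def best_choice(options, score):
    return max(options, key=score)

options = range(0, 101)          # units of some activity, e.g. barrels extracted

def profit(x):
    return 5 * x                 # the engine's native score: money

def harm(x):
    return 0.08 * x * x          # a real cost the monetary score never sees

# Without boundary conditions, the engine just cranks the dial to the limit.
print(best_choice(options, profit))                         # -> 100

# Encoding the externality into the score changes the "optimal" behavior.
print(best_choice(options, lambda x: profit(x) - harm(x)))  # -> 31
```

Note that the engine itself never got smarter; only the scoring changed. That is the sense in which morality has to be encoded into the rules the optimizer sees, rather than expected to emerge from the optimizer on its own.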

Raising Children

Children are much smarter than some people give them credit for. We sometimes think of kids getting smarter with age or education, but really they gain knowledge and context and, eventually, we hope, empathy. Young children can do brilliant but horrifying things, things that might hurt themselves or others, things we might call sociopathic in adults, for lack of understanding of context and consequence. We try to watch over them as they grow up, helping them grow out of this.

It's why we sometimes try kids differently than adults in court. They may fail to understand the consequences of their actions.

Presuppositions

We in the general public, the existing and future customers of “AI,” are being trained by use of tools like ChatGPT to think of an “AI” as something civil because the conversations we have with them are civil. But with this new tech, all bets are off. It's just going to want to find a shorter path to the goal.

LLM technology has no model of the world at all. It is able to parrot things, to summarize things, to recombine and reformat things, and a few other interesting tricks that combine to give some truly dazzling effects. But it does not know things. Still, for this discussion, let's suspend disbelief and assume that there is some degree of modeling going on in this new chapter of “AI,” at least enough that the system can pursue a better score.

Raising “AI” Children

Capitalism is an example of something that vaguely models the world by assigning dollar values to a great many things. But many of us find ourselves routinely frustrated by capitalism because it seems to behave sociopathically. Capitalists want to keep drilling for oil when it's clear that doing so is going to drive our species extinct, for example. But it's profitable. In other words, the model says this is a better score because the model is monetary. It doesn't measure safety, happiness (or cruelty), sustainability, or a host of other factors unless a dollar score is put on those. The outcome is brutal.

My 2009 essay Fiduciary Duty vs The Three Laws of Robotics discusses in detail why this behavior by corporations is not accidental. But the essence of it is that businesses do the same thing that sociopaths do: they operate without empathy, focusing single-mindedly on themselves and their profit. In people, we call that sociopathy. Since corporations are sometimes called “legal people,” I make the case in the essay that corporations are also “legal sociopaths.”


Young children growing up tend to be very self-focused, too. They can be cruel to one another in play, and grownups need to watch over them to make sure that appropriate boundaries are placed on them. A sense of ethics and personal responsibility does not come overnight, but a huge amount of energy goes into supervising kids before turning them loose on the world.

And so we come to AIs. There is no reason to suspect that they will perform any differently. They need these boundary conditions, these rules of manners and ethics, a sense of personal stake in the world, a sense of relation to others, a reason not to behave cruelly to people. The plan I'm hearing described, however, falls short of that. And that scares me.

I imagine they think this can come later. But this is part of the dance I have come to refer to as Technology's Ethical Two-Step. It has two parts. In the first part, ethics is seen as premature and gets delayed. In the second part, ethics is seen as too late to add retroactively. Some nations have done better than others at regulating emerging technology. The US is not a good example of that. Ethics is something that's seen as spoiling people's fun. Sadly, though, an absence of ethics can spoil more than that.

Intelligence vs Empathy

More intelligence does not imply more empathy. It doesn't even imply empathy at all.

Empathy is something you're wired for, or that you're taught. But “AI” is not wired for it and not taught it. As Adam Smith warned, we must build it in. We should not expect it to be discovered. We need to require it in law and then actually enforce that law; until we do, we should not give “AI” the benefit of the doubt.

Intelligence without empathy ends up just being oblivious, callous, cruel, sociopathic, evil. We need to build “AI” differently, or we need to be far more nervous and defensive about what we expect “AI” that is a product of self-directed learning to do.

Unsupervised AI Children—what could possibly go wrong?

The “AI” technologies we are making right now are children, and the suggestion we're now seeing is that they be left unsupervised. That doesn't work for kids, but at least we don't give kids control of our critical systems. The urgency here is far greater because of the accelerated way that these things are finding themselves in mission-critical situations.

 


Author's Notes:

If you got value from this post, please “Share” it.

You may also enjoy these other essays by me on related topics:

The graphic was created at abacus.ai using RouteLLM (which referred me to GPT-4.1) and rendered by GPT Image. I did post-processing in Gimp to add color and adjust brightness in places.

Sunday, January 22, 2012

Losing the War in a Quiet Room

The Occupy Wall Street (OWS) movement has seemed to tap into a deep-rooted sense of discontent in the American populace over how capitalism has gone wrong. Criticism has not come just from the Left, but also from the Right, as recently discussed on the excellent new MSNBC show Up with Chris Hayes:

[VIDEO TEMPORARILY MISSING FOR TECHNICAL REASONS.
SEND KENT A REMINDER TO FIX THIS.]


The discussion begins with a quote from Newt Gingrich asking “Is capitalism really about the ability of a handful of rich people to manipulate the lives of thousands of other people and walk off with the money, or is that somehow a little bit of a flawed system?” To which Chris Hayes cheerfully responds, “Well, yes, Newt it is.” The discussion that follows is typical of the many thoughtful exchanges that make this show such an absolute “must watch.”

Early in the discussion, Prof. Anne-Marie Slaughter of Princeton University asks “What’s the opposite of ‘predatory capitalism’?” and chuckles about whether that means a kind of “kinder, gentler capitalism.” Alexis McGill Johnson of the American Values Institute frames the issue as a sort of nostalgia for something lost, and David Roberts of Grist opines that “democratic nostalgia is for a set of laws and regulations that used to restrain capitalism; the republican nostalgia seems to be for nicer corporate titans, to an era of public-spirited rich people.” Vincent Warren of the Center for Constitutional Rights questions whether the system has adequate benefit for workers, noting that the only thing workers get out of capitalism is jobs, but they don’t get economic benefits or any control of the direction companies take.

It all begs the question: What changed?

My immediate thought on that question came from having listened to the book The Betrayal of American Prosperity by Clyde Prestowitz. In the book, Prestowitz offers the following account that struck me as simply extraordinary:

Excerpt from pages 198-199 of
The Betrayal of American Prosperity by Clyde Prestowitz

THE HARVARD BUSINESS SCHOOL CREED

At the founding of Harvard Business School in 1908, Dean Edwin Gay said the purpose of the school was to teach business leaders how to “make a decent profit by doing decent business.” That was McCabe’s creed and what thousands of future business leaders learned at Harvard for many years. But in 1970, the University of Chicago’s Milton Friedman sounded a different note. Said he, “Few trends could so much undermine our free society as the acceptance by corporate executives of social responsibility other than to make as much money for shareholders as possible.” This tune was quickly picked up and elaborated by Harvard’s professors and especially by Michael Jensen, who became the dominant American voice on corporate architecture and the proper role of a board of directors and a CEO.

In a hugely influential 1976 paper and subsequently, Jensen propagated Friedman’s doctrine of shareholder sovereignty and of increased returns to shareholders as the sole purpose of the CEO. His argument was grounded in the view that the shareholder is the corporation’s final risk bearer and therefore also its final claimant. He added the notion that, as agents of shareholders, the corporation’s managers do not necessarily share the interests of the shareholders. Indeed, the managers and the shareholders may be at war because the way for the CEO to maximize his/her private gain may be at odds with maximizing shareholder gains. For instance, a CEO may like corporate jets or want to be part of the society scene, but the costs of such indulgence may be a burden to shareholders. Thus, the central problem is how to align the interests of managers and shareholders and to establish a monitoring mechanism that easily indicates whether the managers are acting properly on behalf of shareholders.

Jensen’s solution was to grant gobs of stock options to CEOs to evaluate their job performance by focusing on the progression of quarterly earnings. This is a single, readily available, objective number upon which a CEO can concentrate all her attention and which the shareholder can readily use to determine whether a CEO is working for him. Jensen emphatically rejected stakeholder theories on the grounds that giving a CEO multiple objectives would be confusing, distracting, and make it impossible in the end to measure performance.

In effect, Prestowitz is noting that this is a recurrence of the old joke

“If you dropped your keys over there,
   why are you looking here?”

“Because the light is better here.”

If I’m hearing him right, Prestowitz is making the bold claim that the reason we stopped caring about people other than shareholders was it was just too messy to do the accounting of worrying about other stakeholders, such as employees, customers, and community. It was administratively simpler and cleaner to only worry about stockholders, and so one day business just quietly decided to do that instead.

Or that was the stated rationale, anyway. Let’s not overlook the outside chance that those pushing for this change fancied themselves the potential later recipients of “gobs of stock options” as CEO of some company operating under the newly proposed rules. No point in mentioning that rationale out loud during the debate when they could stick to the altruistic-sounding story of how this focus on clarity of measurement would just be good for business. “Let's give them gobs of stock” sounds so much more business-like and less self-indulgent than “Let's give ourselves gobs of stock.”

Imagine if we took that “clarity” approach toward our justice system, saying it was too hard to measure justice so why not just measure, let’s say, cost? That wouldn’t fix old-fashioned Justice but it would create a form of NeoJustice that was so much easier to measure, allowing us to be sure we were being successful at it. But to what end?

Really that’s what happened, too. Not with criminal justice but with economic justice. We just let it go, without even knowing it. Without any real notice to or approval by the large community of American citizens affected by the change, American Business just quite literally stopped caring. It’s pretty obvious, at least to me, that this timeline Prestowitz mentions dovetails precisely with the downfall of American society so evident all around us.

A war was fought in a “quiet room” somewhere, without anyone firing a shot, and we’re now living in the aftermath of our unwitting capitulation. No wonder we’re confused about how we got here.


Possible Follow-up Actions

Putting things to right could begin by undoing the Supreme Court ruling in Citizens United, and eliminating from the law any notion of corporate personhood. Senator Bernie Sanders is pushing for a Constitutional amendment doing so. You can sign his petition supporting this amendment.

Another concrete action is to learn about stakeholder theory and start to ask questions about why it’s there and whether we could change it. It was changed before, and it seems to me it could change again. I don’t know the process by which that would happen. But I think it needs to.

Further Reading

The Betrayal of American Prosperity by Clyde Prestowitz covers additional issues, particularly those of US trade policy, in addition to the matters I’ve discussed here. In some ways, this was just a peripheral aspect of his main point. But it’s an excellent book either way and I very much recommend it. I listened to it as an audiobook from audible.com.

A basic overview of some of these issues can be obtained from Wikipedia articles titled “Stakeholder (corporate)” and “stakeholder theory.”

I also highly recommend Naomi Klein’s excellent book The Shock Doctrine: The Rise of Disaster Capitalism, in which Milton Friedman and the Chicago School (a.k.a. the “Chicago Boys”) play a critical role. I listened to it as an audiobook from audible.com.

And, finally, my other articles Fiduciary Duty vs. The Three Laws of Robotics and Sociopaths by Proxy may also shed some additional light on why this all matters.


Author's Note: Originally published January 22, 2012 at Open Salon, where I wrote under my own name, Kent Pitman.

Tags (from Open Salon): ows, wall street, occupy wall street, occupy, legal sociopath, naomi klein, shock doctrine, clyde prestowitz, betrayal of american prosperity, constitutional amendment, amendment, constitution, citizens united, sanders, michael jensen, harvard business school creed, creed, harvard business school, harvard, nostalgia, romney, newt gingrich, gingrich, newt, chris l hayes, christopher l hayes, christopher hayes, chris hayes, grist, david roberts, princeton, anne-marie slaughter, american values institute, alexis mcgill johnson, center for constitutional rights, vincent warren, up with chris hayes, chicago boys, chicago school, milton friedman, friedman, shareholder model, stakeholder model, shareholder theory, stakeholder theory, shareholder, stakeholder, quiet war, quiet room, economic justice, justice, war, fiduciary responsibility, fiduciary duty, corporate, corporation, finance, economics, business, politics, lose, losing, lost, society, war in a quiet room

Wednesday, July 27, 2011

Sociopaths by Proxy

The Center for Media and Democracy (CMD) recently ran an exposé about the American Legislative Exchange Council (ALEC), a back-room coalition of Republican legislators who meet to create “model” legislation that can then be pushed on a state-by-state basis in coordinated fashion. In an open letter, the CMD’s executive director, Lisa Graves, writes:

At an extravagant hotel gilded just before the Great Depression, corporate executives from the tobacco giant R.J. Reynolds, State Farm Insurance, and other corporations were joined by their "task force" co-chairs -- all Republican state legislators -- to approve "model" legislation. They jointly head task forces of what is called the "American Legislative Exchange Council" (ALEC).

There, as the Center for Media and Democracy has learned, these corporate-politician committees secretly voted on bills to rewrite numerous state laws. According to the documents we have posted to ALEC Exposed, corporations vote as equals with elected politicians on these bills. These task forces target legal rules that reach into almost every area of American life: worker and consumer rights, education, the rights of Americans injured or killed by corporations, taxes, health care, immigration, and the quality of the air we breathe and the water we drink.

It is a worrisome marriage of corporations and politicians, which seems to normalize a kind of corruption of the legislative process -- of the democratic process--in a nation of free people where the government is supposed to be of, by, and for the people, not the corporations.

The full sweep of the bills and their implications for America's future, the corporate voting, and the extent of the corporate subsidy of ALEC's legislation laundering all raise substantial questions. These questions should concern all Americans. They go to the heart of the health of our democracy and the direction of our country. When politicians -- no matter their party -- put corporate profits above the real needs of the people who elected them, something has gone very awry.

. . . ALEC apparently ignores Smith's caution that bills and regulations from business must be viewed with the deepest skepticism. In his book, "Wealth of Nations," Smith urged that any law proposed by businessmen "ought always to be listened to with great precaution . . . It comes from an order of men, whose interest is never exactly the same with that of the public, who have generally an interest to deceive and even to oppress the public, and who accordingly have, upon many occasions, both deceived and oppressed it."

One need not look far in the ALEC bills to find reasons to be deeply concerned and skeptical. Take a look for yourself.

In my article Fiduciary Duty vs. The Three Laws of Robotics, I took the position that not only are corporations legal people, but in fact they are “legal sociopaths.” That is, they are by fixed nature incapable of caring about their employees, their customers, or their community except insofar as such caring accidentally maximizes value of the corporation for its stockholders.

I've also argued in the past, as in my 2008 article Election Stratego, that the Republican party is trending toward running strategic configurations of players, who are really just game pieces for other entities coordinating matters behind the scenes. Others have referred to this same phenomenon by talking about puppet governments, shadow governments, or plutocracies. Once the stuff of conspiracy theorists, recent reports and analyses increasingly suggest that the practice of corporations purchasing legislation is becoming a reality. ALEC is only the most recent example. There's the influence of the Family, the Koch Brothers, Grover Norquist, and other people and corporations with seemingly disproportionate interest and power in modern politics.

The Citizens United ruling by the Supreme Court has seemed not only to legitimize these activities, but to ignite a fire in them. They can now operate much more in the open than before. Events we've seen in Wisconsin and in Michigan are just a few prominent examples of increasingly organized attempts that are going on nationwide that seem single-mindedly bent on bringing American workers to their collective knees.

In her recent article Obama fights full-tilt Tea Party crazy, Joan Walsh suggested “the president is dealing with a conscience-free opposition.” Reading this, something clicked in my mind, connecting up this notion I have of corporations as sociopaths, and I realized the cancer has spread: with politicians being bought off by corporations, we not only have corporations acting as sociopaths, we have politicians hell-bent on doing the bidding of those corporations. And if the corporations are, as I've argued, sociopaths, then these all-too-willing servants of the corporations are almost literally “sociopaths by proxy.”

And this is especially bad because government is really the only entity that exists as a counterweight to the forces of business. Government regulation is, by design, capable of regulating industry in order to assure the general welfare. Yet if these businesses are by nature singularly interested in their stockholders' needs and in general obliged not to care about the concerns of other stakeholders (such as their customers, their employees, or the communities in which the corporation resides and operates), then who is to look out for the individual? A single individual is often too small to stand up to a corporation in any test of wills. And with legislative action afoot to systematically dismantle and disempower labor unions and to reduce or eliminate the ability to bring class actions, good old-fashioned government regulation is the last line of defense for the ordinary citizen—protecting, even if imperfectly, against the tendency of business to exploit and oppress populations for monetary gain.

I've heard it suggested that government should do for people only what people cannot do for themselves. But individual citizens cannot keep banks from adopting predatory lending practices. They can't keep oil companies from using unsafe drilling practices. They can't make sure the food we eat is safe. There are a great many protections that government has traditionally seen as its duty to provide, and yet we're watching an organized attempt by certain politicians—in eager service of corporations—to eliminate the FDA, the EPA, and even the newly created Consumer Financial Protection Bureau. They speak of “starving the beast,” but in the end the ones starving if this keeps up will be us, the American citizens.

America is under attack from within by forces that do not have the best interests of American citizens at heart, indeed by entities that have no heart at all—by corporations—legal sociopaths—and their dutiful servants in Washington, the Republican Party. The Republicans fancy themselves leaders, but they are not leading, they're clearly following. If they step out of line, they're harshly dealt with by forces outside of our view or control.

The Democratic Party is not immune to the suggestions of Big Business either, but at least they are not yet moving in 100% lockstep to the tune of their corporate overlords. In spite of some partial influence, many elected Democrats are still advocating strongly on behalf of the common citizen. So at least with the Democrats there is hope.

And let's be clear, I'm not saying that this new class of Republican “leaders” are themselves sociopaths. It's not inconceivable that some are, but let's generously assume not, since it won't change my point. Whether they are themselves sociopaths or just willing proxies for behind-the-scenes sociopaths, it's all the same. America's citizens need and deserve a government of, by and for the people—the real flesh and blood people, the ones the founders of this nation originally wrote the Constitution to protect.


Author's Note: If you got value from this post, please “Share” it.

Originally published July 27, 2011 at Open Salon, where I wrote under my own name, Kent Pitman.

Past Articles by me on Related Topics
To Serve Our Citizens
Fiduciary Duty vs. The Three Laws of Robotics
Teetering on the Brink of Moral Bankruptcy
Hollow Support
Election Stratego

Tags (from Open Salon): politics, legal sociopath, sociopath by proxy, center for media and democracy, cmd, american legislative exchange council, alec, control, power, power grab, protections, dismantling, attack, attack from within, people, we the people, of by and for the people, corporations, corporatism, plutocracy, shadow government, puppet government, puppet state, koch brothers, the family, c street

Monday, February 2, 2009

Fiduciary Duty vs. The Three Laws of Robotics

In our society, those entrusted with control of a corporation are bound by a fiduciary duty to the stockholders. This duty is paramount and cannot be ignored to suit the personal morals or conscience of those who exercise the control; any attempt to follow personal conscience over stockholder rights might potentially be regarded as a breach of fiduciary responsibility.

“A fiduciary must not put himself in a position where his interest and duty conflict.”
   —Wikipedia

As a consequence of this rule, corporations often behave in a way that favors the survival of the company at the expense of individuals. (Although, as Greenspan alluded to in his shocked near-apology in October 2008, there are nuances even within attempts to do well by the company, since issues like short term vs. long term success can matter.) But no matter how you slice it, employees are necessarily way down on the list of concerns that a company has, because a company is worried about its own survival first, not about its employees’ survival. Corporations, by design, care primarily about one thing: themselves and their own survival; all other considerations are secondary.

It’s a curious and controversial aspect of law that corporations are also permitted to operate as legal persons. This gives them some of the rights of human beings, who are sometimes called natural persons to distinguish them from—well—other kinds of persons. For example, legal persons are able to own property, enter into contracts, and be involved as parties to lawsuits.

It seems like almost the stuff of science fiction, having people who are not really people. Humans often express a reasonable and well-placed concern about the concept of human-like entities moving in and among us, but without ethics, morals, or scruples. It’s the reason Isaac Asimov suggested his Three Laws of Robotics, a set of rules he felt should be incorporated (pardon the pun) at a low level in all robots, assuring their ethical participation in society.

The Three Laws of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

   —Isaac Asimov

But, unfortunately, corporations are just very clever robots (with full access to human intelligence but explicitly forbidden to apply human ethics). And there is no notion of Three Laws that applies to corporations.

Indeed, corporations seem in many ways more analogous to human sociopaths, that is, persons exhibiting dissocial personality disorder. Perhaps we could borrow from the metaphor of legal persons and say they are legal sociopaths. Among humans, we generally fear and revile sociopathic behavior. But for some reason we tolerate it in corporations.

According to Wikipedia, the World Health Organization maintains a classification of diseases that describes the disorder this way:

Dissocial Personality Disorder

  1. Callous unconcern for the feelings of others and lack of the capacity for empathy.

  2. Gross and persistent attitude of irresponsibility and disregard for social norms, rules, and obligations.

  3. Incapacity to maintain enduring relationships.

  4. Very low tolerance to frustration and a low threshold for discharge of aggression, including violence.

  5. Incapacity to experience guilt and to profit from experience, particularly punishment.

  6. Marked proneness to blame others or to offer plausible rationalizations for the behavior bringing the subject into conflict.

  7. Persistent irritability.

The WHO’s ICD-10 description notes that this includes amoral, antisocial, asocial, psychopathic, and sociopathic disorders, but not conduct disorders or emotionally unstable personality disorder.

Now I’m not medically trained, but it wouldn’t matter anyway. We’re talking metaphors, and the metaphor is going to be imperfect. I think the high-level point is that this is the set of disorders that isn’t about being compulsively unable to control oneself, but is instead about thoughtfully (some might even say rationally) planning and executing actions that prevailing social norms would normally forbid.

The usual explanation one might expect from a corporation is that the so-called prohibition is in fact not legally forbidden, and therefore is allowed, perhaps even encouraged. (For more on this disturbing line of reasoning, see my essay, “Whatever Should Be, Should Be,” about the perils of the word “should” as a term of specificational requirement.) This fits in perfectly with the item “Gross and persistent attitude of irresponsibility and disregard for social norms, rules, and obligations.” After all, if you don’t believe that social norms constitute rules or obligations, it’s easy to see how “incapacity to experience guilt and to profit from experience” can result.

I sometimes find myself wondering how the world would be different if there were a Three Laws safeguard built into corporations. Something like:

The Three Laws of Corporations

  1. A corporation may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A corporation must obey orders given to it by human beings, except where such orders would conflict with the First Law.

  3. A corporation must protect its own existence as long as such protection does not conflict with the First or Second Law.

It sounds a bit harsh, and in fact I doubt all possible consequences of every action could be so thoroughly worked out. But even a modest start, replacing “human beings” with “its employees,” would be a big step forward over what we have now. That wouldn’t fix everything, but among other things, it would mean that employees could freely contribute to the success of their company knowing that the company had their best interests at heart. In the modern world, that’s not the case. It’s not just that it’s unlikely. It’s that it’s not even allowed by law.

Of course, the more pragmatic among us might suggest the even simpler idea of removing the notion of “legal personhood” from the law in the first place.

Author's Note: Originally published February 2, 2009 at Open Salon, where I wrote under my own name, Kent Pitman.

The graphic was added in June 2025. It was created at abacus.ai using Claude Opus 4 and GPT Image, with substantial post-processing in GIMP to rearrange parts of the resulting image.

See also my related posts: Losing the War in a Quiet Room and Rethinking Mega-Corporations.

Tags (from Open Salon): fiduciary duty, fiduciary responsibility, sociopath, three laws, three laws of robotics, three laws of corporations, corporation, liability, rights, responsibility, legal person, legal people, natural person, natural people, legal personhood, natural personhood