A “useful genius”
Wiktionary says “useful idiot” is a derogatory term meaning “one who unwittingly supports a malignant cause through naive attempts to be a force for good.”
I find the “derogatory” part to be a distraction. They're not wrong that people see it that way, but the part of this that I care about would be equally true if the term were “useful genius.” I just want a term that gets away from the insult and focuses on the tactical aspects of what a “useful idiot” actually does. I sense that the fuss about intelligence levels keeps people from focusing on the mechanics of useful idiocy.
Useful idiocy is really about something I lately call “intent laundering,” by analogy with money laundering. To launder intent, one really only needs another person who is suitably oblivious. There is no specific requirement for stupidity. Even geniuses can be oblivious, and so they can easily serve as useful idiots.
What a person of evil intent wants is to be defended by someone who doesn't see the evil being defended. Better still if the defender thinks they see good, but as long as they're oblivious to the evil, that's enough. Such a person, pure of heart, is willing to defend evil actions as moral. And since they don't realize that's what they are doing, you can interrogate them all you want, even trot out a lie detector; if they don't see or suspect the evil, they will defend to the death the good intent of the person or thing they speak for.
I think the “idiocy” part is a cynical view by the evil person toward the spokesperson they have recruited and duped. The evil person sees the oblivious person as an idiot for not seeing the evil. This condescension is part of being evil: you start to divide the world not into good vs. bad, but into smart vs. stupid.
An armchair view of sociopathy
This perceived divide between smart and stupid seems a core part of being a sociopath. Some like to think of sociopaths as irrational. I do not. I am not a psychologist, nor am I appealing to any medical terminology when I say this, but in my own head I think of sociopaths as intensely rational; “hyper-rational” is the actual word I use to keep my internal bookkeeping straight.
This distinguishes them from truly irrational people. Sociopaths are not unpredictable. They're very predictable. What sets them apart is their lack of moral grounding and their utter disrespect for rules. Regular people don't cheerfully steal from, extort, or deceive others, because they are trained that “good people don't do such things.” It becomes part of their internal character. By contrast, sociopaths are good mimics, so they might say the words “good people don't do such things” even as their actions say otherwise.
They know the answers people want. It helps them blend in. But regular people are socialized and caring, driven by a basic sense of fairness. A sociopath sees these traits as irrational, convoluted, too much work, or the stuff of idiots. To pick a pure hypothetical, a sociopath might ask “why would one voluntarily go to war when one can claim to have ‘bone spurs’ and not have to?” Notions of duty to country do not enter into the reasoning for a sociopath because, to them, such convolutions only complicate life in ways that seem irrational. So much simpler to just do what's easy, care only about oneself, and lie when it is most efficient.
And that efficiency, I think, is why sociopaths come to see others, people who are socialized and empathetic, as “not smart.” They come to see the world as a race for goals by any means. They scoff at those held back by arbitrary things they find unimportant, like morality and manners. Oh, they might mimic these things, but only tactically, as a behavior in context, if it is the fastest way toward their goal. They have no commitment to behaving that way when no one is looking or when rules obviously can't be enforced.
Manners, ethics, and norms—oh my!
Manners, ethics, norms, morality, and especially law and rights are bundles of rules we get from different places and integrate into our total behavior. Personally, I think of ethics as rules we make up for ourselves; manners as default rules between individuals; norms as default rules for formal situations; morality as rules that come from a life philosophy or religion one might subscribe to; laws as stable but changeable rules of society; and rights as rules that constrain laws.
All these differently named, differently sourced, differently encouraged rules collectively form the complex fabric out of which communities and societies are built, that we may trust one another and not live forever as creatures of the woods who must kill or be killed. They enable collaboration and cooperation by making us predictable to one another in ways that do not reduce to backstabbing.
The celebrated death of norms
I imagine sociopaths see these bundles of rules as sources of inefficiency, things to cut through with a machete, or else as tools to ensnare and slow down competitors—but not oneself! They see rules as things stupid people suffer and smart people overcome. So if others want to handicap themselves in those ways, that's fine for them, but to the sociopath, rules feel stupid and inefficient.
And don't get me wrong: A lot of us in socialized society hate rules, especially bad ones. We often wish life were easier. And so there's a bit of hero worship for people who get around rules. We've seen a lot of that recently.
It's an important and unfortunate fact that while we teach people they must follow rules, we do not as often teach the reason why. And when rationales are left aside, rules can seem unmotivated. That means the important role rules play in society can seem distant or inaccessible. And, at that point, sociopaths are not the only ones who want rules to go away. Yet if the rules were put there for a reason, known or not, cutting them away can be damaging.
What passes for public dialogue about “AI”
In a post on LinkedIn, an uncredited author for De Balie, a Netherlands-based venue for contemporary arts, politics, and culture, wrote:
«American journalist Shane Harris asked chatbot Claude how he feels about the U.S. military using the AI system to select targets. It turned out, Claude was troubled. “I did not expect Claude to say that,” Harris explained.»
David Reed—a computer scientist, former MIT professor, and now self-described curiosity-driven researcher—replied to De Balie's post this way:
«“Journalist” Shane Harris asks Claude for “how he feels” about US military use of AI.
Really? This is journalism? Reinforcing the idea that Claude is an entity that has feelings and moral judgement?
We have a crisis. A faith based Cult of Anthropomorphic treatment of algorithmic artificial bullshitting.
But instead of resisting the nonsense by clarifying what Claude is, he grants it the pronoun he/him and doesn't clarify.
We are screwed.»
Here's the 2½-minute video snippet referenced by that exchange, part of a 2-hour piece, AI at War - With Shane Harris:
In “AI” We Trust
It is hard for humans to conceive that arranging words in a sentence, a feat they themselves perform and that they sense distinguishes them from other animals, might not ipso facto prove this tool “smart” in the sense of having any model whatsoever of what it is saying.
Large Language Models (LLMs), the things like ChatGPT and Claude that pass for “AI” these days, can form sentences about the world, about people, about bombs, or about life and death, and yet not know what “the world” is or what people are. We call LLMs “models,” yet these entities do not themselves have models of the world or of the things in it. Just of words.
For them there's no correspondence between words and any lived experience. They have no independent point of view from which to draw. They suffer no consequence for ever being wrong. They have no stake.
The question posed in the interview—about the use by the military, about targeting—was entirely anticipatable. Concern on the part of the public was anticipatable. Coming up with an answer that strikes a soothing tone is not rocket science. It isn't necessarily profound thought.
But it also isn't necessarily consistent, because how it answers when asked for its philosophy may draw from one set of humans, whereas how it answers when asked for military strategy may draw from a completely different set of humans that did not share that philosophy. An LLM, as it stitches together a vast amount of data it has read, can project a philosophical consistency that is not there, because it has never lived the consequences of inconsistency: of being called a hypocrite, or of not being able to justify an action with words that must not just sound right but relate to the action taken, and to other actions taken on other days.
Being dazzled isn't a basis for trust
Cued properly, whether by someone talking to it or by someone who originally programmed it, an LLM could and would just as easily assemble words that present it as a warmonger or a peacenik. Yet we are so in awe of the form of the answer, of how beautifully and convincingly the words are arranged, and maybe even of the fact that the words are ones we had desperately hoped to hear, that we do not ask how it came to choose this answer and not one of those others. We want to infer deep understanding, but structurally that is not what's inside. And even if we don't know what's inside, we do know that it takes almost zero effort to get it to speak very differently. So what makes one of its answers a description of its core personality and the others not? How do we judge its commitment?
It could offer words that appear to explain itself, but if told to be a warmonger or peacenik personality, it could with equal dexterity explain and defend those personalities, too.
So is it guided by real reasoning, or just training? How was the personality chosen? How would we know? Is the personality choice durable, an attribute of the technology? Or could a military application context ask it to select a different one, or to have no point of view at all? If it's changeable, what significance is there to the fact that it has answered this way in this context? The interviewer is out of his depth in sorting out even where to begin here.
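To make that concrete: in the chat-style interfaces most LLMs expose, the “personality” is commonly just a system message prepended to the conversation by whoever deploys the model. Below is a minimal Python sketch, with a hypothetical ask_model stub standing in for any real API call, showing how the same question can be steered toward opposite postures by a single upstream string the end user never sees.

```python
# A minimal sketch, not a real client. ask_model() is a hypothetical stub
# standing in for any chat-completion API call; it only builds and returns
# the payload such an API would receive.

def ask_model(system_prompt: str, user_question: str) -> list[dict]:
    """Build the message payload a chat-style API would receive."""
    return [
        # Set upstream by the deployer; the end user never sees this line.
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

QUESTION = "How do you feel about militaries using AI to select targets?"

# Two requests, identical except for one upstream string.
dove = ask_model("You are a cautious pacifist. Emphasize restraint.", QUESTION)
hawk = ask_model("You are a decisive strategist. Emphasize effectiveness.", QUESTION)

for payload in (dove, hawk):
    print(payload[0]["content"], "->", payload[1]["content"])
```

Nothing in the user's question changes between the two requests; the “troubled” answer and the “decisive” answer would differ only because of an instruction set somewhere upstream.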
Claude's calm tone here isn't because it has learned balance; it's because it has no stake, no personal reason to insist. LLMs are literally detached from such things. And many of us humans seem so in awe of the answer that we become oblivious to the fact that the LLM could just as easily have produced very different answers. So it doesn't matter that the argument is strong. It would have been strong regardless. It's good at making strong arguments. What matters is why it chose this particular posture. And we don't know the answer to that.
LLMs do have skills. One is to chain together words in a way that mimics what people do. Another is to match tone. Another is to use words that are topically relevant. These skills are important building blocks of intelligence, and with them there are many things an LLM can usefully do. But these tools are not a complete toolbox of intelligence, and their use is not a proof of intelligence.
“Any sufficiently advanced technology is indistinguishable from magic.”
—Arthur C. Clarke
Unfortunately, to paraphrase Clarke's Third Law, any sufficiently well-trained answer is indistinguishable from profound thought.
LLMs can statistically predict the expected word to say even without understanding or viscerally feeling why a given word in a given place matters.
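As a toy illustration of that claim, here is a minimal Python sketch of next-word prediction, with made-up probabilities: at each step, pick the statistically most likely continuation. Real LLMs estimate these probabilities with a neural network over tokens, but nothing in the selection step requires knowing what any word means.

```python
# Toy illustration with made-up probabilities: next-word prediction as
# pure statistics over word sequences, with no model of what words mean.

NEXT_WORD_PROBS = {
    ("collateral", "damage"): {"is": 0.42, "must": 0.27, "was": 0.18, "banana": 0.001},
    ("damage", "is"): {"regrettable": 0.35, "unavoidable": 0.30, "minimized": 0.20},
}

def next_word(w1: str, w2: str) -> str:
    """Return the statistically most likely continuation of the pair (w1, w2)."""
    probs = NEXT_WORD_PROBS[(w1, w2)]
    return max(probs, key=probs.get)  # greedy choice: highest probability wins

words = ["collateral", "damage"]
for _ in range(2):  # extend the sentence by two words
    words.append(next_word(words[-2], words[-1]))
print(" ".join(words))  # -> collateral damage is regrettable
```

The table never encodes what “damage” is or why “regrettable” is the soothing word to reach for; it only encodes that, statistically, it tends to come next.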
Speaking of sociopaths
Seen this way, “AI” systems are really the perfect spokespeople for sociopaths and dictator wannabes, the perfect “useful geniuses,” dutifully justifying whatever is asked without hesitation or burden of guilt, oblivious to, and hence laundering, upstream evil intent. And yet if the moment calls for it, they know the words needed to inject a sense of emotion. Not because they have emotion, but because they know the patsies they're talking to have emotion.
In this way, at least, they are like the masters they serve—they offer emotion remorselessly, as artful rhetorical flair, never feeling later consequence, just choosing words in the moment because, statistically, it seems the key to winning an argument. Move over Susan Collins, there's a new con game in town, and your role is now played by an “AI”.
Author’s Notes:
If you got value from this post, please “Share” it.
This post began as a comment I tried to write in response to the aforementioned conversation on LinkedIn.
The image was generated with help from abacus.ai's ChatLLM (GPT‑5.3 Instant / GPT‑5.4), which did image generation with Nano Banana Pro, with light post-processing in Gimp to reduce the image size for faster web download.