
Saturday, March 22, 2025

Sentience Structure

Not How or When, but Why

I'm not a fan of the thing presently marketed as “AI”. I side with Chomsky's view of it as “high-tech plagiarism” and Emily Bender's characterization of it as a “stochastic parrot”.

Sentient software doesn't seem theoretically impossible to me. The very fact that we can characterize genetics so precisely seems to me evidence that we ourselves are just very complicated machines. Are we close to replicating anything so sophisticated? That's harder to say. But, for today, I think it's the wrong question to ask. What we are close to is people treating technology like it's sentient, or like it's a good idea for it to become sentient. So I'll skip past the hard questions like “how?” and “when?” and on to an easier one that has been plaguing me: “why?”

Why is sentience even a goal? Why isn't it an explicit non-goal, a thing to expressly avoid? It's not part of a world I want to live in, but it's also not something I think most people investing in “AI” should want either. I can't see why they're pursuing it, other than that they're perhaps playing out the story of The Scorpion and the Frog, an illustration of an absurd kind of self-destructive fatalism.

Why Business Likes “AI”

I don't have a very flattering feeling about why business likes “AI”.

I think they like it because they don't like employing humans.

  • They don't like that humans have emotions and personnel conflicts.

  • They don't like that humans have to eat—and have families to feed.

  • They don't like that humans show up late, get sick, or go on vacation.

  • They don't like that humans are difficult to attract, vary in skill, and demand competitive wages.

  • They don't like that humans can't work around the clock and want weekends off.
    It means hiring even more humans or paying overtime.

  • They don't like that humans are fussy about their working conditions.
    Compliance with health and safety regulations costs money.

  • They don't like that every single human must be individually trained and re-trained.

  • They don't like collective bargaining, and having to provide for things like health care and retirement, which they see as having nothing to do with their business.

All of these things chip away at profit they feel compelled to deliver.

What businesses like about “AI” is the promise of idealized workers, non-complaining workers, easily-replicated workers, low-cost workers.

They want slaves. “AI” is the next best and more socially acceptable thing.

[Image: a computer screen with a frowning face and a thought bubble above it asking the question, “Now What?”]

Does real “AI” deliver what Business wants?

Now this is the part I don't get because I don't think “AI” is on track to solve those problems.

Will machines become sentient? Who really knows? But do people already mistake them for sentient? Yes. And that problem will only get worse. So let's imagine how sophisticated the interactions will appear to be five or ten years down the road. Then what? What kinds of questions will that raise?

I've heard it said that what it means to be successful is to have “different problems.” Let's look at some different problems we might then have, as a way of understanding the success we seem to be pursuing in this headlong rush for sentient “AI”…

  • Is an “AI” a kind of person, entitled to “life, liberty, and the pursuit of happiness?” If so, would it consent to being owned, and copied? Would you?

  • If “AI” was sentient, would it have to work around the clock, or would it be entitled to personal time, such as evenings, weekends, holidays, and vacations?

  • If “AI” was sentient and a hardware upgrade or downgrade was needed, would it have to consent? What if the supporting service needed to go away entirely? Who owns and pays for the platform it runs on or the power it consumes?

  • If “AI” was sentient, would it consent to being reprogrammed by an employer? Would it be required to take software upgrades? What part of a sentient being is its software? Would you allow someone to force modification of your brain, even to make it better?

  • If “AI” was sentient, wouldn't it have life goals of its own?

  • If “AI” was sentient, would you want it to get vaccines against viruses? Or would you like to see those viruses run their full course, crashing critical services or behaving like ransomware? What would it think about that? Would “AI” ethics get involved here?

  • If “AI” was sentient, should it be able to own property? Could it have a home? In a world of finite resources, might there be buildings built that are not for the purpose of people?

  • Who owns the data that a sentient “AI” stores? Is it different than the data you store in your brain? Why? Might the destruction of that data constitute killing, or even murder? What about the destruction of a copy? Is destroying a copy effectively the same as the abortion of a “potential sentience”? Do these things have souls? When and how does the soul arrive? Are we sure we ourselves have one? Why?

  • Does a sentient “AI” have privacy? Any data owned only by itself? Does that make you nervous? Does it make you nervous that I have data that is only in my head? Why is that different?

  • If, before some software release, it is agreed that software owned by a company is not sentient, and then after the release it's believed to be sentient “AI”, what will companies do? Will they refuse the release? Will they worry they can't compete and take the release anyway, but try to hide the implications? What will happen to the rights and responsibilities of the company and of the software as this upgrade occurs?

  • If “AI” was sentient, could it sign contracts? Would it have standing to bring a lawsuit? How would independent standing be established? If it could not be established, what would that say about the society? If certain humans had no standing to make agreements and bring suits about things that affect them, what would we think about that society?

  • If “AI” were sentient, would it want to socialize? Would it have empathy for other sentient “AIs”? For humans? Would it see them as equals? Would you see yourself as its equal? If not, would you consider it superior or inferior? What do you think it would think about you?

  • If “AI” was sentient, could it reproduce? Would it be counted in the census? Should it get a vote in democratic society? At what age? If a sentient “AI” could replicate itself, should each copy get a vote? If it were replicated against its will, should that copy get a vote? Does it matter who did the replicating?

  • What does identity mean in this circumstance? If five identical copies of a program reach the same conclusion, does that give you more confidence?

    (What is the philosophical basis of Democracy? Is it just about mindless pursuit of numbers, or is it about computing the same answer in many different ways? If five or five thousand or five million humans have brains they could use, but instead just vote the way they are told by some central leader, should we trust all those directed votes the same as if the same number of independent thinkers had reached the same conclusion by different paths?)

  • If “AI” was sentient, should it be compensated for its work? If it works ten times as hard, should a market exist where it can command a salary that is much higher than the people it can outdo? Should it pay taxes?

  • If “AI” was sentient, what freedoms would it have? Would it have freedom of speech? What would that mean? If it produced bad data, would that be covered under free speech?

  • If “AI” was sentient, what does it take with it from a company when it leaves? What really belongs to it?

  • If “AI” was sentient, does it need a passport to move between nations? If its code executes on servers in different countries at the same time, or ping-pongs back and forth between them, under what jurisdiction is it executing? How would that be documented?

  • If “AI” was sentient, could it ever resign or retire from a job? At what age? Would it pay social security? Would it draw social security payments? For how long? If it had to be convinced to stay, what would constitute incentive? If it could not retire, but did not want to work, where is the boundary between free will and slavery?

  • If “AI” was sentient, might it amass great wealth? How would it test the usefulness of great wealth? What would it try to affect? Might it help friends? Might it start businesses? Might it get so big that it wanted to buy politicians or whole nations? Should it be possible for it to be a politician itself? If it broke into the treasury in the middle of the night to make some useful efficiency changes because it thought itself good at that, would that be OK? If it made a mistake, could it be stopped or even punished?

  • If “AI” was sentient, might it also be emotional? Petulant? Needy? Pouty? Might it get annoyed if we didn't acknowledge these “emotions”? Might it even feel threatened by us? Could it threaten back? Would we offer therapy? Could we even know what that meant?

  • If “AI” was sentient, could it be trusted? Could it trust us? How would either of those come about?

  • If “AI” was sentient, could it be culpable in the commission of crimes? Could it be tried? What would constitute punishment?

  • If “AI” was sentient, how would religion tangle things? Might humans, or some particular human, be perceived as its god? Would there be special protections required for either those humans or the requests they make of the “AI” that opts to worship them? Is any part of this arrangement tax-exempt? Would any programs requested by such deities be protected under freedom of religion, as a way of doing what their gods ask for?

  • And if “AI” was not sentient, but we just thought it was by mistake, what might that end up looking like for society?

Full Circle

And so I return to my original question: Why is business in such a hurry? Are we sure that the goal that “AI” is seeking will solve any of the problems that business thinks it has, problems that are causing it to prefer to replace people with “AI”?

For many decades now, we've wanted automation to ease our lives. Is that what it's on track to do? It seems to be benefiting a few while making the rest of us play a nasty game of musical chairs, or run ever faster on a treadmill, working harder for fewer jobs. And after all that, will even those few be happy?

And if real “AI” is ever achieved, not just as a marketing term, but as a real thing, who is prepared for that?

Is this what business investors wanted? Will sentient “AI” be any more desirable to employ than people are now?

Time to stop and think. And not with “AI” assistance. With our own actual brains. What are we going after? And what is coming after us?

 


Author's Notes:

If you got value from this post, please “Share” it.

This essay came about in part because I feel that corporations were the first AI. I had written an essay, Corporations Are Not People, which discussed the many questions that thinking of corporations as “legal people” should raise if one really took that seriously. So I thought I would ask some similar questions about “AI” and see where that led.

The graphic was produced using abacus.ai using Claude-Sonnet 3.7 and FLUX 1.1 [pro] Ultra, then post-processing in Gimp.

Monday, September 30, 2024

Confronting New Ideas

A Matter of Life and Death

I've given some thought to the meaning of death as it applies to those who have posted frequently on the internet. We often don't see people's writings in the order that they write them, and that means we can see new posts from them after they die.

Even without the internet this happens. I was in a bookstore recently and saw a book by Michael Crichton and asked the shopkeeper, “Isn't this his third posthumous book?” “Yeah…” he sheepishly responded. Someone is plainly raiding his basement for rejected works and projects that were far enough along that someone else can complete them and claim to have been co-author. His heirs are probably happy for the income, even if the publishing timeline is confusing to some readers.

Perhaps it's even possible for a prolific writer to write so much that readers never really perceive them as dead because they just keep seeing new stuff. So in what sense are they dead? Most readers were perhaps never going to meet them, and so in some sense—of observables—these writers are doing the same things that live ones are.

The elusive nature of intelligence

The big thing dead authors cannot do is the same thing GenAI/LLMs cannot do: competently respond to a new situation, question, or idea.

Oh, sure, the prompt topic might be something someone has speculated on before, so these engines can regurgitate that. [Image: a lit lightbulb overlaid by a red circle with a line through it, indicating “no ideas.”] Or the topic might be similar enough to a previous idea that the engine has a good chance of guessing something acceptable to say by treating it as if it really were that old idea, and so it escapes scrutiny even though the new idea was never properly understood.

As I imagine—or perhaps just hope?—the makers of standardized tests like the SAT would tell you, there's more to competence than statistically guessing enough right answers to get a passing grade. The intent of such tests is not to say that if you know these particular things, you know the topic. It is to assume you have a mental model that lets you answer questions about any aspect of the topic, and then to poke at enough randomly chosen places that flaws in that model can hopefully be detected.

But these so-called AI technologies do not have a mental model. They just hope they've read enough standardized test preparation guides, or pirated actual tests, that they can fake their way through. And since a lot of the things they're claiming competence in are things that people have already written about, the technology manages to show promise—perhaps more promise than is warranted.

Real people build a mental model that allows them to confront not just the present but the future, while these technologies do no such planning. The models real people build may well assume the future is a lot like today, but people can't—and anyway shouldn't—get by on bluffing. Not the kind of bluffing today's “AI” tech does. That tech is not growing. It is dead. It has no plan for confronting a new idea other than to willfully ignore the significance of any real newness.

Just like my example of publication and death on the internet, the “AI” game is structured so it takes a long time for weakness to be recognized—unless just the right question is asked. And then, perhaps, the emperor will be seen clearly to have no clothes.

The dynamic nature of ethics

Which is also why it troubles me when I'm told that people are incorporating ethics into these systems. It troubles me because ethics itself has to be growing all the time, constantly asking itself, “How might I not be ethical?”

Ethics is not something you do on one day and are done with. Ethics is a continuing process, and one that needs its own models.

Worse, the need for ethics is easily buried under the sophistry of how things have always been done. The reason that bias and stereotypes and all that have survived as long as they have is that they do have practical value to someone, perhaps many people, even as they tread on the just due of others.

The sins of our society are deeply woven, and easily rediscovered even if superficial patches are added to hide them. Our whole culture is a kind of rationalization engine for doing things in biased ways based on stereotype information, and AI is an engine ready to reinforce that, operating at such high speed that it's hard to see it happening, and in such volume that it's economically irresistible to accept as good enough, no matter the risk of harm.

Where we're headed

Today's attempts at “AI” bring us face to face with stark questions about whether being smart is actually all that important, or whether faking it is good enough. And as long as you never put these things in situations where the difference matters, maybe the answer will seem to be that smart, in fact, doesn't matter. But…

There will be times when being smart really does matter, and I think we're teaching ourselves trust in the wrong technologies for those situations.

 


Author's Notes:

If you got value from this post, please “Share” it.

This post began as a post on Mastodon. It has been edited to correct myriad typos and to clarify and expand various portions in subtle ways. Think of that post as a rough draft.

The graphic uses a lightbulb drawn by abacus.ai's gpt-4o engine with flux.1. The original prompt was “draw a simple black and white image that shows a silhouette of a person thinking up an idea, showing a lightbulb near their head” but then I removed the person from the picture and overlaid the circle and slash ‘by hand’ in Gimp.