A Matter of Life and Death
I've given some thought to the meaning of death as it applies to those who have posted frequently on the internet. We often don't see people's writings in the order that they write them, and that means we can see new posts from them after they die.
Even without the internet this happens. I was in a bookstore recently and saw a book by Michael Crichton and asked the shopkeeper, “Isn't this his third posthumous book?” “Yeah…” he sheepishly responded. Someone is plainly raiding his basement for rejected works, and for projects that were far enough along that someone else could complete them and claim co-authorship. His heirs are probably happy for the income, even if the publishing timeline is confusing to some readers.
Perhaps it's even possible for a prolific writer to write so much that readers never really perceive them as dead because they just keep seeing new stuff. So in what sense are they dead? Most readers were perhaps never going to meet them, and so in some sense, judged purely by observables, these writers are doing the same things that living ones are.
The elusive nature of intelligence
The big thing dead authors cannot do is the same thing GenAI/LLMs cannot do: competently respond to a new situation, question, or idea.
Oh, sure, the prompt topic might be something someone has speculated on before, so these engines can regurgitate that. Or the topic might be similar enough to a previous idea that the odds of saying something acceptable, just by treating it as if it really were that old idea, are high enough to escape scrutiny, even though the new idea was never properly understood.
As I imagine (or perhaps just hope?) the makers of standardized tests like the SAT would tell you, there's more to competence than statistically guessing enough right answers to get a passing grade. The intent of such tests is not to say that if you know these particular things, you know the topic. It is to assume you have a mental model that lets you answer questions about any aspect of the topic, and then to poke at enough randomly chosen places that flaws in the model can be detected.
But these so-called AI technologies do not have a mental model. They just hope they've read enough standardized test preparation guides, or enough pirated actual tests, that they can fake their way through. And since a lot of the things they claim competence in are things that people have already written about, the technology manages to show promise, perhaps more promise than is warranted.
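To make that distinction concrete, here is a toy sketch of my own. It is nothing like how real LLMs or real tests work; it just illustrates memorization versus modeling. A “memorizer” that has studied a small guide of addition facts passes only when the probes happen to land inside what it studied, while a “modeler” that actually implements addition survives random poking anywhere:

```python
import random

# A "memorizer" can only replay question/answer pairs it has seen.
STUDY_GUIDE = {(a, b): a + b for a in range(10) for b in range(10)}

def memorizer(a, b):
    # Recalls a memorized pair if it can; otherwise bluffs with a guess.
    return STUDY_GUIDE.get((a, b), random.randint(0, 200))

def modeler(a, b):
    # Has an actual model of addition, so any probe lands on solid ground.
    return a + b

def spot_check(answerer, trials=1000):
    # Poke at randomly chosen places, as a standardized test does,
    # hoping to expose flaws in the answerer's model.
    probes = [(random.randint(0, 99), random.randint(0, 99)) for _ in range(trials)]
    return sum(answerer(a, b) == a + b for a, b in probes) / trials

print("memorizer:", spot_check(memorizer))  # near 0: probes rarely match the guide
print("modeler:  ", spot_check(modeler))    # 1.0: the model covers every probe
```

The random spot-check catches the memorizer only because the probes wander outside its study guide; keep the questions close to what it memorized and it looks competent.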
Real people build a mental model that allows them to confront not just the present but the future, while these technologies do no such planning. The models real people build probably assume the future will be a lot like today, but people hopefully can't, and anyway shouldn't, get by on bluffing. Not the kind of bluffing today's “AI” tech does. That tech is not growing. It is dead. It has no plan for confronting a new idea other than to willfully ignore the significance of any real newness.
Just like my example of publication and death on the internet, the “AI” game is structured so it takes a long time for weakness to be recognized—unless just the right question is asked. And then, perhaps, the emperor will be seen clearly to have no clothes.
The dynamic nature of ethics
Which is also why it troubles me when I'm told that people are incorporating ethics into these systems. It troubles me because ethics itself has to be growing all the time, constantly asking itself, “How might I not be ethical?”
Ethics is not something you do on one day and are done with. Ethics is a continuing process, and one that needs its own models.
Worse, the need for ethics is easily buried under the sophistry of how things have always been done. The reason that bias and stereotypes and all that have survived as long as they have is that they do have practical value to someone, perhaps many people, even as they tread on the just due of others.
The sins of our society are deeply woven, and easily rediscovered even if superficial patches are added to hide them. Our whole culture is a kind of rationalization engine for doing things in biased ways based on stereotype information, and AI is an engine ready to reinforce that, operating at such high speed that it's hard to see it happening, and in such volume that it's economically irresistible to accept it as good enough, no matter the risk of harm.
Where we're headed
Today's attempts at “AI” bring us face to face with stark questions about whether being smart is actually all that important, or whether faking it is good enough. And as long as you never put these things in situations where the difference matters, maybe the answer will seem to be that smart, in fact, doesn't matter. But…
There will be times when being smart really does matter, and I think we're teaching ourselves to trust the wrong technologies for those situations.
Author's Notes:
If you got value from this post, please “Share” it.
This post began as a post on Mastodon. It has been edited to correct myriad typos and to clarify and expand various portions in subtle ways. Think of that post as a rough draft.
The graphic uses a lightbulb drawn by abacus.ai's gpt-4o engine with flux.1. The original prompt was “draw a simple black and white image that shows a silhouette of a person thinking up an idea, showing a lightbulb near their head” but then I removed the person from the picture and overlaid the circle and slash ‘by hand’ in Gimp.