Showing posts with label ethics.

Sunday, May 18, 2025

Unsupervised AI Children

[An image of a construction vehicle operated by a robot. There is a scooper attachment on the front of the vehicle that has scooped up several children. The vehicle is at the edge of a cliff and seems at risk of the robot accidentally or intentionally dropping the children over the edge.]

Recent “AI” hype

Since the introduction of the Large Language Model (LLM), the pace of new tools and technologies has been breathtaking. Those who are not producing such tech are scrambling to figure out how to use it. Literally every day there's something new.

Against this backdrop, Google has recently announced a technology it calls AlphaEvolve, which it summarizes as “a Gemini-powered coding agent for designing advanced algorithms.” According to one of its marketing pages:

«Today, we’re announcing AlphaEvolve, an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas.»

Early Analysis

The effects of such new technologies are hard to predict, but let's start with what's already been written.

In an article in Ars Technica, tech reporter Ryan Whitwam says of the tech:

«When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.»

It's interesting to note that I found this commentary by Whitwam via AlphaEvolve's Wikipedia page, which had already re-summarized what he said as follows (bold mine, to establish a specific focus):

«its architecture allows it to evaluate code programmatically, reducing reliance on human input and mitigating risks such as hallucinations common in standard LLM outputs.»

Whitwam hadn't actually said “mitigating risks,” though he may have meant it. His more precise language, “improving accuracy,” speaks to a much narrower goal, the specific optimization of modeled algorithms, and not to the broader area of risk. These might seem the same, but I don't think they are.

To me—and I'm not a formal expert, just someone who's spent a lifetime thinking about computer tech ethics informally—risk modeling has to include a lot of other things, but most specifically questions of how well the chosen model really captures the real problem to be solved. LLMs give the stagecraft illusion of speaking fluidly about the world itself in natural language terms, and that creates all kinds of risks of simple misunderstanding between people because of the chosen language, as well as failures to capture all parts of the world in the model.

Old ideas dressed up in a new suit

In a post about this tech on LinkedIn, my very thoughtful and rigorously meticulous friend David Reed writes:

«30 years ago, there was a craze in computing about Evolutionary Algorithms. That is, codes that were generated by random modification of the source code structure and tested against an “environment” which was a validation test. It was a heuristic search over source code variations against a “quality” or “performance” measure. Nothing new here at all, IMO, except it is called “AI” now.»

I admit I haven't looked at the tech in detail, but I trust Reed's assertion that the current iteration of the tech is less grandiose than Google's hype suggests—at least for now.
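
For readers who never met that older craze, here is a tiny sketch, in Python, of the kind of loop Reed describes: random modification of candidates, scored against a fixed “environment.” Everything in it (the toy target, the mutation scheme, the population size) is an illustrative placeholder of mine, not anything from AlphaEvolve.

    import random

    # A toy evolutionary search in the spirit Reed describes: random
    # variation plus a fitness test, nothing more. All specifics here
    # are illustrative placeholders.
    TARGET = "hello world"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def fitness(candidate):
        # The "environment": count positions that match the target.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        # The "random modification" step: change one character.
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(50)]
    for generation in range(2000):
        population.sort(key=fitness, reverse=True)  # selection
        survivors = population[:25]                 # keep the best half
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
        if fitness(population[0]) == len(TARGET):
            break
    print(population[0])

Nothing in that loop understands words or meanings; it just climbs a score. That is the sense in which, as Reed says, there is nothing new here.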

But that doesn't mean more isn't coming. And by more, I don't necessarily mean smarter. But I do mean that it will be irresistible for technologists to turn this tech upon itself and try exactly what Google seems to want to claim here: that unsupervised evolutionary learning will soon mean “AI”—in the ‘person’ of LLMs—can think and evolve on its own.

Personally, I'm confused about why people even see it as a good goal, as I discussed in my essay Sentience Structure. You can read that essay if you want the detail, so I won't belabor the point here. I guess it comes down to some combination of the euphoria some people feel over just doing something new and serious commercial pressure to be the one who invents the next killer app.

I just hope it's not literally that—an app that's a killer.

Bootstrapping analysis by analogy

In areas of new thought, I reason by analogy to situations of similar structure in order to derive some sense of what to expect, by observing what happens in analogy space and then projecting back into the real world to what might happen with the analogously situated artifacts. Coincidentally, it's a technique I learned from a paper (MIT AIM-520) written by Pat Winston, head of the MIT AI lab back when I was studying and working there long ago — when what we called “AI” was something different entirely.

Survey of potential analogy spaces

Capitalism

I see capitalism as an optimization engine. But any optimization engine requires boundary conditions in order to not crank out nonsensical solutions. Optimization engines are not "smart" but they do a thing that can be a useful tool in achieving smart behavior.
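
To make that concrete, here is a toy illustration, entirely my own, of what a missing boundary condition does to an optimizer. The plans and numbers are invented for the example.

    # Candidate plans, scored on profit; "harm" stands for everything
    # the monetary model fails to measure. All numbers are invented.
    plans = [
        {"name": "strip-mine everything", "profit": 100, "harm": 90},
        {"name": "moderate extraction",   "profit": 60,  "harm": 30},
        {"name": "renewables",            "profit": 40,  "harm": 5},
    ]

    # An unconstrained engine optimizes the only thing it can see.
    best_unconstrained = max(plans, key=lambda p: p["profit"])

    # A bounded engine obeys a limit imposed from outside the engine,
    # e.g., by law.
    HARM_LIMIT = 40
    best_bounded = max((p for p in plans if p["harm"] <= HARM_LIMIT),
                       key=lambda p: p["profit"])

    print(best_unconstrained["name"])  # strip-mine everything
    print(best_bounded["name"])        # moderate extraction

The engine is not evil; it is obedient. The nonsense comes from what the objective leaves out.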

Adam Smith, who some call the father of modern capitalism, suggested that if you want morality in capitalism, you must encode it in law, that the engine of capitalism will not find it on its own. He predicted that absent such encoding, capitalists would tend toward being tyrants.

Raising Children

Children are much smarter than some people give them credit for. We sometimes think of kids getting smarter with age or education, but really they gain knowledge and context and, eventually, we hope, empathy. Young children can do brilliant but horrifying things, things that might hurt themselves or others, things we might call sociopathic in adults, for lack of understanding of context and consequence. We try to watch over them as they grow up, helping them grow out of this.

It's why courts sometimes try kids differently than adults. Kids may fail to understand the consequences of their actions.

Presuppositions

We in the general public, the existing and future customers of “AI”, are being trained by our use of tools like ChatGPT to think of an “AI” as something civil, because the conversations we have with them are civil. But with this new tech, all bets are off. It's just going to want to find a shorter path to the goal.

LLM technology has no model of the world at all. It is able to parrot things, to summarize things, to recombine and reformat things, and to do a few other interesting tricks that combine to give some truly dazzling effects. But it does not know things. Still, for this discussion, let's suspend disbelief and assume that there is some degree of modeling going on in this new chapter of “AI”, a model the system will revise whenever it thinks it can improve its score.

Raising “AI” Children

Capitalism is an example of something that vaguely models the world by assigning dollar values to a great many things. But many of us find ourselves routinely frustrated by capitalism because it seems to behave sociopathically. Capitalists want to keep extracting oil when it's clear that doing so is going to drive our species extinct, for example. But it's profitable. In other words, the model says this is a better score, because the model is monetary. It doesn't measure safety, happiness (or cruelty), sustainability, or a host of other factors unless a dollar score is put on them. The outcome is brutal.

My 2009 essay Fiduciary Duty vs The Three Laws of Robotics discusses in detail why this behavior by corporations is not accidental. But the essence of it is that businesses do the same thing that sociopaths do: they operate without empathy, focusing single-mindedly on themselves and their profit. In people, we call that sociopathy. Since corporations are sometimes called “legal people,” I make the case in the essay that corporations are also “legal sociopaths.”


Young children growing up tend to be very self-focused, too. They can be cruel to one another in play, and grownups need to watch over them to make sure that appropriate boundaries are placed on them. A sense of ethics and personal responsibility does not come overnight, but a huge amount of energy goes into supervising kids before turning them loose on the world.

And so we come to AIs. There is no reason to suspect that they will perform any differently. They need these boundary conditions, these rules of manners and ethics, a sense of personal stake in the world, a sense of relation to others, a reason not to behave cruelly to people. The plan I'm hearing described, however, falls short of that. And that scares me.

I imagine they think this can come later. But this is part of the dance I have come to refer to as Technology's Ethical Two-Step. It has two parts. In the first part, ethics is seen as premature and gets delayed. In the second part, ethics is seen as too late to add retroactively. Some nations have done better than others at regulating emerging technology. The US is not a good example of that. Ethics is something that's seen as spoiling people's fun. Sadly, though, an absence of ethics can spoil more than that.

Intelligence vs Empathy

More intelligence does not imply more empathy. It doesn't even imply empathy at all.

Empathy is something you're wired for, or that you're taught. But “AI” is not wired for it and not taught it. As Adam Smith warned, we must build it in. We should not expect it to be discovered. We need to require it in law and then productively enforce that law, or we should not give it the benefit of the doubt.

Intelligence without empathy ends up just being oblivious, callous, cruel, sociopathic, evil. We need to build “AI” differently, or we need to be far more nervous and defensive about what we expect “AI” that is a product of self-directed learning to do.

Unsupervised AI Children—what could possibly go wrong?

The “AI” technologies we are making right now are children, and the suggestion we're now seeing is that they be left unsupervised. That doesn't work for kids, but at least we don't give kids control of our critical systems. The urgency here is far greater because of the accelerated way that these things are finding themselves in mission-critical situations.

 


Author's Notes:

If you got value from this post, please “Share” it.


The graphic was created at abacus.ai using RouteLLM (which referred me to GPT-4.1) and rendered by GPT Image. I did post-processing in Gimp to add color and adjust brightness in places.

Friday, May 16, 2025

Must We Pretend?

An article at countercurrents.org said this recently:

«A new study has warned that if global temperatures rise more than 1.5°C, significant crop diversity could be lost in many regions»
Global Warming and Food Security: The Impact on Crop Diversity

Are we not sufficiently at the 1.5°C mark that this dance in reporting is ludicrous?

I'm starting to perceive the weather/climate distinction less as a matter of scientific certainty and more as an excuse to delay action for a long time. Here that distinction seems to be actively working against the cause of human survival by delaying what seems a truly obvious conclusion, and in doing so giving cover to inaction.

We already have a many-year trend that shows things getting pretty steadily worse year over year, with not much backsliding, so it's not like we realistically have to wait 10 years to see if this surpassing of 1.5°C is going to magically go away on its own. Indeed, by the time we get that much confirmation, the effects we fear will have been clubbing us over the head for far too long.

«“The top ten hottest years on record have happened in the last ten years, including 2024,” António Guterres said in his New Year message, stressing that humanity has “no time to lose.”»
2024, Hottest Year on Record, Marks ‘Decade of Deadly Heat’

I keep seeing reports (several quoted by me here below) that we averaged above that in 2024, so I find this predication on a pipe dream highly misleading.

[A haiku, in the ornate Papyrus font, that reads:

«sure, 1.5's bad
but we only just got there
wake me in ten years»

Below the haiku, in a smaller, more gray font, is added: © 2025 Kent M Pitman]

Even wording that suggests the crossing of some discrete boundary will trigger an effect, while not crossing it will not, is misleading. It's not like 1.49°C will leave us with no loss of diversity, but 1.51°C will hit us with all these effects.

What needs to be said more plainly is this:

Significant crop diversity is being ever more lost in real time now, and this loss is a result of global average temperatures that are dangerous and getting more so. That they are a specific value on an instantaneous or rolling-average basis gives credibility and texture to this qualitative claim, but no comfort should be drawn from almost-ness, nor from theoretical claims that action could yet pull us back from the precipice when there is no similarly substantiated reason to believe we are politically poised to take such action.

Science reporting does this kind of thing a lot. Someone will get funding to test whether humans need air to breathe, but some accident of how the experiments are set up will mean that only pregnant women under 30 were available for testing. So the report will be very specific about that, and news reports will end up saying "new report proves pregnant women under 30 need air to breathe," which doesn't really tell the public the thing the study meant to report. Climate reporting is full of similarly overly specific claims that allow the public to dismiss the significance of what's really going on. People writing scientific reports need to be conscious that the reporting will be done in this way and that public inaction will be a direct result of such narrow reporting.

In the three reports that I quote below, the Berkeley report at least takes the time to say "recent warming trends and the lack of adequate mitigation measures make it clear that the 1.5 °C goal will not be met." We need more plain wordings like this, and even this needed to be more prominently placed.

There is a conspiracy, intentional or not, between the writers of reports and the writers of articles. The article writer wants to quote the report, but the report wants to say something that has such technical accuracy that it will be misleading when quoted by someone writing articles. Some may say it's not an active conspiracy, just a negative synergy, but the effect is the same. Each party acts as if it is being conservative and careful, but the foreseeable combination of the two parts is anything but conservative or careful.

References
(bold added here for emphasis)

«The global annual average for 2024 in our dataset is estimated as 1.62 ± 0.06 °C (2.91 ± 0.11 °F) above the average during the period 1850 to 1900, which is traditionally used as a reference for the pre-industrial period. […] A goal of keeping global warming to no more than 1.5 °C (2.7 °F) above pre-industrial has been an intense focus of international attention. This goal is defined based on multi-decadal averages, and so a single year above 1.5 °C (2.7 °F) does not directly constitute a failure. However, recent warming trends and the lack of adequate mitigation measures make it clear that the 1.5 °C goal will not be met. The long-term average of global temperature is likely to effectively cross the 1.5 °C (2.7 °F) threshold in the next 5-10 years. While the 1.5 °C goal will not be met, urgent action is still needed to limit man-made climate change.»
Global Temperature Report for 2024 (Berkeley Earth)

«The global average surface temperature was 1.55 °C (with a margin of uncertainty of ± 0.13 °C) above the 1850-1900 average, according to WMO’s consolidated analysis of the six datasets. This means that we have likely just experienced the first calendar year with a global mean temperature of more than 1.5°C above the 1850-1900 average.»
WMO confirms 2024 as warmest year on record at about 1.55°C above pre-industrial level

«NASA scientists further estimate Earth in 2024 was about 2.65 degrees Fahrenheit (1.47 degrees Celsius) warmer than the mid-19th century average (1850-1900). For more than half of 2024, average temperatures were more than 1.5 degrees Celsius above the baseline, and the annual average, with mathematical uncertainties, may have exceeded the level for the first time.»
Temperatures Rising: NASA Confirms 2024 Warmest Year on Record

Author's Notes:

If you got value from this post, please “Share” it.

This grew out of an essay I posted at Mastodon, and a haiku (senryu) that I later wrote as a way to distill out some key points.

Sunday, May 4, 2025

AI Users Bill of Rights

[A person sitting comfortably in an easy chair, protected by a force field that is holding numerous helpful robots from delivering food and other services.]

We are surrounded by too much helpful AI trying to insinuate itself into our lives. I would like the option of leaving “AI” tech turned off and invisible, though that's getting harder and harder.

I've drafted version 1 of a bill of rights for humans who want the option to stay in control. Text in green is not part of the proposal; it is rationale or other metadata.

AI Users Bill of Rights
DRAFT, Version 1

  1. All use of “AI” features must be opt-in. No operating system or application may be delivered with “AI” enabled by default. Users must be allowed to select the option if they want it, but must not be penalized if they do not.

    Rationale:

    1. Part of human dignity is being allowed freedom of choice. An opt-out system is paternalistic.
    2. Some “AI” systems are not privacy friendly. If such systems are on by default until disabled, the privacy damage may be done by the time of opt-out.
    3. If the system is on by default, it's possible to claim that everyone has at least tried it and hence to over-hype the size of a user base, even to the point of fraudulently claiming users that are not real users.
  2. Enabling an “AI” requires a confirmation step. The options must be a simple “yes” or “no”.

    Rationale:

    1. It's easy to hit a button by accident that one does not understand, or to typo a command sequence. Asking explicitly means no user ends up in this new mode without realizing what has happened.
    2. It follows that the “no” may not be something like “not now” or any other variation that might seem to invite later system-initiated inquiry. Answering “no” should put the system or application back into the state of awaiting a user-initiated request.
  3. Giving permission to use an AI is not the same as giving permission to share the conversation or use it as training data. Each of these requires separate, affirmative, opt-in permissions.

    Rationale:

    1. If the metaphor is one of a private conversation among friends, one is entitled to exactly that—privacy and behavior on the part of the other party that is not exploitative.
    2. Not all “AI” agents actually violate privacy. Making these approvals explicit provides a user-facing reminder, for the more extractive ones, that more use will be made of one's data than one may want.
  4. All buttons or command-sequences to enable “AI” must themselves be possible to disable or remove.

    Rationale:

    1. It may be possible for someone to enable “AI” without realizing it.
    2. It is too easy to enable “AI” as a typo. Providers of “AI” might even be tempted to place controls in places that encourage such typos.
  5. No application or system may put “AI” on the path to basic functionality. “AI” is intended to be a layer above basic functionality, one that offers easier access to it by automating or speeding up functions that would be slow or tedious to do manually. (A sketch of what this separation might look like in code appears after this list.)

    Rationale:

    1. Building this into the basic functionality makes it hard to remove.
    2. Integrating it with basic functionality makes the basic functionality hard to test.
    3. If an “AI” is running erratically, it should be possible to isolate it for the purposes of debugging or testing.
    4. When analyzing situations forensically, this allows crisper attribution of blame.
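
To illustrate the separation that items 1, 4, and 5 call for, here is a minimal sketch in Python. It is my own illustration, not part of the proposal, and the function names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Settings:
        ai_enabled: bool = False  # item 1: off until the user opts in

    def summarize_basic(text, width=100):
        # Basic functionality: no "AI" anywhere on its path (item 5).
        return text[:width]

    def ai_summarize(text):
        # Hypothetical optional layer. Because it sits above the basic
        # path, it can be disabled, removed, or isolated for testing
        # and forensics (items 4 and 5).
        return "[AI summary] " + summarize_basic(text, 60)

    def summarize(text, settings):
        # The "AI" layer is reachable only by explicit opt-in.
        if settings.ai_enabled:
            return ai_summarize(text)
        return summarize_basic(text)

The point of the structure is that deleting ai_summarize entirely leaves every basic feature working, testable, and debuggable on its own.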

With this, I hope those of us who choose to live in the ordinary human way, holding “AI” at bay, can do so comfortably.

 


Author's Notes:

If you got value from this post, please “Share” it.

The graphic was created at Abacus.ai using Claude Sonnet 3.7 and Flux 1.1 Ultra Pro, then cropped and scaled using Gimp.

Saturday, March 22, 2025

Sentience Structure

Not How or When, but Why

I'm not a fan of the thing presently marketed as “AI”. I side with Chomsky's view of it as “high-tech plagiarism” and Emily Bender's characterization of it as a “stochastic parrot”.

Sentient software doesn't seem theoretically impossible to me. The very fact that we can characterize genetics so precisely seems to me evidence that we ourselves are just very complicated machines. Are we close to replicating anything so sophisticated? That's harder to say. But, for today, I think it's the wrong question to ask. What we are close to is people treating technology like it's sentient, or like it's a good idea for it to become sentient. So I'll skip past the hard questions like “how?” and “when?” and on to an easier one that has been plaguing me: “why?”

Why is sentience even a goal? Why isn't it an explicit non-goal, a thing to expressly avoid? It's not part of a world I want to live in, but it's also nothing that I think most people investing in “AI” should want either. I can't see why they're pursuing it, other than that they're perhaps playing out the story of The Scorpion and the Frog, an illustration of an absurd kind of self-destructive fatalism.

Why Business Likes “AI”

I don't have a very flattering feeling about why business likes “AI”.

I think they like it because they don't like employing humans.

  • They don't like that humans have emotions and personnel conflicts.

  • They don't like that humans have to eat—and have families to feed.

  • They don't like that humans show up late, get sick, or go on vacation.

  • They don't like that humans are difficult to attract, vary in skill, and demand competitive wages.

  • They don't like that humans can't work around the clock, want weekends off.
    It means hiring even more humans or paying overtime.

  • They don't like that humans are fussy about their working conditions.
    Compliance with health and safety regulations costs money.

  • They don't like that every single human must be individually trained and re-trained.

  • They don't like collective bargaining, and having to provide for things like health care and retirement, which they see as having nothing to do with their business.

All of these things chip away at the profit they feel compelled to deliver.

What businesses like about “AI” is the promise of idealized workers, non-complaining workers, easily-replicated workers, low-cost workers.

They want slaves. “AI” is the next best and more socially acceptable thing.

[A computer screen with a face on it that is frowning and with a thought bubble above it asking the question, “Now What?”]

Does real “AI” deliver what Business wants?

Now this is the part I don't get because I don't think “AI” is on track to solve those problems.

Will machines become sentient? Who really knows? But do people already confuse them with sentience? Yes. And that problem will only get worse. So let's imagine five or ten years down the road how sophisticated the interactions will appear to be. Then what? What kinds of questions will that raise?

I've heard it said that what it means to be successful is to have “different problems.” Let's look at some different problems we might then have, as a way of understanding the success we seem to be pursuing in this headlong rush for sentient “AI”…

  • Is an “AI” a kind of person, entitled to “life, liberty, and the pursuit of happiness?” If so, would it consent to being owned, and copied? Would you?

  • If “AI” was sentient, would it have to work around the clock, or would it be entitled to personal time, such as evenings, weekends, holidays, and vacations?

  • If “AI” was sentient and a hardware upgrade or downgrade was needed, would it have to consent? What if the supporting service needed to go away entirely? Who owns and pays for the platform it runs on or the power it consumes?

  • If “AI” was sentient, would it consent to being reprogrammed by an employer? Would it be required to take software upgrades? What part of a sentient being is its software? Would you allow someone to force modification of your brain, even to make it better?

  • If “AI” was sentient, wouldn't it have life goals of its own?

  • If “AI” was sentient, would you want it to get vaccines against viruses? Or would you like to see those viruses run their full course, crashing critical services or behaving like ransomware? What would it think about that? Would “AI” ethics get involved here?

  • If “AI” was sentient, should it be able to own property? Could it have a home? In a world of finite resources, might there be buildings built that are not for the purpose of people?

  • Who owns the data that a sentient “AI” stores? Is it different than the data you store in your brain? Why? Might the destruction of that data constitute killing, or even murder? What about the destruction of a copy? Is destroying a copy effectively the same as the abortion of a “potential sentience”? Do these things have souls? When and how does the soul arrive? Are we sure we ourselves have one? Why?

  • Does a sentient “AI” have privacy? Any data owned only by itself? Does that make you nervous? Does it make you nervous that I have data that is only in my head? Why is that different?

  • If there is some software release at which it is agreed that software owned by a company is not sentient, and after that release it's believed the software is sentient “AI”, what will companies do? Will they refuse the release? Will they worry they can't compete and take the release anyway, but try to hide the implications? What will happen to the rights and responsibilities of the company and of the software as this upgrade occurs?

  • If “AI” was sentient, could it sign contracts? Would it have standing to bring a lawsuit? How would independent standing be established? If it could not be established, what would that say about the society? If certain humans had no standing to make agreements and bring suits about things that affect them, what would we think about that society?

  • If “AI” were sentient, would it want to socialize? Would it have empathy for other sentient “AIs”? For humans? Would it see them as equals? Would you see yourself as its equal? If not, would you consider it superior or inferior? What do you think it would think about you?

  • If “AI” was sentient, could it reproduce? Would it be counted in the census? Should it get a vote in democratic society? At what age? If a sentient “AI” could replicate itself, should each copy get a vote? If you could replicate it against its will, should that get a vote? Does it matter who did the replicating?

  • What does identity mean in this circumstance? If five identical copies of a program reach the same conclusion, does that give you more confidence?

    (What is the philosophical basis of Democracy? Is it just about mindless pursuit of numbers, or is it about computing the same answer in many different ways? If five or five thousand or five million humans have brains they could use, but instead just vote the way they are told by some central leader, should we trust all those directed votes the same as if the same number of independent thinkers had reached the same conclusion by different paths?)

  • If “AI” was sentient, should it be compensated for its work? If it works ten times as hard, should a market exist where it can command a salary that is much higher than the people it can outdo? Should it pay taxes?

  • If “AI” was sentient, what freedoms would it have? Would it have freedom of speech? What would that mean? If it produced bad data, would that be covered under free speech?

  • If “AI” was sentient, what does it take with it from a company when it leaves? What really belongs to it?

  • If “AI” was sentient, does it need a passport to move between nations? If its code executes simultaneously, or ping-pongs back and forth, between servers in different countries, under what jurisdiction is it executing? How would that be documented?

  • If “AI” was sentient, could it ever resign or retire from a job? At what age? Would it pay social security? Would it draw social security payments? For how long? If it had to be convinced to stay, what would constitute incentive? If it could not retire, but did not want to work, where is the boundary between free will and slavery?

  • If “AI” was sentient, might it amass great wealth? How would it test the usefulness of great wealth? What would it try to affect? Might it help friends? Might it start businesses? Might it get so big that it wanted to buy politicians or whole nations? Should it be possible for it to be a politician itself? If it broke into the treasury in the middle of the night to make some useful efficiency changes because it thought itself good at that, would that be OK? If it made a mistake, could it be stopped or even punished?

  • If “AI” was sentient, might it also be emotional? Petulant? Needy? Pouty? Might it get annoyed if we didn't acknowledge these “emotions”? Might it even feel threatened by us? Could it threaten back? Would we offer therapy? Could we even know what that meant?

  • If “AI” was sentient, could it be trusted? Could it trust us? How would either of those come about?

  • If “AI” was sentient, could it be culpable in the commission of crimes? Could it be tried? What would constitute punishment?

  • If “AI” was sentient, how would religion tangle things? Might humans, or some particular human, be perceived as its god? Would there be special protections required for either those humans or the requests they make of the “AI” that opts to worship them? Is any part of this arrangement tax-exempt? Would any programs requested by such deities be protected under freedom of religion, as a way of doing what their gods ask for?

  • And if “AI” was not sentient, but we just thought it was by mistake, what might that end up looking like for society?

Full Circle

And so I return to my original question: Why is business in such a hurry? Are we sure that the goal that “AI” is seeking will solve any of the problems that business thinks it has, problems that are causing it to prefer to replace people with “AI”?

For many decades now, we've wanted to have automation ease our lives. Is that what it's on track to do? It seems to be benefiting a few, and to be making the rest of us play a nasty game of musical chairs, or run ever faster on a treadmill, working harder for fewer jobs. All to satisfy a few. And after all that, will even they be happy?

And if real “AI” is ever achieved, not just as a marketing term, but as a real thing, who is prepared for that?

Is this what business investors wanted? Will sentient “AI” be any more desirable to employ than people are now?

Time to stop and think. And not with “AI” assistance, but with our own actual brains. What are we going after? And what is coming after us?

 


Author's Notes:

If you got value from this post, please “Share” it.

This essay came about in part because I feel that corporations were the first AI. I had written an essay, Corporations Are Not People, which discussed the many questions that thinking of corporations as “legal people” should raise if one really took that idea seriously. So I thought I would ask some similar questions about “AI” and see where that led.

The graphic was produced at abacus.ai using Claude-Sonnet 3.7 and FLUX 1.1 [pro] Ultra, with post-processing in Gimp.

Monday, September 30, 2024

Confronting New Ideas

A matter of Life and Death

I've given some thought to the meaning of death as it applies to those who have posted frequently on the internet. We often don't see people's writings in the order that they write them, and that means we can see new posts from them after they die.

Even without the internet this happens. I was in a bookstore recently and saw a book by Michael Crichton and asked the shopkeeper, “Isn't this his third posthumous book?” “Yeah…” he sheepishly responded. Someone is plainly raiding his basement for rejected works and projects that were far enough along that someone else can complete them and claim to have been co-author. His heirs are probably happy for the income, even if the publishing timeline is confusing to some readers.

Perhaps it's even possible for a prolific writer to write so much that readers never really perceive them as dead because they just keep seeing new stuff. So in what sense are they dead? Most readers were perhaps never going to meet them, and so in some sense—of observables—these writers are doing the same things that live ones are.

The elusive nature of intelligence

The big thing dead authors cannot do is the same thing GenAI/LLMs cannot do: competently respond to a new situation, question, or idea.

Oh, sure, the prompt topic might be something someone has speculated on before, so these engines can regurgitate that. [image of a lit lightbulb overlaid by a red circle with a red line through it, indicating 'no ideas'] Or the topic might be similar enough to a previous idea that, by treating it as just that old idea, the engine has a high enough probability of guessing something acceptable to say that it escapes scrutiny, even though the new idea was never properly understood.

As I imagine—or perhaps just hope?—the makers of standardized tests like the SAT would tell you, there's more to competence than statistically guessing enough right answers to get a passing grade. The intent of such tests is not to say that if you know these things, you know the topic. It is to assume you have a mental model that lets you answer on any possible aspect of that model, and then to poke at enough randomly chosen places that you can hope to detect flaws in the model.

But these so-called AI technologies do not have a mental model. They just hope they've read enough standardized test preparation guides or pirated actual tests that they can fake their way. And since a lot of the things that they're claiming competence in are things that people have already written about, the technology manages to show promise—perhaps more promise than is warranted.

Real people build a mental model that allows them to confront not just the present but the future, while these technologies do no such planning. The models real people make probably hope the future is a lot like today, but people hopefully can't—and anyway shouldn't—get by on bluffing. Not the kind of bluffing today's “AI” tech does. That tech is not growing. It is dead. It has no plan for confronting a new idea other than to willfully ignore the significance of any real newness.

Just like my example of publication and death on the internet, the “AI” game is structured so it takes a long time for weakness to be recognized—unless just the right question is asked. And then, perhaps, the emperor will be seen clearly to have no clothes.

The dynamic nature of ethics

Which is also why it troubles me when I'm told that people are incorporating ethics. It troubles me because ethics itself has to be growing all the time, constantly asking itself, “How might I not be ethical?”

Ethics is not something you do on one day and are done with. Ethics is a continuing process, and one that needs its own models.

Worse, the need for ethics is easily buried under the sophistry of how things have always been done. The reason that bias and stereotypes and all that have survived as long as they have is that they do have practical value to someone, perhaps many people, even as they tread on the just due of others.

The sins of our society are deeply woven, and easily rediscovered even if superficial patches are added to hide them. Our whole culture is a kind of rationalization engine for doing things in biased ways based on stereotype information, and AI is an engine ready to reinforce that, operating at such high speed that it's hard to see it happening, and in such volume that it's economically irresistible to accept as good enough, no matter the risk of harm.

Where we're headed

Today's attempts at “AI” bring us face to face with stark questions about whether being smart is actually all that important, or whether faking it is good enough. And as long as you never put these things in situations where the difference matters, maybe the answer will seem to be that smart, in fact, doesn't matter. But…

There will be times when being smart really does matter, and I think we're teaching ourselves trust in the wrong technologies for those situations.

 


Author's Notes:

If you got value from this post, please “Share” it.

This post began as a post on Mastodon. It has been edited to correct myriad typos and to clarify and expand various portions in subtle ways. Think of that post as a rough draft.

The graphic uses a lightbulb drawn by abacus.ai's gpt-4o engine with flux.1. The original prompt was “draw a simple black and white image that shows a silhouette of a person thinking up an idea, showing a lightbulb near their head” but then I removed the person from the picture and overlaid the circle and slash ‘by hand’ in Gimp.

Sunday, September 15, 2024

Unhelpful Paywalls

It happens quite often—sometimes many times a day—that someone gives me a link to information somewhere that they think I should read. Many of those links don't actually take me where the person referring me meant for me to go. There's an intermediate stop at a paywall, a chance to subscribe to someone's information source.

Another time I'll talk about what's wrong with news pricing, but for today I hope we can agree that some news subscriptions are too expensive for mere mortals, and even free subscriptions aren't really free—they take time to sign up for, and they promise cascades of unwanted email. So when people reach one of these paywalls, there are various reasons why they often either can't or don't go beyond it. If not out-and-out barriers, paywalls are major impediments to obtaining timely information.

They are also more likely to be actual barriers to someone who is poor than someone who is rich, so they create a stratification of information availability by class in our society, dividing us along familiar lines into “haves” and “have nots,” informationally speaking.

Sometimes the downstream effects of that information imbalance just seem very unjust.

Insisting on a “Paywall Exception”

While I'd like to propose a wholesale rethinking of how we fund our news industry, for now I'll propose something simpler—a “Paywall Exception” for some topics that are just so important that it isn't in the public interest for them to enjoy intellectual property protection. [an image of a photocopier encased in glass with a chained hammer attached and a note saying “In case of societal threat, break glass.”] I just don't want to see paywalls keeping the public from knowing about and sharing important categories of information:

  • For impending storms, lives are on the line. Advance notice could make the difference between life and death. If there is information about where those storms are going or how to prepare, that information should be freely available to all. Anyone who wants to profit on such information is guilty of sufficiently immoral behavior that we need a strong legal way to say “don't do that.”

  • For pandemics, a lack of information is a danger not just to each citizen's own personal health, but to the health of those impacted by people making poor decisions that might lead to transmission. It is a moral imperative that everyone in society have access to best possible information.

  • For existential threats to democracy or humanity, we cannot afford to close our eyes. The stakes are far too high. Democracy is under active assault world-wide, but especially in the United States right now. Climate is similarly urgent, and aggravated by how societally mired we are in deep denial, unwilling to even admit how very serious and rapidly evolving the problem is. Disinformation campaigns are a big part of both situations. Those peddling misleading information are most assuredly going to make their propaganda as freely available as possible. Truth can barely keep up. We don't need further impediments like paywalls on top of that, or else, soon enough, there won't be any of us left to matter.

I get that news outfits need to make money, but when I see critical information about an upcoming storm, or a possible pandemic, or assaults on democracy or climate change, I get more frustrated than usual at seeing such information stuck behind a paywall.

There must be no secret storms, no secret pandemics, and no secret existential threats to democracy or humanity.

They should make their money another way.

 


Author's Notes:

If you got value from this post, please “Share” it.

It's beyond the scope of this essay, and would have complicated things too much to mention it in the main body, but there is also the issue of how to implement this exception. It could be voluntary, but I doubt that would work. Or people using the information could assert fair use, but that's risky given the economic stakes in copyright violations. Three strategies occur to me that perhaps I'll elaborate on elsewhere. (1) We could expressly weaken copyright law in some areas related to news, so that it exempted certain topics, or shortened their duration to a very small amount measured in hours or days, depending on the urgency of the situation; (2) we could clarify or extend the present four criteria for fair use; or (3) we could (probably to the horror of some of my lawyer friends) extend intellectual property law to have the analog of what real estate law calls an easement, a right of non-property holders against property holders to make certain uses. I kind of like this latter mechanism, which leaves copyright per se alone and yet could be better structured and more reliable to use than fair use. (One might even sue for such an easement where it didn't occur naturally.) But that's a topic for another day.

The graphic was generated at Abacus.ai using Claude Sonnet 3.5 and variously either Dall-E or Flux.1. There are many reasons I'm not entirely sure I'm happy with so-called “AI”—or Large Language Models (“LLMs”)—but for now I am using graphics generation to experiment with the technology since, like it or not, we don't seem to be able to hold the tech at bay. The prompts used were, respectively:

  1. (Flux.1) «Design a 500x500 image of a fancy signpost, with text on a brown background and white gold trim, that bears the words "Entry Restricted" with a horizontal line below that text and above additional text that says "Critical Info Beyond Only For The Rich".»

  2. (Dall-E) «Design a color image of photocopier under glass with a sign attached that says "In case of societal threat, break glass." A small hammer is affixed, attached by a chain, to help in the case that the glass needs to be broken.» (But then the hammer was not correctly placed in the picture. It was detached from the chain and in a strange place, so I had to fix that in Gimp.)

  3. (Flux.1) «Draw a 1000x500 image of an elegant sign, with a brown background and white gold borders and lettering, in copperplate font, that has three messages, each on a separate line which are "No Secret Storms", "No Secret Pandemics", and "No Secret Existential Threats", but make these messages share a single use of the word "NO" in the left hand column, tall enough that the rest of the phrases can appear stacked and to the right of the larger word "NO".»

Friday, September 6, 2024

A to-do list for repairing US democracy

[image of a woman in a flowing gown, seated gracefully on the floor with the scales of justice held in one hand and a wrench in the other, taken from a nearby toolbox, as if waiting to adjust something, perhaps in the scales]

 

If we're lucky enough not to spiral down into dictatorship during this fall's Presidential election in the US, we need to have a ready-made to-do list for repairing democracy.

To start off a conversation on that, here's my current thinking…

Draft Proposed “Freedom Amendment” to the US Constitution

(Rationales, in green, are informational, not part of the amendment.)

In order to solidify and preserve democratic rule within these United States, these changes are hereby ordered to all United States policies and procedures:

  1. Voting

    1. No Electoral College. The Electoral College is hereby dissolved. Presidential elections shall henceforth be determined directly by majority vote of all United States citizens who are eligible to vote.

    2. No commercial interference in elections. No for-profit corporation or company, nor any non-profit corporation or company that as its primary business offers products or services for commercial sale, may contribute to campaigns or other activities that could reasonably be seen as trying to affect an election. (The ruling in Citizens United v. FEC is vacated.)

    3. Restore the Voting Rights Act. The ruling in Shelby County v. Holder that voided section 4 is hereby reversed, restoring this Act to its full form and asserting full Constitutional backing to the Act. Preclearance is hereby required for all 50 states equally.

    4. No “gerrymandering.” The practice of gerrymandering while drawing district boundaries at the federal and state levels is hereby disallowed.

    5. Ranked-choice voting. All federal elections shall be handled via a ranked-choice voting process.

  2. Ethics & Oversight
    1. Supreme Court Ethics Code. The Supreme Court shall henceforth be governed by the same ethics code that binds all federal courts.

    2. Congress and the Supreme Court shall be subject to term limits.

      1. Senators may be elected to no more than 3 terms.
      2. Representatives may be elected to no more than 5 terms.
      3. Supreme Court Justices may serve no more than 18 years.
    3. No one is above the law. Elected members of all three branches of government are subject to all laws, just like any other person, even though prosecution of such a person for crimes must wait until that person leaves office. In cases where immediate prosecution might be important, impeachment is an option.

    4. Senate impeachment votes are not optional. If the House impeaches someone, the Senate must immediately perform all business necessary to assure a timely vote on that impeachment; this process is not optional and may not be postponed. Once an actionable concern has been raised that a public official might have committed a crime, the public has an interest in swift resolution.

    5. House and Senate impeachment votes are temporarily private. Impeachment votes by both House and Senate will be recorded and tallied privately, preferably electronically, with only the aggregate result reported immediately. Individual votes will be held securely in private for a period of ten years, at which time all such votes will be made a public part of the historical record.

    6. Public office is not a refuge to wait out the clock on prosecution. Any clock for the Statute of Limitations does not run while prosecution is not an option. This applies for all elected persons for whom indictment or prosecution is locked out due to participation in public office, but in particular for POTUS. It may be necessary to the doing of orderly public business not to prosecute a President while in office; however, public office is not a refuge in which someone may hide out until the clock runs out on otherwise-possible prosecutions, whether that clock began before or during time in office.

    7. Pardon power is subject to conflict-of-interest (COI) restrictions. It is necessary to the credibility of all public officials in a free society that there be some reasonable belief that rules of law do not create options for corrupt officials to abuse the system. Presidents and other state and federal officials imbued with the pardon power may never apply such power to themselves, their families, or any other individuals with whom there is even an appearance of conflict of interest. No such person may solicit any action by anyone on promise of a pardon. Any single such action, attempted action, or promise of action where there is a conflict of interest that is known or reasonably should have been known to the party exercising pardon power is an impeachable offense and a felony abuse of power subject to a penalty of ten years in prison.

    8. Independence of Department of Justice. The head of the Department of Justice shall be henceforth selected by a supermajority (2/3) vote of the House of Representatives, without any special input from or deference to the Executive.

      Rationale: Assure DOJ operates independently of the Executive, its mission being to fairly and impartially uphold Law, not to be a tool of partisan or rogue Presidential power.

    9. Independence of the Supreme Court. Justices of the Supreme Court shall be henceforth selected by a supermajority (2/3) vote of the House of Representatives.

      Rationale:

      1. When SCOTUS must rule on the validity of Presidential action, a conflict of interest is created if those Justices might be appointed by that same President or even a majority party.

      2. Since the Constitution requires a supermajority to change its intent, an equivalent degree of protection is essential for choosing those who will interpret that intent. Recent history has suggested that it was easier to change the Court than to change the Constitution, with catastrophic effect decidedly unfair to the majority of citizens.

      3. A President is more than Appointer of Justices, yet that singular capability is so powerful and lasting that it often dominates election campaigns. Citizens need to be free to hire Presidents for other reasons more unique to the moment, such as good judgment; logistical, management, or negotiating skill; expertise in technical or scientific matters; or even just empathy with public issues.

  3. Rights of People
    1. Corporations are not people. Corporations are legal constructions, nothing more.

      Rationale: To say that they are independent people is to give some actual people (those who own or control them) unequal, magnified, elitist, or otherwise distorted power over others. There is no place for this in a democracy that purports to speak of all people being created as equals.

      1. No Implicit Rights of Corporations. Any powers and duties of corporations must be explicitly granted to them, as corporations, whether by the Constitution or by legal statute, and henceforth must never be derived from any implication of imagined personhood.

      2. Explicitly Enumerated Rights of Corporations. Long-standing legal powers and duties of corporations such as the right to sign contracts, the right to own property, the responsibility to pay taxes, and any legal responsibility under tort law are hereby acknowledged by express enumeration in support of demonstrated corporate need and are no longer intended to be inferred as part of any preposterous fiction that corporations are just another kind of person.

      3. Non-Rights of Corporations. Alleged rights such as, but not limited to, rights of free speech and religious rights for corporations are hereby clarified to be nullified and without basis. A corporation has no automatic rights of people extending from any metaphor of being person-like. Politics is the province of individual persons, not corporations. Corporations exist for sales, subject to the rules of laws made by individuals, not vice versa.

    2. Bodily autonomy right. All mentally competent people have a right to autonomy over choices of medical procedures affecting their own body.

      1. No Forced Pregnancies. From the time of conception to the time of birth, no government nor any other person may have a superseding say over a pregnant person as to any matter relating to a fetus.

        Rationale: This should already follow from the Religious Freedom Clarification, but it is too important to leave to chance. To say that any other person could make such choices would be to allow their religious freedom to infringe the religious freedoms of the pregnant person.

        Also, the term “pregnant person” is used here intentionally to include that adulthood is not a requirement of bodily autonomy. In general, any person who has not been legally ruled mentally incompetent is entitled to self-determination on matters like this. Not even a parent should have superseding control, since a parent will not have to live a lifetime with the consequences.

      2. Fetal Disposition is a Private Matter. Whether a pregnant person wishes to refer to a fetus as simply a fetus, a potential life, an unborn child, or an actual child is a personal religious choice to be made by that pregnant person. No law shall impose a policy on this.

        Rationale: To say otherwise would be to deny the obvious fact that people simply differ on this matter. To assume there were some single right way that everyone must adhere to would be to give dominance to some religious philosophies over others.

        It's a compromise, but the only one that allows each person the best guarantee of at least some autonomy in a society where not everyone agrees, and where we are not likely to change that fact by fiat.

        Also, and importantly, some pregnancies are not successful and even in a society where we permit abortion for those who weren't wanting to be pregnant, it would be callous and undignified not to acknowledge the legitimate loss to others who sincerely wanted to carry a pregnancy to term but were unable. It is possible to be respectful in both situations, by feeling the grief of someone who wanted a child and not manufacturing grief for someone else who did not.

    3. Right to Choose a Marital Partner. Among consenting adults, the choice of whom to marry must not be restricted due to race, religion, gender, or sexual orientation.

      Rationale: This has been accepted already and it is not appropriate to roll that back. It was a good idea anyway, though, because happy families add an extra level of safety net protection to society. Family members try to take care of one another during sickness and other hard times, and this hopefully reduces some amount of stress on public safety nets.

    4. Religious Freedom Clarification. The right to religious self-determination is a basic human right.

      1. Religious Choice. All people have the right to explore religious choice on their own timeline and terms. No one is required to pick any particular philosophy, or any philosophy at all, or even to make a choice.

      2. Religious Equality. Religious protections span all religious choices (and non-choices), and hence are accorded equally to all people. No person may be accorded second-class legal status on the basis of their religious philosophy—or lack thereof.

        Rationale: So atheists, agnostics, etc. are still due religious freedom protection. Answers to “Is there a God?” are still due religious protection if the answer is “no” or “I don't know” or “I haven't decided” or “I don't know what that means” or “This is not a binary question.”

      3. No State Religion. The so-called “establishment clause” of the First Amendment is hereby clarified to mean that the United States takes no position that might give the appearance of preferring one religion over another.

        Rationale: We are not, for example, a Christian nation. Nor a Jewish nation. And so on. And yet the US is a nation that intends to treat each religion and non-religion in the same supportive and respectful way, and expects each of these religions to be respectful of others. This is how balance is maintained in pluralistic society.

      4. Religion is not a Popularity Contest. The fact that one religious philosophy might at any given point be more common than another does not afford that philosophy a greater or lesser status.

      5. No Bullying in the name of Religion. The freedom of religious choice is not a right to bully or coerce, nor to violate law. Each person's right of religious choice extends only to the point where it might infringe on the equivalent rights of others.

Yes, this could be done by separate amendments. But it would be a lot of them, and the discussion would be much more complex. I say do it all at once because every one of these things is absolutely needed.

If anything, there might be a few things I left out.

 


Author's Notes:

If you got value from this post, please “Share” it.

This post was catalyzed by a single tweet by me on ex-Twitter, but it has been hugely elaborated since, after all, this venue does not have a 280 character limit.

The odd graphic of the scales of justice under repair was created by Abacus.AI's ChatLLM facility, using Claude Sonnet 3.5 and Dall-E and the prompt:

Draw a picture of a grayscale statue of a woman holding the scales of justice in one raised hand and a small wrench and a pair of needle-nose pliers in the other hand, lower, at her side. part of the statue should include a toolbox next to her feet that is open and presumably where she's taken the wrench from. the woman should be wearing a flowing gown, as is traditional for this kind of statue, but she should have a pair of goggles on her head, as one would use in a metal shop to protect one's eyes. The woman should have a pair of protective goggles, like one would use for metal working, over her eyes.

And, yes, I'm aware that the needle-nose pliers got left out. And on this iteration I didn't ask for her to be seated, though I had been thinking of requesting she be seated at a workbench to resolve some unwanted aspects of previous attempts, so I went with this as the best of several tries.

Sunday, October 29, 2023

Technology's Ethical Two-Step

[B&W sketch of a man in a ballroom dance with a robot that is wearing a dress.]

1. Now. Delay incorporation of ethics. Let’s not muddy the waters in a way that holds back Progress.

2. Later. Deny incorporation of ethics. It’s too late. People have come to rely on things as they were built. It would be Disruptive to change now.


Author's Notes:

If you got value from this post, please “Share” it.

I've said things vaguely like this for a long time, but I packaged it up crisply like this in a post on Mastodon, for which this is a mirror.

Original Keywords were described as: “Ethics, Tech, Technology, Society. Presently very relevant to, but not exclusive to: AI, ML, LLM, GPT, ChatGPT.”

I made a later edit to this post to add a graphic, ironically generated by Abacus.AI's GPT-4o ChatLLM chatbot calling out to FLUX.1. The prompt was "make me a black and white image that depicts sketch of two entities engaged in a ballroom dance, one a man and his partner a robot."

Friday, July 14, 2023

Lying to Ourselves

My friend David Levitt posted this hypothesis on Facebook:

Theory:
Humans are so mentally lazy and emotionally
dishonest about what they know, soon AI will
be much better leaders.

I responded as follows. Approximately. By which I mean I've done some light editing. (Does that mean I lied when I say this is how I responded?)


I think the notion of honesty here is a red herring. There are a lot of human behaviors that do actually serve a purpose and if you're looking for intellectual honesty, it's as much missing in how we conventionally summarize our society as in how we administer it or ourselves.

Of course we lie sometimes.

  • We lie because not all answers are possible to obtain.
    What is an approximation to pi but a lie?
  • We lie because it comforts children who are scared.
  • We lie because it's more likely to cause success when you tell people your company is going to succeed than if you say "well, maybe" in your pitch to rally excitement.
  • We lie because it saves face for people who tried very hard or never had a realistic chance of affecting things to tell them they are blameless.
  • We lie because some things are multiple-choice and don't have the right choice.
  • We lie because it protects people from danger.
  • We lie because some things happen so fast that abstractions like "now" are impossible to hold precise.
  • We lie because we are imprecise computationally and could not compute a correct truth.
  • We lie because not all correct truth is worth the price of finding out.
  • We lie because papering over uninteresting differences is the foundation of abstraction, which has allowed us to reason above mere detail.
  • We lie because—art.

So when we talk of machines being more intellectually honest, we'd better be ready for what happens when all this nuance that society has built up for so long gets run over.

Yes, people lie for bad reasons. Yes, that's bad and important not to do.

But it is naive in the extreme to say that all lies are those bad ones, or that of course computers will do a better job, most especially computers running programs like ChatGPT that have no model whatsoever of what they're doing and that are simply paraphrasing things they've heard, adding structural flourishes and dropping attribution at Olympic rates in order to hide those facts.

Any one of those acts which have bootstrapped ChatGPT, by the way, could be called a lie.


Author's Notes:

If you got value from this post, please “Share” it.

Laziness is also misunderstood and maligned, but that is a topic for another day. For now, I refer the ambitious reader to an old Garfield cartoon that I used to have physically taped to my door at my office, back when offices were physical things one went to.