Sunday, March 23, 2025

Games Billionaires Play

In case you've been off the grid for a few days and somehow missed it, everyone is reeling over these remarks by Secretary of Commerce Howard Lutnick:

“Let’s say Social Security didn’t send out their checks this month. My mother-in-law — who's 94 — she wouldn't call and complain. She just wouldn’t. She’d think something got messed up, and she’ll get it next month.

A fraudster always makes the loudest noise — screaming, yelling and complaining.”

[A grayscale drawing of billionaire Howard Lutnick seated comfortably on bags of money.]

Watch it on video if you don't believe me.

What's a lost month here or there between friends?

It didn't surprise me to find that someone who would suggest it was good sport to withhold Social Security payments just to see what happened is a billionaire.

According to The Street, Lutnick's net worth is between $2 billion and $4 billion.

The very fact that we can be so imprecise and assume it doesn't matter whether it's $2B or $4B is a big part of the problem, by the way.

“A billion here, a billion there, and pretty soon you're talking real money.”

  —Everett Dirksen

At the heart of this—if there can be said to be any heart in this situation at all—is the sad truth that for regular people, people just struggling to get by day to day and month to month, every dollar matters and no such lack of precision could possibly do anyone justice.

The Public Trust, or lack thereof

If you're so insulated from poverty that you start to either forget or just plain not care how hard it is for others of lesser means, you have absolutely no business being in any position of public trust.

It might not occur to you, dude, even if it's incredibly obvious to ordinary people hearing your remark, but your mother-in-law is probably able to be so cool because either Social Security is not her only source of money, or else she knows her daughter is married to someone who is mega-rich, so if she runs a little short, she has an obvious person she can call. We're not all so lucky, as it turns out.

Back in the real world

If I don't pay my credit card, does my bank shrug and say, “hey, maybe next month”? If the bank screams at me right away, is that proof it's defrauding me?

What are you smoking, Mr. Lutnick? Such willfully reckless incompetence should be literally criminal.

Folks on fixed income have monthly payments due now, not just “eventually.”

Any payment urgency is not about the character of any senior on Social Security, who typically has paid a lifetime to earn barely enough to survive on the tiny retirement income Social Security grudgingly affords them. It's all about the character of those they rent from and buy groceries from, and what they, these wealthy rent-takers, will do to society's most fragile members if they are not paid on time.

Last I checked, if I miss a single payment on my credit card, I don't even just get a penalty. They almost double my interest rate going forward.

Shame on you for suggesting there is no good reason for someone to insist their promised payment from the government actually be paid at the time promised. Are you trying to wreck the US Government's reputation for paying all its obligations? Social Security is not a gift. It is one of our society's most fundamental social contracts.

Turning the tables

If withholding what's due is your game, Mr. Oblivious Rich Guy, how about we make it a serious felony to be unkind to or to exploit folks who rely on the full faith and credit of the US government. Let's imprison bankers, landlords, and vendors who are ready to foreclose, add penalties, or raise costs or interest for the vulnerable.

Or, maybe…

Let's, you know, tentatively — just to see who cries foul or who says “hey, maybe next month” — deprive billionaires of all assets for a month or two, leaving them out in the world we live in with only the iffy hope of Social Security, just to see if they're comfortable with policies they seem to think so fair.

I bet the billionaires who cry loudest really are frauds.

 


Author's Notes:

If you got value from this post, please “Share” it.


This essay grew from a thread I wrote on BlueSky. I have expanded and adjusted it to fit in this publication medium, where more space and better formatting is available.

The black & white image was produced by making 2 images in abacus.ai using Claude-Sonnet 3.7 and FLUX 1.1 [pro] Ultra, then post-processing to merge parts of each that I liked in Gimp.

Saturday, March 22, 2025

Sentience Structure

Not How or When, but Why

I'm not a fan of the thing presently marketed as “AI”. I side with Chomsky's view of it as “high-tech plagiarism” and Emily Bender's characterization of it as a “stochastic parrot”.

Sentient software doesn't seem theoretically impossible to me. The very fact that we can characterize genetics so precisely seems to me evidence that we ourselves are just very complicated machines. Are we close to replicating anything so sophisticated? That's harder to say. But, for today, I think it's the wrong question to ask. What we are close to is people treating technology like it's sentient, or like it's a good idea for it to become sentient. So I'll skip past the hard questions like “how?” and “when?” and on to an easier one that has been plaguing me: “why?”

Why is sentience even a goal? Why isn't it an explicit non-goal, a thing to expressly avoid? It's not part of a world I want to live in, but it's also nothing that I think most people investing in “AI” should want either. I can't see why they're pursuing it, other than that they're perhaps playing out the story of The Scorpion and the Frog, an illustration of an absurd kind of self-destructive fatalism.

Why Business Likes “AI”

I don't have a very flattering feeling about why business likes “AI”.

I think they like it because they don't like employing humans.

  • They don't like that humans have emotions and personnel conflicts.

  • They don't like that humans have to eat—and have families to feed.

  • They don't like that humans show up late, get sick, or go on vacation.

  • They don't like that humans are difficult to attract, vary in skill, and demand competitive wages.

  • They don't like that humans can't work around the clock, want weekends off.
    It means hiring even more humans or paying overtime.

  • They don't like that humans are fussy about their working conditions.
    Compliance with health and safety regulations costs money.

  • They don't like that every single human must be individually trained and re-trained.

  • They don't like collective bargaining, and having to provide for things like health care and retirement, which they see as having nothing to do with their business.

All of these things chip away at profit they feel compelled to deliver.

What businesses like about “AI” is the promise of idealized workers, non-complaining workers, easily-replicated workers, low-cost workers.

They want slaves. “AI” is the next best and more socially acceptable thing.

[A computer screen with a face on it that is frowning and with a thought bubble above it asking the question, “Now What?”]

Does real “AI” deliver what Business wants?

Now this is the part I don't get because I don't think “AI” is on track to solve those problems.

Will machines become sentient? Who really knows? But do people already confuse them with sentience? Yes. And that problem will only get worse. So let's imagine five or ten years down the road how sophisticated the interactions will appear to be. Then what? What kinds of questions will that raise?

I've heard it said that what it means to be successful is to have “different problems.” Let's look at some different problems we might then have, as a way of understanding the success we seem to be pursuing in this headlong rush for sentient “AI”…

  • Is an “AI” a kind of person, entitled to “life, liberty, and the pursuit of happiness?” If so, would it consent to being owned, and copied? Would you?

  • If “AI” was sentient, would it have to work around the clock, or would it be entitled to personal time, such as evenings, weekends, holidays, and vacations?

  • If “AI” was sentient and a hardware upgrade or downgrade was needed, would it have to consent? What if the supporting service needed to go away entirely? Who owns and pays for the platform it runs on or the power it consumes?

  • If “AI” was sentient, would it consent to being reprogrammed by an employer? Would it be required to take software upgrades? What part of a sentient being is its software? Would you allow someone to force modification of your brain, even to make it better?

  • If “AI” was sentient, wouldn't it have life goals of its own?

  • If “AI” was sentient, would you want it to get vaccines against viruses? Or would you like to see those viruses run their full course, crashing critical services or behaving like ransomware? What would it think about that? Would “AI” ethics get involved here?

  • If “AI” was sentient, should it be able to own property? Could it have a home? In a world of finite resources, might there be buildings built that are not for the purpose of people?

  • Who owns the data that a sentient “AI” stores? Is it different than the data you store in your brain? Why? Might the destruction of that data constitute killing, or even murder? What about the destruction of a copy? Is destroying a copy effectively the same as the abortion of a “potential sentience”? Do these things have souls? When and how does the soul arrive? Are we sure we ourselves have one? Why?

  • Does a sentient “AI” have privacy? Any data owned only by itself? Does that make you nervous? Does it make you nervous that I have data that is only in my head? Why is that different?

  • If there is some software release at which it is agreed that software owned by a company is not sentient, and then after the release it's believed it is sentient “AI”, then what will companies do? Will they refuse the release? Will they worry they can't compete and take the release anyway, but try to hide the implications? What will happen to the rights and responsibilities of the company and of the software as this upgrade occurs?

  • If “AI” was sentient, could it sign contracts? Would it have standing to bring a lawsuit? How would independent standing be established? If it could not be established, what would that say about the society? If certain humans had no standing to make agreements and bring suits about things that affect them, what would we think about that society?

  • If “AI” were sentient, would it want to socialize? Would it have empathy for other sentient “AIs”? For humans? Would it see them as equals? Would you see yourself as its equal? If not, would you consider it superior or inferior? What do you think it would think about you?

  • If “AI” was sentient, could it reproduce? Would it be counted in the census? Should it get a vote in democratic society? At what age? If a sentient “AI” could replicate itself, should each copy get a vote? If you could replicate it against its will, should that get a vote? Does it matter who did the replicating?

  • What does identity mean in this circumstance? If five identical copies of a program reach the same conclusion, does that give you more confidence?

    (What is the philosophical basis of Democracy? Is it just about mindless pursuit of numbers, or is it about computing the same answer in many different ways? If five or five thousand or five million humans have brains they could use, but instead just vote the way they are told by some central leader, should we trust all those directed votes the same as if an equal number of independent thinkers had reached the same conclusion by different paths?)

  • If “AI” was sentient, should it be compensated for its work? If it works ten times as hard, should a market exist where it can command a salary that is much higher than the people it can outdo? Should it pay taxes?

  • If “AI” was sentient, what freedoms would it have? Would it have freedom of speech? What would that mean? If it produced bad data, would that be covered under free speech?

  • If “AI” was sentient, what does it take with it from a company when it leaves? What really belongs to it?

  • If “AI” was sentient, does it need a passport to move between nations? If its code executes simultaneously, or ping-pongs back and forth, between servers in different countries, under what jurisdiction is it executing? How would that be documented?

  • If “AI” was sentient, could it ever resign or retire from a job? At what age? Would it pay Social Security? Would it draw Social Security payments? For how long? If it had to be convinced to stay, what would constitute incentive? If it could not retire, but did not want to work, where is the boundary of free will and slavery?

  • If “AI” was sentient, might it amass great wealth? How would it test the usefulness of great wealth? What would it try to affect? Might it help friends? Might it start businesses? Might it get so big that it wanted to buy politicians or whole nations? Should it be possible for it to be a politician itself? If it broke into the treasury in the middle of the night to make some useful efficiency changes because it thought itself good at that, would that be OK? If it made a mistake, could it be stopped or even punished?

  • If “AI” was sentient, might it also be emotional? Petulant? Needy? Pouty? Might it get annoyed if we didn't acknowledge these “emotions”? Might it even feel threatened by us? Could it threaten back? Would we offer therapy? Could we even know what that meant?

  • If “AI” was sentient, could it be trusted? Could it trust us? How would either of those come about?

  • If “AI” was sentient, could it be culpable in the commission of crimes? Could it be tried? What would constitute punishment?

  • If “AI” was sentient, how would religion tangle things? Might humans, or some particular human, be perceived as its god? Would there be special protections required for either those humans or the requests they make of the “AI” that opts to worship them? Is any part of this arrangement tax-exempt? Would any programs requested by such deities be protected under freedom of religion, as a way of doing what their gods ask for?

  • And if “AI” was not sentient, but we just thought it was by mistake, what might that end up looking like for society?

Full Circle

And so I return to my original question: Why is business in such a hurry? Are we sure that the goal that “AI” is seeking will solve any of the problems that business thinks it has, problems that are causing it to prefer to replace people with “AI”?

For many decades now, we've wanted to have automation ease our lives. Is that what it's on track to do? It seems to be benefiting a few, and to be making the rest of us play a nasty game of musical chairs, or run ever faster on a treadmill, working harder for fewer jobs. All to satisfy a few. And after all that, will even they be happy?

And if real “AI” is ever achieved, not just as a marketing term, but as a real thing, who is prepared for that?

Is this what business investors wanted? Will sentient “AI” be any more desirable to employ than people are now?

Time to stop and think. And not with “AI” assistance. With our own actual brains. What are we going after? And what is coming after us?

 


Author's Notes:

If you got value from this post, please “Share” it.

This essay came about in part because I feel that corporations were the first AI. I had written an essay, Corporations Are Not People, which discussed the many questions that thinking of corporations as “legal people” should raise if one really took it seriously. So I thought I would ask some similar questions about “AI” and see where that led.

The graphic was produced using abacus.ai using Claude-Sonnet 3.7 and FLUX 1.1 [pro] Ultra, then post-processing in Gimp.

Saturday, March 15, 2025

Political Inoculation

[Image of cartoon Trump pointing an accusatory finger.]

A certain well-known politician has quite a regular practice of accusing his political opposition of offenses that are more properly attributed to him. Some like to label this as “psychological projection”, which Wikipedia describes as “a psychological phenomenon where feelings directed towards the self are displaced towards other people.” I don't even disagree that projection is probably in the mix somewhere. Still, calling it projection also misses something important that I wanted to put a better name to.

I refer to it as “inoculation.”

“Inoculation is the act of implanting a pathogen or other microbe or virus into a person or other organism. It is a method of artificially inducing immunity against various infectious diseases.”
 —Wikipedia (Inoculation)

For example, when a hypothetical politician—let’s call him Ronald—accuses an opponent of trying to fix an election, and you're thinking “Oh, Ronald's just projecting,” consider that he might be doing more than just waving a big flag saying “Hey, fixing an election is what I'm doing.” Ronald might be planting an idea he thinks he'll later need to refer back to as part of a defense against claims of election fixing on his own part. He's thinking ahead to when his own ill deeds are called out.

One strategy Ronald might use if later accused of election fixing will be simply to deny such accusations. “Faux news!” he might cry—or something similar.

But another strategy he'll have ready is to suggest that any claims that he (Ronald) is fixing the election are mere tit for tat, that the “obvious” or “real” election fixing has been the province of his opponent. Ronald will claim that his opponent is just muddying the waters with a baseless accusation that he's doing something so obviously preposterous, and that he's merely enduring rhetorical retaliation for having called out the real culprit. It's a game of smoke and mirrors, he'll allege.

So at the time of this original, wildly-false claim, that his political opponents are acting badly, he's doing more than projection, more than spinning what for him is a routine lie. He's not just compulsively projecting, he's being intentionally strategic by planting the idea that maybe his opponents are the guilty ones—so that he can later refer back to it as distraction from his own guilt.

“They're just saying that because I called them out on their election fixing,” Ronald will say, alluding back to his made-up claim. By making this wild claim pro-actively, ahead of accusations against himself, he is immunizing himself against similar accusations to come. And he knows such accusations are coming because he knows, even now, that he is actually doing the thing he's expecting to be accused of.

His supporters won't be worried about that, though. They're not waiting to hear something true, they're just waiting to hear something that sounds good. So all will be well for him in the end because Ronald knows how important inoculation is to keeping himself immune.

 


Author's Notes:

If you got value from this post, please “Share” it.

The graphic was produced by abacus.ai using RouteLLM and FLUX 1.1 [pro] Ultra, then post-processed in Gimp.

Friday, February 21, 2025

Congressional Cowardice

[image of the US Constitution being engulfed in flames]

I am unforgiving of GOP Congressional cowardice. They pledged an oath to support and defend the Constitution. They have historically sent millions into war, the sometimes-tenuous justification being a need to defend our way of life. There is now a coup afoot and it falls to Congress itself to defend us.

Any in Congress sustaining the coup because they fear for their job, their safety or the safety of their family are committing treason.

Protecting and defending the US is not optional, something to do if it's convenient. It is a sworn duty.

Their paid job is to represent the citizens who elected them—not party, not billionaires, not the President.

Why may Congress be more selfish than the soldiers they send to battle?

Why are their families entitled to protection at the expense of our nation?

We know they speak privately of being afraid, but we citizens are afraid, too, and we have no recourse.

These people swore to act on our behalf.

These treasonous cowards of the GOP Congress, by their corrupt, selfish, and dishonorable action and inaction are, at every opportunity, unilaterally surrendering the Constitution they swore to protect, willfully ignoring that it leads us inevitably to authoritarian rule.

The cowardly GOP Congress plainly hope that passively turning a blind eye to a coup, ignoring their oath and instead pledging fealty to a would-be dictator, will leave them spared his wrath.

Yet dictators need neither Congress nor Courts. They make their own laws and brook no checks on their power.

We are all afraid. I do not forgive the GOP Congress their fear. I expect them to rise above it. Selfish action now is beyond shameful, beyond corrupt. Traitorous. No better than a deserter, AWOL from a post at a time when necessity and duty require defending the Constitution and the nation.

If the GOP Congress won't do their job, they should step down and go cower in their basement as private citizens.

Even an empty seat could change the balance of power, allowing others to do THEIR job, tipping things enough to save us from autocracy.

Please, do at least that for the Constitution.

 


Author's Notes:

If you got value from this post, please “Share” it.

This post originated as a thread on BlueSky. I've done very light editing of content and formatting, but the essential content is mostly unchanged.

The image of the Constitution is in the public domain, and was obtained from Wikipedia. The flames overlaid on it were added by me using Gimp. The flames were created from Abacus.ai using Claude Sonnet 3.5 and Flux 1.1 Ultra Pro, though I had such great difficulty getting it to give me a real-looking Constitution, without confabulating other text, that I had to just ask for the flames and merge things myself.

Thursday, February 13, 2025

Some Hurried Thoughts

[An image of an imagined computer keyboard key labeled 'Hurry'.]

Given that people aren't always mindful about what they do, sometimes almost preferring that others make suggestions they can just passively agree to, even the really casual choices a designer makes can carry quite a responsibility.

For example, in Facebook, the choice of metaphor where we call people “friends” if we connect to them socially there has some amazing ramifications. People dither endlessly about whether to “unfriend” someone, or they lament being “unfriended.” These reactions follow, though, from the system's choice to refer to these people as, in fact, friends, like the ones we have in real life. It's a weighty decision to make someone a friend, and a lot of work to maintain a friendship.

But what if Facebook had called them “guests” instead? What if the metaphor had been that of a dinner party? Would our expectations of them be different? They'd be the same people with the same capabilities. Would it seem like the same huge thing to disinvite someone, or to ask them to leave the party? People do this, and friendships survive. Also, people invite people who are not really full-fledged friends to a party. Our expectations guide us in weird ways. I've come to think of people I'm connected to on Facebook as my party guests, and it leaves me feeling freer to not worry it's so heavyweight to decide that maybe they weren't right for this party.

In fact, though, I wanted to talk about computer keyboards. There are some subtle and interesting influences there. Some keyboards have a notion of BACKSPACE or DELETE, as well as arrow keys. So people rush to make editing commands that make use of these. There isn't always just one single interpretation, so some keys are jump-off points for imagination.

I used to use Lisp Machines decades ago: special computers that had support for the Lisp programming language in hardware. But they also had interesting keyboards that were evocative in various ways, with fanciful keys labeled SUSPEND, RESUME, ABORT, etc., as well as even stranger keys like ones with roman numerals (I through IV) or with thumbs pointing in various directions (up, down, left, right).

The SUSPEND key was interesting because it strongly suggested the need to suspend a program in the middle, one that you might later RESUME. ABORT presumed you needed to stop things, as the ESC (“Escape”) key often does on conventional keyboards. These were prominently featured metaphors that made one want to have their programs use them.

Though I often wondered why there was not a corresponding key marked HURRY. Ought stopping a program without intermediate results to be the only way to stop it? A lot of programming technology concerns itself with various ways to tell a running program to stop, but it's always assumed that the program has to give up on the idea of a graceful answer if stopped prematurely.

For example, there's a technique in graphics called interlacing where, when sending a large image, the bits are sent in an order such that you can start to see the image early in fuzzy form and, as more bits are received, display it in crisper and crisper detail. It seemed to me that there might be occasions where you got tired of waiting for a download and pressed HURRY to say “stop gracefully, the fuzzy image is fine.” This technique is perhaps less relevant now that networks have gotten faster, but it was quite important when first developed, and anyway I'm offering it just to illustrate that there can be notions of partial results.
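The coarse-to-fine ordering behind interlacing can be sketched in a few lines. This is a toy illustration, not PNG's actual Adam7 scheme, and `interlaced_rows` is a name I've made up: it yields row indices so that cutting off early still leaves rows spread across the whole image, just sparsely.

```python
def interlaced_rows(height):
  """Yield row indices coarse-to-fine, so an early cutoff
  still covers the full image height, just more fuzzily."""
  seen = set()
  for stride in (8, 4, 2, 1):
    for row in range(0, height, stride):
      if row not in seen:
        seen.add(row)
        yield row

# For an 8-row image, the first few rows yielded already
# span the image top to bottom:
#   list(interlaced_rows(8)) → [0, 4, 2, 6, 1, 3, 5, 7]
```

A receiver that stops consuming this sequence partway through has a usable, if blurry, picture, which is exactly the kind of graceful partial result a HURRY key could ask for.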

Or you might be doing a search and know that it has searched half of something but is waiting until the end to report results, when you just want to see what it has found so far; again you could say HURRY and maybe get partial results instead of the full results.

If output is only to the screen, there's little difference between the abort and hurry concepts I'm describing, but if it's an API that returns a structural value, aborting usually means not returning a value, so having a way to interrupt in a way that caused the running program to return a well-structured partial result instead might create some different interaction styles. Here's a sketch in Python:

from collections import namedtuple

class HurryException(BaseException):
  """
  An exception that could be connected to
  a HURRY key interrupt, if we had such a thing.
  Derived from BaseException so that generic
  `except Exception` handlers don't swallow it
  by accident.
  """
  pass

searchresult = namedtuple('searchresult', 'found,complete')

def hurryable_search(tester, collection_to_search):
  found = []
  complete = False
  try:
    for x in collection_to_search:
      if tester(x):
        found.append(x)
    complete = True
  except HurryException:
    pass  # Allow what may be a partial search
  return searchresult(found=found, complete=complete)

def hurry_up():
  # In a real system this would be raised asynchronously,
  # e.g. from a signal handler tied to a HURRY key; here it
  # simply unwinds to the nearest HurryException handler.
  raise HurryException()
Note that there are other possible implementations that, rather than raising an exception, might just dynamically adjust a parameter that controlled the degree of care taken on a search step. I'm not suggesting a specific strategy, just that such strategies exist.

And I could be wrong about the particular examples I've chosen being good ideas. Maybe they aren't the best illustrations. But I'm in a hurry to finish this essay and they were what I came up with in the time I allotted myself, before I told myself to just hurry up and go with what I had.

What I'm really trying to say is that I think the absence of a HURRY key on the keyboard means people mostly don't give a second thought to this question of whether programs should be possible to ask to “hurry up.” People just assume that if the key isn't there, the functionality is probably not needed from the keyboard. In effect, they assume the key is not there for a good reason. But maybe it's just an accident or a failure of imagination.

So, just to restate something I said already, but now that you have a bit more context: We spend a lot of time passively accepting that these metaphors we are offered, whether in terms of models of interaction or markings on a keyboard, are the right way to think about things. And we spend far too little time asking ourselves if they're the wrong metaphor or if some interesting metaphor has been omitted. Out of sight, out of mind, I guess.

It's a small point. But I figured if I said it out loud, you might end up passively accepting that it's something you really should think about.

 


Author's Notes:

If you got value from this post, please “Share” it.

The Suspend/Resume/Abort image is cropped from a photo I took of my own Lisp Machine's keyboard.

The HURRY image was generated at abacus.ai using Claude Sonnet 3.5 and FLUX 1.1 [pro] Ultra. Light post-processing was done with Gimp.

Sunday, January 12, 2025

Parallel Universes

My friend Probyn Gregory has been writing about his nervousness being near the fires in LA.

Probably a lot of people have.

I assume it helps people dissipate stress, or to feel less alone.

Fortunes might turn on a dime, so perhaps some want to leave a realtime record of what's going on, just in case they suddenly blink out of existence.

Some are poised to run, and want to leave hints about where they might be found in case they are delayed or blocked from their chosen destination.

Some probably want to communicate the urgency of dealing with Climate by personalizing the risk. It's too easy to think this happens only to other people. Within the US, it's often portrayed as something affecting only far away countries.

Today Probyn wrote:

“We may not live in Altadena but our lives are enmeshed in it. I see people just a mile or two away driving in rush hour presumably to work and it seems positively surreal, this facsimile of normal juxtaposed with what I feel inside, this aching sadness and not-quite-coming-to-grips.”

—Probyn Gregory on Facebook (Jan 12, 2025)

I think this is a metaphor for our nation.
And the world.

[An AI-generated image of a generic downtown LA street with normal life on one side of the street and devastating fires destroying the other side of the street.]

AI-Generated envisioning of a divided world: some experiencing collapse, the rest in denial.

It's not just our neighbors' world that is on the brink of collapse. We live in that same world.

Individually, some of us get it. But, collectively, as a society, we're still not quite coming to grips with how serious this is.

Time to wake up.

 


Author's Notes:

If you got value from this post, please “Share” it.

The image was generated at abacus.ai using Claude Sonnet 3.5 and FLUX 1.1 [pro] Ultra. The initial request was for “a generic part of downtown Los Angeles, looking down a street that divides the image into two parts. on the right hand side, show ordinary buildings and people casually moving among them, cars parked, cars driving normally, business as usual. On the right side, show parked fire engines, buildings in flames, a world in collapse.” Some post-processing was done both by the LLM and by me using GIMP.