Showing posts with label LLM.

Friday, October 25, 2024

Pretty Messed Up

I needed a graphic for another of my posts, so I asked an “AI” (really just a Large Language Model, or LLM).

Sketch a grayscale image of a wall calendar for november 2024.

This is what I got. I cropped it and reduced the resolution slightly.

A very pretty calendar that has a lot of wrong information on it.

It's pretty. But it's messed up.

  • The days of the week are not lettered correctly.
  • The date numbers are out of order, and some are duplicated.
  • The month starts on the wrong day of the week.
  • A month should have at most two ragged row lengths, one at the start and one at the end.
  • Underneath the month name at the top is a line that says something about Thanksgiving but blurs out what day it is.
  • It shows Thanksgiving on the 29th, which is impossible: Thanksgiving is the fourth Thursday of November, the latest possible first Thursday is the 7th, and three weeks after that is the 28th. (See the quick check after this list.)
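
For the skeptical, that last point is easy to verify. Here is a minimal Python sketch (mine, not anything the image generator ran; the function name is just illustrative) that finds the fourth Thursday of November using the standard calendar module:

import calendar

# US Thanksgiving is the fourth Thursday of November.
def fourth_thursday_of_november(year):
    weeks = calendar.monthcalendar(year, 11)            # one list per week, Monday first
    thursdays = [week[calendar.THURSDAY] for week in weeks
                 if week[calendar.THURSDAY] != 0]       # skip padding days from adjacent months
    return thursdays[3]

print(fourth_thursday_of_november(2024))                # prints 28

The same bound holds for any year: the latest a first Thursday can fall is the 7th, so the fourth Thursday can never land later than the 28th.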

This was done with Abacus.ai's Claude 3.5 Sonnet using Flux.1.

So I thought maybe I could just get Google to make me a calendar. I forgot somebody would want to sell me one. My search turned up these, among others. No wonder the first one is on sale. It doesn't have the right start date for the month either.

A display of two calendars offered for web purchase, where the first doesn't have the days in the right place but sells for a lot less.

I wonder if this is just business as usual or part of some disinformation campaign for the election.

 


Author's Notes:

If you got value from this post, please “Share” it.

Friday, October 18, 2024

Green AI

I don't believe in “Green AI.”

It's not that it's impossible to do the things people are calling Green AI. [A red circle with a red slash through it, with the letters AI in green behind the slash, indicating 'No Green AI'.] Rather, it's that I'm not willing to call those things “green.”

Some of the most technologically capable people in the world see the environmental challenge that Large Language Models (LLMs) pose and think “I should make a green data center for this new project.” Then they buy offsets—a horror I'm not going to address here—or they actually invest money to make a new and allegedly green data center.

The thing is, humans didn't—and don't—really need AI. Human society worked fine without it. And those technologists could be solving preexisting problems that are still there but now perceived as someone else's problem.

New ‘green data centers’ for AI represent both the creation and the solution of a problem that didn't exist. That leaves the world with as many problems as before, but with fewer technologists focused on the problems human society actually faces, because those technologists are resting on their laurels, as if solving problems that needn't have existed helped something other than their consciences.

AI and its associated effort carry a big opportunity cost, stealing from the pool of people who could be solving others' problems. Myriad companies around the world are diverting effort from what they normally do to explore how not to be left behind by AI. That effort and cost isn't solving the Climate Crisis either. It is plundering our best and brightest for noncritical problems.

Meanwhile Climate Change is killing us. We have real and immediate problems that LLM-style AI can't solve.

I say it can't because, as Chomsky so aptly puts it, it's a “plagiarism” engine. If, like me, you think Chomsky is right, then it's easy to conclude that if a solution were already out there to plagiarize, it could have saved us by now. LLMs are not performing the new and immediately trustable computation we need for Climate; they're just blurring and regurgitating already-existing, often already-tried, thought.

Makework and waste and distraction are the key elements here, and none of that is helping. And, yes, enormous resource use makes it worse. But my point is that the resources aren't just being spent on a problem we needn't have sought to solve, which would be bad enough; spending them there also steals human resources from problems we do need to solve.

There's a denialist belief that down the road things will pay off. But human civilization may not have that long to wait. The climate crisis is now. It will not wait. We need all hands on deck solving that, not distracted by a technology that, while intriguing, isn't yet mature enough to help.

Big Tech needs to solve existing problems, not make new ones, solve those new ones, and then collapse exhausted, leaving everyone else out here in the land of Little Or No Tech to solve the existing problems that were here in the first place, but without any help.

 


Author's Notes:

If you got value from this post, please “Share” it.

This post began as a post on Mastodon. I did light editing to re-host the essay here. Think of that one as a rough draft.

I created the graphic in Gimp, starting from a circle with a line through it that began as an SVG image that one of the chatbots at Abacus.ai made for me one day when I was exploring how to use it. The code for that is just:

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <!-- the red prohibition circle, centered in the 100x100 viewBox -->
  <circle cx="50" cy="50" r="45" stroke="red" stroke-width="10" fill="none" />
  <!-- the red slash, drawn from lower left to upper right -->
  <line x1="15" y1="85" x2="85" y2="15" stroke="red" stroke-width="10" />
</svg>

Monday, September 30, 2024

Confronting New Ideas

A matter of Life and Death

I've given some thought to the meaning of death as it applies to those who have posted frequently on the internet. We often don't see people's writings in the order that they write them, and that means we can see new posts from them after they die.

Even without the internet this happens. I was in a bookstore recently and saw a book by Michael Crichton and asked the shopkeeper, “Isn't this his third posthumous book?” “Yeah…” he sheepishly responded. Someone is plainly raiding his basement for rejected works and projects that were far enough along that someone else can complete them and claim to have been co-author. His heirs are probably happy for the income, even if the publishing timeline is confusing to some readers.

Perhaps it's even possible for a prolific writer to write so much that readers never really perceive them as dead because they just keep seeing new stuff. So in what sense are they dead? Most readers were perhaps never going to meet them, and so in some sense—of observables—these writers are doing the same things that live ones are.

The elusive nature of intelligence

The big thing dead authors cannot do is the same thing GenAI/LLMs cannot do: competently respond to a new situation, question, or idea.

Oh, sure, the prompt topic might be something someone has speculated on before, so these engines can regurgitate that. [image of a lit lightbulb overlaid by a red circle with a red line through it, indicating 'no ideas'] Or the topic might be similar enough to a previous idea that, by treating it as if it really were just that old idea, the engine has a high enough probability of guessing something acceptable to say that nobody scrutinizes whether the new topic was ever properly understood.

As I imagine—or perhaps just hope?—the makers of standardized tests like the SAT would tell you, there's more to competence than statistically guessing enough right answers to get a passing grade. The intent of such tests is not to say that if you know these things, you know the topic. It is to assume you have a mental model that lets you answer on any possible aspect of that model, and then to poke at enough randomly chosen places that you can hope to detect flaws in the model.

But these so-called AI technologies do not have a mental model. They just hope they've read enough standardized test preparation guides or pirated actual tests that they can fake their way. And since a lot of the things that they're claiming competence in are things that people have already written about, the technology manages to show promise—perhaps more promise than is warranted.

Real people build a mental model that allows them to confront not just the present but the future, while these technologies do no such planning. The models real people make probably hope the future is a lot like today, but people hopefully can't—and anyway shouldn't—get by on bluffing. Not the kind of bluffing today's “AI” tech does. That tech is not growing. It is dead. It has no plan for confronting a new idea other than to willfully ignore the significance of any real newness.

Just like my example of publication and death on the internet, the “AI” game is structured so it takes a long time for weakness to be recognized—unless just the right question is asked. And then, perhaps, the emperor will be seen clearly to have no clothes.

The dynamic nature of ethics

Which is also why it troubles me when I'm told that people are incorporating ethics into these systems. It troubles me because ethics itself has to keep growing, constantly asking itself, “How might I not be ethical?”

Ethics is not something you do on one day and are done with. Ethics is a continuing process, and one that needs its own models.

Worse, the need for ethics is easily buried under the sophistry of how things have always been done. The reason that bias and stereotypes and all that have survived as long as they have is that they do have practical value to someone, perhaps many people, even as they tread on the just due of others.

The sins of our society are deeply woven, and easily rediscovered even if superficial patches are added to hide them. Our whole culture is a kind of rationalization engine for doing things in biased ways based on stereotype information, and AI is an engine ready to reinforce that, operating at such high speed that it's hard to see it happening, and in such volume that accepting it as good enough is economically irresistible, no matter the risk of harm.

Where we're headed

Today's attempts at “AI” bring us face to face with stark questions about whether being smart is actually all that important, or whether faking it is good enough. And as long as you never put these things in situations where the difference matters, maybe the answer will seem to be that smart, in fact, doesn't matter. But…

There will be times when being smart really does matter, and I think we're teaching ourselves trust in the wrong technologies for those situations.

 


Author's Notes:

If you got value from this post, please “Share” it.

This post began as a post on Mastodon. It has been edited to correct myriad typos and to clarify and expand various portions in subtle ways. Think of that post as a rough draft.

The graphic uses a lightbulb drawn by abacus.ai's gpt-4o engine with flux.1. The original prompt was “draw a simple black and white image that shows a silhouette of a person thinking up an idea, showing a lightbulb near their head” but then I removed the person from the picture and overlaid the circle and slash ‘by hand’ in Gimp.

Sunday, October 29, 2023

Technology's Ethical Two-Step

[B&W sketch of a man in a ballroom dance with a robot that is wearing a dress.]

1. Now. Delay incorporation of ethics. Let’s not muddy the waters in a way that holds back Progress.

2. Later. Deny incorporation of ethics. It’s too late. People have come to rely on things as they were built. It would be Disruptive to change now.


If you got value from this post, please “Share” it.

I've said things vaguely like this for a long time, but I first packaged it up this crisply in a post on Mastodon, of which this is a mirror.

Original Keywords were described as: “Ethics, Tech, Technology, Society. Presently very relevant to, but not exclusive to: AI, ML, LLM, GPT, ChatGPT.”

I made a later edit to this post to add a graphic, ironically generated by Abacus.AI's GPT-4o ChatLLM chatbot calling out to FLUX.1. The prompt was "make me a black and white image that depicts sketch of two entities engaged in a ballroom dance, one a man and his partner a robot."