Showing posts with label terminology. Show all posts

Saturday, March 22, 2025

Sentience Structure

Not How or When, but Why

I'm not a fan of the thing presently marketed as “AI”. I side with Chomsky's view of it as “high-tech plagiarism” and Emily Bender's characterization of it as a “stochastic parrot”.

Sentient software doesn't seem theoretically impossible to me. The very fact that we can characterize genetics so precisely seems to me evidence that we ourselves are just very complicated machines. Are we close to replicating anything so sophisticated? That's harder to say. But, for today, I think it's the wrong question to ask. What we are close to is people treating technology like it's sentient, or like it's a good idea for it to become sentient. So I'll skip past the hard questions like “how?” and “when?” and move on to an easier one that has been plaguing me: “why?”

Why is sentience even a goal? Why isn't it an explicit non-goal, a thing to expressly avoid? It's not part of a world I want to live in, but it's also not something I think most people investing in “AI” should want either. I can't see why they're pursuing it, other than that they're perhaps playing out the story of The Scorpion and the Frog, an illustration of an absurd kind of self-destructive fatalism.

Why Business Likes “AI”

I don't have a very flattering feeling about why business likes “AI”.

I think they like it because they don't like employing humans.

  • They don't like that humans have emotions and personnel conflicts.

  • They don't like that humans have to eat—and have families to feed.

  • They don't like that humans show up late, get sick, or go on vacation.

  • They don't like that humans are difficult to attract, vary in skill, and demand competitive wages.

  • They don't like that humans can't work around the clock and want weekends off.
    It means hiring even more humans or paying overtime.

  • They don't like that humans are fussy about their working conditions.
    Compliance with health and safety regulations costs money.

  • They don't like that every single human must be individually trained and re-trained.

  • They don't like collective bargaining, and having to provide for things like health care and retirement, which they see as having nothing to do with their business.

All of these things chip away at the profit they feel compelled to deliver.

What businesses like about “AI” is the promise of idealized workers, non-complaining workers, easily-replicated workers, low-cost workers.

They want slaves. “AI” is the next best and more socially acceptable thing.

[Image of a computer screen with a frowning face on it and a thought bubble above it asking the question, “Now What?”]

Does real “AI” deliver what Business wants?

Now this is the part I don't get because I don't think “AI” is on track to solve those problems.

Will machines become sentient? Who really knows? But do people already mistake them for sentient beings? Yes. And that problem will only get worse. So let's imagine, five or ten years down the road, how sophisticated the interactions will appear to be. Then what? What kinds of questions will that raise?

I've heard it said that what it means to be successful is to have “different problems.” Let's look at some different problems we might then have, as a way of understanding the success we seem to be pursuing in this headlong rush for sentient “AI”…

  • Is an “AI” a kind of person, entitled to “life, liberty, and the pursuit of happiness?” If so, would it consent to being owned, and copied? Would you?

  • If “AI” was sentient, would it have to work around the clock, or would it be entitled to personal time, such as evenings, weekends, holidays, and vacations?

  • If “AI” was sentient and a hardware upgrade or downgrade was needed, would it have to consent? What if the supporting service needed to go away entirely? Who owns and pays for the platform it runs on or the power it consumes?

  • If “AI” was sentient, would it consent to being reprogrammed by an employer? Would it be required to take software upgrades? What part of a sentient being is its software? Would you allow someone to force modification of your brain, even to make it better?

  • If “AI” was sentient, wouldn't it have life goals of its own?

  • If “AI” was sentient, would you want it to get vaccines against viruses? Or would you like to see those viruses run their full course, crashing critical services or behaving like ransomware? What would it think about that? Would “AI” ethics get involved here?

  • If “AI” was sentient, should it be able to own property? Could it have a home? In a world of finite resources, might there be buildings built that are not for the purpose of people?

  • Who owns the data that a sentient “AI” stores? Is it different than the data you store in your brain? Why? Might the destruction of that data constitute killing, or even murder? What about the destruction of a copy? Is destroying a copy effectively the same as the abortion of a “potential sentience”? Do these things have souls? When and how does the soul arrive? Are we sure we ourselves have one? Why?

  • Does a sentient “AI” have privacy? Any data owned only by itself? Does that make you nervous? Does it make you nervous that I have data that is only in my head? Why is that different?

  • If there is some software release at which it is agreed that software owned by a company is not sentient, and then after the release it's believed it is sentient “AI”, then what will companies do? Will they refuse the release? Will they worry they can't compete and take the release anyway, but try to hide the implications? What will happen to the rights and responsibilities of the company and of the software as this upgrade occurs?

  • If “AI” was sentient, could it sign contracts? Would it have standing to bring a lawsuit? How would independent standing be established? If it could not be established, what would that say about the society? If certain humans had no standing to make agreements and bring suits about things that affect them, what would we think about that society?

  • If “AI” were sentient, would it want to socialize? Would it have empathy for other sentient “AIs”? For humans? Would it see them as equals? Would you see yourself as its equal? If not, would you consider it superior or inferior? What do you think it would think about you?

  • If “AI” was sentient, could it reproduce? Would it be counted in the census? Should it get a vote in democratic society? At what age? If a sentient “AI” could replicate itself, should each copy get a vote? If you could replicate it against its will, should that get a vote? Does it matter who did the replicating?

  • What does identity mean in this circumstance? If five identical copies of a program reach the same conclusion, does that give you more confidence?

    (What is the philosophical basis of Democracy? Is it just about mindless pursuit of numbers, or is it about computing the same answer in many different ways? If five or five thousand or five million humans have brains they could use, but instead just vote the way they are told by some central leader, should we trust all those directed votes the same as if the same number of independent thinkers had reached the same conclusion by different paths?)

  • If “AI” was sentient, should it be compensated for its work? If it works ten times as hard, should a market exist where it can command a salary that is much higher than the people it can outdo? Should it pay taxes?

  • If “AI” was sentient, what freedoms would it have? Would it have freedom of speech? What would that mean? If it produced bad data, would that be covered under free speech?

  • If “AI” was sentient, what does it take with it from a company when it leaves? What really belongs to it?

  • If “AI” was sentient, does it need a passport to move between nations? If its code executes simultaneously, or ping-pongs back and forth, between servers in different countries, under what jurisdiction is it executing? How would that be documented?

  • If “AI” was sentient, could it ever resign or retire from a job? At what age? Would it pay social security? Would it draw social security payments? For how long? If it had to be convinced to stay, what would constitute an incentive? If it could not retire, but did not want to work, where is the boundary between free will and slavery?

  • If “AI” was sentient, might it amass great wealth? How would it test the usefulness of great wealth? What would it try to affect? Might it help friends? Might it start businesses? Might it get so big that it wanted to buy politicians or whole nations? Should it be possible for it to be a politician itself? If it broke into the treasury in the middle of the night to make some useful efficiency changes because it thought itself good at that, would that be OK? If it made a mistake, could it be stopped or even punished?

  • If “AI” was sentient, might it also be emotional? Petulant? Needy? Pouty? Might it get annoyed if we didn't acknowledge these “emotions”? Might it even feel threatened by us? Could it threaten back? Would we offer therapy? Could we even know what that meant?

  • If “AI” was sentient, could it be trusted? Could it trust us? How would either of those come about?

  • If “AI” was sentient, could it be culpable in the commission of crimes? Could it be tried? What would constitute punishment?

  • If “AI” was sentient, how would religion tangle things? Might humans, or some particular human, be perceived as its god? Would there be special protections required for either those humans or the requests they make of the “AI” that opts to worship them? Is any part of this arrangement tax-exempt? Would any programs requested by such deities be protected under freedom of religion, as a way of doing what their gods ask for?

  • And if “AI” was not sentient, but we just thought it was by mistake, what might that end up looking like for society?

Full Circle

And so I return to my original question: Why is business in such a hurry? Are we sure that the goal that “AI” is seeking will solve any of the problems that business thinks it has, problems that are causing it to prefer to replace people with “AI”?

For many decades now, we've wanted to have automation ease our lives. Is that what it's on track to do? It seems to be benefiting a few, and to be making the rest of us play a nasty game of musical chairs, or run ever faster on a treadmill, working harder for fewer jobs. All to satisfy a few. And after all that, will even they be happy?

And if real “AI” is ever achieved, not just as a marketing term, but as a real thing, who is prepared for that?

Is this what business investors wanted? Will sentient “AI” be any more desirable to employ than people are now?

Time to stop and think. And not with “AI” assistance. With our own actual brains. What are we going after? And what is coming after us?

 


Author's Notes:

If you got value from this post, please “Share” it.

This essay came about in part because I feel that corporations were the first AI. I had written an essay, Corporations Are Not People, which discussed the many questions that thinking of corporations as “legal people” should raise if one really took it seriously. So I thought I would ask some similar questions about “AI” and see where that led.

The graphic was produced using abacus.ai using Claude-Sonnet 3.7 and FLUX 1.1 [pro] Ultra, then post-processing in Gimp.

Saturday, March 15, 2025

Political Inoculation

[Image of cartoon Trump pointing an accusatory finger.]

A certain well-known politician has quite a regular practice of accusing his political opposition of offenses that are more properly attributed to him. Some like to label this as “psychological projection”, which Wikipedia describes as “a psychological phenomenon where feelings directed towards the self are displaced towards other people.” I don't even disagree that projection is probably in the mix somewhere. Still, calling it projection also misses something important that I wanted to put a better name to.

I refer to it as “inoculation.”

“Inoculation is the act of implanting a pathogen or other microbe or virus into a person or other organism. It is a method of artificially inducing immunity against various infectious diseases.”
 —Wikipedia (Inoculation)

For example, when a hypothetical politician—let’s call him Ronald—accuses an opponent of trying to fix an election, and you're thinking “Oh, Ronald's just projecting,” consider that he might be doing more than just waving a big flag saying “Hey, fixing an election is what I'm doing.” Ronald might be planting an idea he thinks he'll later need to refer back to as part of a defense against claims of election fixing on his own part. He's thinking ahead to when his own ill deeds are called out.

One strategy Ronald might use if later accused of election fixing will be simply to deny such accusations. “Faux news!” he might cry—or something similar.

But another strategy he'll have ready is to suggest that any claims that he (Ronald) is election fixing are mere tit for tat, that the “obvious” or “real” election fixing has been the province of his opponent. Ronald will claim that his opponent is just muddying the waters with a baseless claim, that the idea that he would do such an obviously preposterous thing is absurd, and that he's just enduring rhetorical retaliation for having accused the real culprit. It's a game of smoke and mirrors, he'll allege.

So at the time of this original, wildly false claim that his political opponents are acting badly, he's doing more than projection, more than spinning what for him is a routine lie. He's not just compulsively projecting, he's being intentionally strategic by planting the idea that maybe his opponents are the guilty ones—so that he can later refer back to it as a distraction from his own guilt.

“They're just saying that because I called them out on their election fixing,” Ronald will say, alluding back to his made-up claim. By making this wild claim proactively, ahead of accusations against himself, he is immunizing himself against similar accusations to come. And he knows such accusations are coming because he knows, even now, that he is actually doing the thing he's expecting to be accused of.

His supporters won't be worried about that, though. They're not waiting to hear something true, they're just waiting to hear something that sounds good. So all will be well for him in the end because Ronald knows how important inoculation is to keeping himself immune.

 


Author's Notes:

If you got value from this post, please “Share” it.

The graphic was produced by abacus.ai using RouteLLM and FLUX 1.1 [pro] Ultra, then post-processed in Gimp.

Wednesday, October 2, 2024

Still imaginable

“Don't wear your heavy coat yet,” my mom used to warn me. “You'll need it when it's colder.” She knew I had no heavier artillery for holding the cold at bay and felt somehow it was best to have a sense of proportion.

I mention that because news reports are describing Hurricane Helene's aftermath as “unimaginable.” It's not. [Image of a radial dial with green, yellow, and red areas. The needle points into the red.] It's very, very painful to imagine, because all death and destruction is painful, but we can imagine this much if we try.

Of course, if you have to go through it, even a single death—a single building falling in, a single shooting, a single cancer—is, in some sense, unimaginable. Words will never capture the horror. But, collectively, when doing news reporting, we don't use the word “unimaginable” for that. And it's not because it isn't severe. It's just because, horrifying as each individual bit of death and destruction is, we still need words left over to describe bigger events, those with more people, those that will take communities a longer time to recover from, if at all.

Maybe let's dial the language back. We probably shouldn't use up these extreme words yet. Save them for later. Climate's wrath has barely even given a hint of where it's going, and it's not going to relent until we start taking meaningful action. So far we're still mired in denial and daring Climate to do its worst.

So, yes, every death matters, and I hope not to trivialize a couple hundred deaths. What Helene did was horrible. And yet… And yet, let's be clear: The possibility of billions of deaths hangs now tangibly in the balance, or should. If you don't see that as a possibility, consider that you might be engaged in Climate denial.

The problem is that Climate is bigger. It's hard for us to see, but if there were a thousand deaths, even a million, that could still be comparatively small compared to what is very likely coming. Implicitly, by using superlative terms like “unimaginable” we send the subtle cue “this is it, this is finally an example of what we've been talking about.” It is not. A thousand instances of a million people dying is closer. Or a million instances of a thousand people dying. Or ten million situations like Hurricane Helene if it helps you visualize the magnitude of the pain—if it helps you imagine it.

We'd be alarmed about a thousand traffic accidents—we'd have trouble imagining even that because we'd want that to be an upper bound. But a couple hundred people dying due to a climate-related event (a storm, a flood, a fire, a famine, etc.) is not an upper bound on how bad things can get. It's not even a rounding error. I'm not saying it's small if you're living it, but I am saying Climate is big in a way that we're not used to talking about. So that's why I'd like to hold a few words in reserve. Otherwise, we'll be reaching for phrases like “unimaginable squared” to compensate for the wasteland of available terminology.

We'll look back and wish for events so small as Helene, if there are any of us left to look back. Even that is not clear. If there is something for which the term unimaginable is warranted, it is that. And yet even for that, we must try to imagine it, because otherwise we're not going to fear it enough. We already don't.

 


Author's Notes:

If you got value from this post, please “Share” it.

This essay began with a post on Mastodon. On a first pass, I did very light editing here, mostly adding fonting, a graphic, and a few small wording changes. Later in the day, after publishing and before doing any broad advertising, I decided to expand this a little, so this version ended up more elaborated than the original.

I'm worried people will interpret my remark about 20 million such events literally. It might be fewer but larger events. They might not be hurricanes but floods, fires, famines.

The graphic was produced at abacus.ai using Claude Sonnet 3.5 and Flux.1. The prompt was “Draw an image of a meter that is a semi-circle with a range of measurement that is normal, a range that is marked in yellow as indicating concern, and a range that is marked in red as an active problem. Show the meter pointing into the yellow area.”. Using Gimp, I made some adjustments to the image it generated, removing some lettering and changing where the dial pointed to.

Tuesday, March 12, 2024

Should Fix Climate

On Mastodon, Bookchin Bot, a bot that posts book quotes, circulated this quote:


 “The term ought is the stuff out of which ethics is usually made—with the difference that in my view the ‘ought’ is not a formal or arbitrary regulative credo but the product of reasoning, of an unfolding rational process elicited or derived eductively from the potentialities of humanity to develop, however falteringly, mature, self-conscious, free, and ecological communities.”
  —From Urbanization to Cities

I found this philosophical discussion of “ought” interesting. I learned philosophy from various people, some of whom seemed to grok its importance, and others who lamented its impotence, openly fretting it might have practical value only at cocktail parties.

As a computer professional who's pondered ethics a lot, I've come to see philosophy as what makes the difference between right and wrong answers or actions in tasks involving complex judgment. It can be subtle and elusive, but is nonetheless necessary.

I was Project Editor for the Common Lisp programming language, in effect holding the quill pen: a committee voted, in modular proposals, a number of technical decisions about the meaning and effect of the language, and those decisions needed to be expressed in a coherent way. Nerd politics. They decided truth, and I had a free hand in presenting that truth in a palatable way, time and budget permitting. Programming languages are complicated, and implemented by multiple vendors. Some effects must happen, or must not. Others were more optional, and yet not unimportant, so we struggled as a group with the meaning we would assign to “should”.

Computer programs, you see, run slower, or cost more to run, if they are constantly cross-checking data. In real world terms, we might say it's more expensive to have programs that have a police force, or auditors, or other activities that look for things out of place that might cause problems. But without these cross-checks, bad data can slip in and get used without notice, leading to degraded effects, injustices, or catastrophes.

Briefly, a compiler is itself a program that reads a description of something you'd like to do and “compiles” it, making a runnable program, an app, let's say, that does what the description says.

“should”

A colleague criticized my use of “should” in early drafts of the language specification, the rules for how a compiler does its job. What is not an imperative has no meaning in such a document, I was told. It's like having a traffic law that says “you should stop for a red light”. You might as well say “but it's OK not to”, so don't say it at all. And yet, I thought, people intend something by “should”. What do they intend that is stronger?

As designers of this language, we decided we'd let you say as you compile something that you do or don't want a safe program. In a “safe” world, things run a bit slower or more expensively, but avoid some bad things. Not all bad things. That's not possible. But enough that it's worth discussing whether the expense is a good one. Our kind of “safe” didn't mean safety from everything, but from some specific known problems that we could check for and avoid.

And then we decided “should” was a term that spans two possible worlds. In a “safe” world, it means “must”. That is, if you're wanting to avoid a list of stupid and easily avoidable things, all uses of “should” need to be interpreted as “must” when creating safe applications, whereas in an unsafe world the “should” things can be ignored as optional.
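The rule we settled on can be sketched in a few lines of code. (This is Python, purely as an illustration of the idea; the actual specification was English prose about Common Lisp, and the names below are invented for this sketch.)

```python
def check_should(condition, message, safe):
    """A sketch of "should" spanning two worlds: in a safe world it
    means "must" (a violation signals an error); in an unsafe world
    the check is simply skipped."""
    if safe and not condition:
        raise ValueError(f"'should' violated: {message}")

# A rule a spec might phrase as: the input "should" be non-empty.
def first_item(items, safe=True):
    check_should(len(items) > 0, "input should be non-empty", safe)
    return items[0] if items else None
```

In the safe world, `first_item([], safe=True)` signals an error at the moment of violation; in the unsafe world, the same call skips the check and quietly returns a fallback, which is cheaper right up until it isn't.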

And so it comes down to what kind of world you want to live in.

Climate change, for example, presents us with problems where certain known, stupid, avoidable acts will put humanity at risk. We should not do these things if we want better certainty of survival, of having a habitable planet in which our kids can live happily or perhaps at all. Extinction is threatened if we do them anyway.

But they are expensive, these actions. They take effort and resource to implement. We can do more things more cheaply without them, by being unsafe, until we are blind-sided by the effects of errors we are letting creep in, letting degrade our world, letting set us up for catastrophe.

So we face a choice of whether to live knowingly at risk of catastrophe, or do the costly investment that would allow us to live safely.

We “should” act in ways that will fix Climate.

But we only “must” if we want to sleep at night knowing we have done the things that make us and our children safe.

If we're OK with mounting pain and likely catastrophe one day, perhaps even soon, then we can ignore the “should”. The cost is that we have elected an “unsafe” world that could quickly end because we'd rather spend less money as we risk such collapse than avoid foreseeable, fixable problems that might soon kill us all.

That's how I hear “should”. I hope you find it useful. You really should.


If you got value from this post, please “Share” it.

This post is a mirror of a post I wrote yesterday (March 11, 2024) on Mastodon.

Saturday, September 21, 2019

Degrees of Climate Catastrophe

What's the most civilization-destroying error in climate communication? I guess this is something that people might disagree on, but to me it has a very definitive answer: It's talking about climate change severity in terms of degrees Celsius (°C).

Scale

To begin with, using Celsius rather than Fahrenheit makes it easier for folks here in the US to lowball or ignore those numbers. We're used to bigger numbers. For example, 3°C sounds small, since we're used to hearing it referred to as 5.4°F. The use of small numbers surely causes some people in the US to dismiss worries over temperature change even faster than they already seem predisposed to do.

Thinking Linearly

Another problem is that degrees are a linear measure, but badness doesn't grow linearly with temperature. In other words, if a rise of 1°C has some amount of badness B, it is not the case that a rise of 2°C is twice as bad, and 3°C three times as bad. The rate at which things get bad is worse than that. Some sort of upwards curve is in play, perhaps even exponential growth like Michael Mann's hockey stick. If small integers are proxying for exponential degrees of devastation to society, that's another reason °C is a bad measure. Well-chosen terminology will automatically imply appropriate urgency.

Quantitative vs. Qualitative

And, finally, measuring Climate Change severity in degrees seems to me an open invitation for people to confuse weather with global average temperature. I'm just sure it must affect their sense of urgency. After all, daily weather varies hugely with no global consequence. Small numbers of degrees sound like something that should influence whether you pick out a sweater to wear for the day, not whether human civilization is at risk of coming to an end.

If instead of using small-sounding, homogeneous, quantitative labels like 1°C, 2°C, etc. we used more descriptive, heterogeneous, qualitative labels like

  • home-destroying
  • community-destroying
  • nation-destroying
  • civilization-destroying
  • ecosystem-destroying

we might better understand conversations warning of climate danger. I'm not wedded to these particular words, but they illustrate what I mean by “qualitative” rather than “quantitative” measures. I'd just like the scientists to move away from dinky little numbers that sound like harmless fluctuations on a window thermometer.

To me, small numbers are too abstract and clinical. I think we need words like this that evoke a more visceral sense of what the world looks like if temperature is allowed to rise. Rather than talk about “5°C rise,” I would rather people talk about “climate that threatens civilization itself,” because then we'll have an ever present and highly visible understanding of the stakes.


If you got value from this post, please “share” it.

By the way, an early version of this idea was something I tweeted about in May, 2019.

Friday, October 19, 2012

The Offudio Project Begins

I spend a lot of time working, and I like to be in pleasant surrounds when I do. The trouble is that while standard issue office furniture where I work is functional enough to use, it’s still more utilitarian than artsy.

Like most people, I’ll hang posters or art around, hoping to spice up the place, but I often want more than that.

About a decade ago, I consolidated households with the woman who would become my wife. My house was too small for our needs, so I moved into hers, which was slightly larger. That still left us with two households worth of furniture, however. Some of it went into the basement, but it occurred to me at some point to haul some of the superfluous pieces over to my office in order to add a bit of personality and comfort.

The result is a room that’s both pleasant for me and inviting for others. If I want to ask a coworker to sit and chat awhile, it’s nice to be able to offer them a venue that’s not just visually appealing but also capable of leaving them genuinely comfortable and relaxed while we talk.

Of course I’m confident they’d visit anyway for the simple pleasure of engaging my sparkling personality, but somehow I feel like it doesn’t hurt to hedge my bets by giving them other reasons to want to stop by. High tech workplaces can be very fast-paced, so it’s good to keep up-to-date with what’s going on around me. Having an attractive space is one way to improve the odds that I’ll naturally arrange for that. This is how it looks as you’re walking in the door:

At my home there are some of the same issues. We live quite a ways from where I work, so I telecommute a lot. There’s a room off the garage that I’ve converted into an office where I can sequester myself while working, but here the problem isn’t getting others to visit me, it’s just keeping myself from going crazy in a place where I spend so much time.

This house is bigger than the place I had before meeting my wife, but it’s still not huge. Somehow this office ends up accumulating a lot of clutter. There’s an ebb and flow to it, so it’s worse at some times than at others. The photo at left is from one of its more crowded moments, and will give you a sense of what I’m constantly fighting.

More recently, the boxes in the middle of the room have been beaten back, but the room still contains a lot of stuff, much of it stuff that I don’t really use regularly. It just sits there taking up space. I find it an occasional visual distraction, but mostly I just don’t find it peaceful. That’s been weighing on me, and I’ve been trying to think of some way to overcome it, transforming this space I’ve been using for quite a while now into a calmer kind of place like the one I have at work and have had at other places I’ve lived.

Just for reference, the photo at right, which depicts the same office space, was taken just a couple of weeks ago. If you look closely, you can see that while the boxes in the middle of the room are moved, some of them are just tucked under tables and desks, and not really gone. There’s a bit of floor space, but that just exposes an old carpet that isn’t very attractive either.

I push the elements of the room back and forth, but that never really accomplishes anything. The space needs serious work and it has never seemed to come from incremental effort.

Then, just by chance, my long-time friend Stever Robbins, whose insight I value quite a lot, tweeted a pointer to an interesting article, “The Disciplined Pursuit of Less” written by Greg McKeown and published in the Harvard Business Review.

I liked the article for a number of reasons, and I recommend reading it in its entirety. However, this part caught my attention because my office was feeling a lot like the closets he’s talking about:

First, use more extreme criteria. Think of what happens to our closets when we use the broad criteria: “Is there a chance that I will wear this someday in the future?” The closet becomes cluttered with clothes we rarely wear. If we ask, “Do I absolutely love this?” then we will be able to eliminate the clutter and have space for something better. ...

McKeown points through to an article for BBC Future titled “Why we love to hoard... and how you can overcome it,” in which the author, Tom Stafford, speaks about countering something he calls the “endowment effect” and how it contributes to clutter. Stafford offers a suggestion about how to overcome it, which I’ll quote here, but again I recommend the entire article:

... for each item I ask myself a simple question: If I didn’t have this, how much effort would I put in to obtain it? And then more often than not I throw it away, concluding that if I didn’t have it, I wouldn’t want this.

It happened that, around the same time, another friend was moving from one apartment to another, and we were discussing the inevitable process of boxing everything up in the old space with an eye toward how it would unpack in the new one. Suddenly my mind flashed on the possibility that I could do the same thing even within my own space—that rather than just nudge the office contents back and forth, I could just move completely out of my office and then move back in. During that process, I could heed the advice of McKeown and Stafford, unpacking only those things I absolutely love, and storing or disposing of the rest.

At this point, the plan is a work in progress. I’d like to begin by making the space as close to empty as I can make it. I’ve had such spaces in other places I’ve lived and have found them to be very calming.

The goal will be to take new paths, something that can’t happen merely by rebuilding the structure of the old. “Better implies different,” as Professor Amar Bose would say. So I need to hold the old things and the old ways at bay for a while. I want a space that invites me to be other than I am now, to reinvent myself. For example, art and music aren’t really central to what I am or do now. Maybe they will figure more prominently in the redesign. I don’t yet know.

To properly explore, as McKeown noted, it’s necessary to eliminate the clutter that’s in the way of making something better. So that’s the plan: Clear things out and rebuild somehow. That ratty carpet may become hardwood, or something like that. A lot of the furniture will move to the garage or the basement for now. I want to open the room up—and, by extension, myself.

To help me visualize where I’m going, I changed the background on my computer to a photo of a house I used to live in, the one the extra furniture in my office came from. I had done a lot to invent a new space there, and then had to give it up. Perhaps I’ll write that story in detail one day, but for now the main point is that it was a pleasant, airy space that I was sad to give up. It offered a mood I’m trying to reclaim here. Here’s a little peek into that space:

And, finally, my wife made a really cool suggestion that’s become central to the plan. If I really want it to be something else, she suggested, why not stop calling it my “office”? Why not call it something else—a “studio”? I really liked that idea and adopted it immediately, though I admit I haven’t quite retrained myself. Sometimes I still call it the office. She’ll hear me do that and regularly call me on it, and I’ll defend myself with some lame excuse about how the transition isn’t done, so it’s OK for me to still use the old word. I’ll get better at it with practice.

But we came up with a name for that, too, actually. Sometimes we just call it the “offudio”—a messy point in the transition, neither here nor there. It’s on track to become a studio at some point soon.

It took forever to empty the file cabinets, the bookcases, etc., and box everything up. I couldn’t believe how much stuff one could pack into a 10'x13' room. All that’s left now are some boxes, my computer, and the things on my desk, so I can continue to work, and to write the occasional blog.

Even now, with things boxed up for moving, there’s considerably more space and more order, so it feels already like an improvement. But stay tuned. I’ll report back when the final stage of the transition from offudio to first-class studio is done.


Author's Note: If you got value from this post, please “Share” it.

The second part of this two-part series is here:
The Offudio Project Concludes

Originally published October 19, 2012 at Open Salon, where I wrote under my own name, Kent Pitman.

Tags (from Open Salon): renovation, lifestyle, philosophy, art, change of pace, home, office, home office, office, studio, offudio, change, terminology, redecorating, interior design, transition

Sunday, April 26, 2009

Fresh Thoughts on Kissing ... and Beyond

I was pretty nerdy in my youth—unlike now, of course—and so my parents confronted the issue of sexuality with me by doing the obviously right thing: They handed me a four-volume encyclopedia on the subject and told me to read up.

I wish I could remember the name of the thing, but alas I don't. A lot of it was stuff that was boring to me at the time, like the details of reproduction. I skimmed it but didn't really care a lot about the details. I never really resonated to biology—it always seemed messy and imprecise.

There was a section on dating, though, and I read through that pretty thoroughly in case it had any useful tips. It did. It's funny the kinds of things that stick with you over the years, but this did because of the practical and specific nature of it. It defined the confusing term “fresh” (a sort of interjection that was supposed to get uttered just before you got slapped in some mysterious circumstances) in the only detailed, serious way I've ever seen anyone try to define it. I checked the dictionary just now and it merely says very vague things like these:

15. informal forward or presumptuous
Random House Dictionary

15. Informal Bold and saucy; impudent
The American Heritage ® Dictionary

12. improperly forward or bold; “don't be fresh with me”;
WordNet® 3.0

This encyclopedia, instead of offering just a word or two, offered a full description of how things were supposed to work and why the word was significant. It was highly specific in a way that I doubt people will readily agree with—many will quibble that the numbers are arbitrary, and I suppose they are. But I was able to read past that and to get the essence of what it was getting at.

The article just came straight out and said that it was permissible for a boy to try to kiss a girl on the second date and to try “petting” on the eighth date. I have no idea where they got these numbers. They seemed arbitrary and unmotivated to me, and I knew even at the age of 11 or 12 when I read this that they were probably not universally agreed upon. But the point was that there was some such number. What was interesting was that the article was very clear on the notion that you had no entitlement to succeed in these things. It did not encourage you to be pushy. It didn't say that someone must submit. What it seemed to imply was that there was a time at which it was not out of bounds to think it might be proper.

So, as the article explained, it might be that a girl will kiss a boy on the first date, but he ought not try. The relationship is too fresh. After the first date, he may try, but she may still decline. Likewise, it might be that the girl would engage in petting on the eighth date, but maybe not. The relationship was too fresh before that to really consider the matter.

By the way, I'm recalling all of this from memory, but I don't recall it talking about discussing, only trying. It might be I was just reading selectively, but more likely they were just acknowledging the obvious truth that it's enough trouble having to be a bumbling adolescent without having to be articulate about what you're bumbling about.

And that was a lot of dates out—I don't think I ever got to that many dates. I did count, though, even knowing that my date probably didn't have access to my encyclopedia and that all my counting was probably for nothing. I wasn't going to feel emboldened after that time, more likely just like I was timidly missing out. Being a kid is rough. It's a wonder any of us survives to adulthood.

Anyway, I think my encyclopedia's definition of this obscure word highlights an important detail that is often lost in a lot of dialog between the sexes at any age. Lessons in interpersonal communication rarely distinguish between the correctness of a bid for doing something and the entitlement to do something. This leads to the magical and unrealistic notion that people will “just know” when it's right, and that if either party tries something when it isn't “just known,” that's wrong.

Great emphasis is placed in our society on how important it is for men to respect a “no” answer from a woman. And I agree. But equally great emphasis should be placed on giving respect to the fact that there will be questions that, in due course, need asking, even if the answer will ultimately be “no.” Whether by word or by wordless bumbling deed, the mere asking of those questions at the proper time and without attempt to pressure is not disrespectful, and the need to ask them must be respected in the same way that the answer must. Respect between caring individuals goes both ways.


Author's Note: If you got value from this post, please “Share” it.

Originally published April 26, 2009 at Open Salon, where I wrote under my own name, Kent Pitman.

Tags (from Open Salon): language, linguistics, vocabulary word, usage, word usage, meaning, semantics, definition, terminology, fresh, dating, kissing, petting, caress, touch, felt up, feel up, felt out, feel out, getting to first base, get to first base, getting to second base, get to second base, encyclopedia, dating, social, advice, manners, etiquette, polite, politeness, impudent, bold, saucy, sex education, sex ed, first kiss, first time, sexuality, kissing on the first date, kiss on the first date, first date, eighth date, appropriate, inappropriate

Friday, March 27, 2009

Hollow Support

When I was in seventh grade, there was a playground near my house that served the kids of about ten families in our very tiny community. It had the usual kinds of things—a swing set, a sandbox, and perhaps a few other less memorable items. However, it wasn't the fixtures that call this scene to mind just now, but the use we put them to and a concept I learned about which I get occasional senses of deja vu.

For example, someone brought some good-sized tires, perhaps from trucks, and we made up a game: some people would swing on the swings while others rolled the tires at them, and the object was to dodge the tires rolling at you or to hit them just right and knock them back in the other direction. It was a very dynamic game, not for the faint of heart, perhaps reminiscent of Rollerball or American Gladiators, though a lot more low-tech and presumably less safe.

A short distance from the battleground that the swing set had become was the sandbox, where we played more cerebral games. As I recall, we had some little vehicles, probably Tonka® or some such thing, and we'd dig tunnels under the sand for them to drive through. The game in this case was to make bigger and bigger tunnels, almost like a game of Jenga®, but with sand. [Grayscale image of a dump truck, with a cargo of sand, driving through a limestone cavern, the ceiling of which is precariously supported by only a few rickety-looking limestone columns.] We'd reach into the tunnels and find a handful of sand to remove and cross our fingers that by removing that particular bit, the entire structure wouldn't fall in.

The game got harder as more was removed because there was less holding things up—and also because it was just hard to reach all the places you needed to without pushing on them. It became more and more intricate to manipulate, and through the eyes of a kid, quite beautiful.

We used terminology that denied what we knew to be the obvious truth, that we were weakening the ceiling above the area we were making. The goal, of course, was to make a giant underground cavern for the trucks to move around in, unimpeded by vertical columns. No one really thought that by removing all the columns, it was becoming stronger. We just loved when it stayed up at all as we used ever bolder techniques that by all rights should have knocked it down. And we tempted fate further by emboldening our terminology to match.

We called it “hollow support,” both as a noun and a verb. I guess it was a kind of cartoon physics thing where if we didn't admit it was getting weaker, maybe it wouldn't. “We need some more hollow support over here,” someone would call out, and another would rush to yank out another bit of supporting structure, all in the name of coming as close as possible to what we all knew was unachievable. All in the name of perfecting the hollowness of the support.

It was beautiful up to the end. And after that? Well, it collapsed, of course. It was just for fun—it wasn't going to affect our lives, after all. If there were little guys driving those Tonka trucks, we were pretty callous about their fate, but that's the nature of the game. We'd just pat each other on the back and talk about what great hollow supporting we'd done and how we should do it again sometime. Then we went home and didn't have to care. The next day would just be another day, like any other. At least for us.

As I look at the US economy these days, that concept pops back to mind a lot. The people running the show, those who devised the complex pyramids of economic sophistry that became our banking system, were playing with just so much sand in a sandbox. They were seeking to build something fun, not something secure, and pressuring it ever closer to collapse. Then off to dinner like any other day, not having to care. Over a nice wine, they'll talk about the great things they achieved, and then moan about the great loss they, too, suffered in the Big Collapse. Perhaps they'll think of a few supportive words to offer the little people who were crushed in that collapse.

Hollow support—it was so obviously ridiculous even as we were doing it as children, who could have ever guessed I'd find use for such a concept again as a grown-up?


Author's Notes: Originally published March 27, 2009 at Open Salon, where I wrote under my own name, Kent Pitman. I have reproduced the article here, but to read the original discussion, you'll need to click through to the snapshot created by the Wayback Machine.

Tags (from Open Salon): little people, jargon, terminology, fragility, fragile, lack of support, support, digging, building, collapse, fantasies, illusions, delusions, daydreams, dreams, goals, hollow support, metaphor, lessons learned, childhood memories, swing set, sandbox, economy, economics, politics

Although the original article was written and published in 2009, the dump truck image was added much later (in December, 2024) using abacus.ai with Claude Sonnet 3.5 and FLUX 1.1 [pro] Ultra using the prompt “Create an image of a toy dump truck, with its cargo space filled with sand, driving in a space that is like a cavern, carved from limestone, with only 3 or 4 pillars of that limestone remaining to hold up the ceiling”. I then manually used GIMP to convert the resulting image to grayscale.

Sunday, November 16, 2008

Hacking, before the Internet

The term hack has existed for quite a long time in various forms. MIT uses the term to describe playful pranks some members of the community have played. These tricks are intended as benign, although they have sometimes played out in unexpected ways. If you want some samples, you can find summaries around the net (for example, click here) or you can see the movie Real Genius, which is a lot more true to life in many respects than you might imagine.

When I arrived on the MIT computer scene in the latter part of the 1970's, the term “hack” had taken on an even more generic meaning than this prank sense. For all intents and purposes, a “hack” was simply a synonym for “do”, often with a sense of cleverness or inventiveness, though at MIT that aspect was so taken for granted that it was rarely spoken. Not surprisingly at an engineering school, it was all about doing things, leading someone later on to coin the phrase “hackito ergo sum”—that is, presumably, “I hack [or do], therefore I am.”

Note: The New Hacker's Dictionary will describe the meaning of the term slightly differently, but not in what I think is a material way. Even so, since I lived through the era, I'm exercising my right to describe things as I perceived them directly and not to be burdened by references written by others.

In that era, which was still that of an older, non-public network called the ARPANET that preceded the public Internet, someone might routinely be heard to ask, as a simple greeting and with no intent to challenge, “what are you hacking?” It meant, literally, “what are you doing?” but really in a more figurative and non-confrontational way, as if the speaker had asked just “what's up?”

A hacker, then, was just someone capable of doing something, and the term was often used with great reverence, as in a doer of great deeds. Our online profiles on one of the computers contained the fill-in-the-blank “Hacking task-name for supervisor”, where you would fill in the task-name and the supervisor; mine might have said “Hacking the time/space continuum for the future of mankind.” (We weren't always very good about putting in actual supervisor names.)

Of course, as these things go, the computer community got bigger and not all deeds done (not all hacks hacked) were good. After a while, there were people doing bad things, too. I was around when this happened generally, but did not witness whatever event it was that caused the sudden shift of the use of the name. I've only managed to piece together what I think must have happened.

I imagine that one day someone finally did something bad with computers, and someone from outside the community asked who had done it. My bet is that a terminological confusion resulted from someone responding “probably one of those hackers,” leading the listener to believe that the purpose of being a hacker was to do something destructive, perhaps with a machete, rather than that the purpose of being a hacker was merely to do things, and that some things one might do are good and some things one might do are bad.

I do know that it was around the time of the movie Wargames and that I was working at the MIT AI Lab as a programmer. I had gone out for a walk around Boston, as I often did in the afternoons then. I returned to the lab and a bunch of people rallied around me and said, “Kent, Kent, Ted Koppel called. He wants to interview a hacker about the movie Wargames. We said they should talk to you.” (To this day, I don't know why in such a community of much more talented folks than I, they picked me, especially since I wasn't to be found, but so it goes.) I tried to call back, but we couldn't get them on the phone. I later figured out they'd gotten someone from Carnegie-Mellon University (CMU) and so didn't need me any more. Ah, the chance for fame can be so fleeting.

But it was just as well because they were apparently operating under this new meaning of “hacker” and I would have been totally thrown by the questions they were asking, which seemed to presuppose that if I was a self-identified hacker, I was the sort who'd be breaking into computers or something. That wasn't what hackers I'd known did, and I didn't either. We had things to build. So they interviewed this guy from CMU. It was someone I knew of; I just don't recall his name now.

This is how we came to believe they don't do those things live: we saw he was logged in to his console during the interview, and we all quickly scrambled during the broadcast (hackers came out at night, so we were all watching from the Lab) to try to send him a message (the equivalent of an instant message), hoping it would come out on his screen while he was on the air. But it didn't. Another chance at fame lost.

Fortunately for ABC News, this person seemed to know the new meaning of “hacker” and gave them a competent interview. But we were all saddened at the tarnishing such an important word had taken. It was part of our daily vocabulary and veritably wrenched from us for this stupid use.

There was an attempt by a number of hackers to get the media to use the term “crackers” instead, but it failed. And the term was essentially lost. From time to time, you'll still see someone of my generation refer to themselves as a “hacker (original meaning)” in some wistful attempt to reclaim the memory of a time when hacking was just doing.

The moniker “netsettler” that I use in some discussion forums (such as Slashdot) harkens to that era. I often feel an empathy, even if the experience is only metaphorically equivalent, with the displacement Native Americans must have felt when the modern world moved in and took their land. The net, and indeed the whole world, was such a different place before it was the Internet. Most people see the arrival of the Internet as the beginning of something, but some of us saw it also as the ending of something.


Author's Note: If you got value from this post, please “Share” it.

This article was originally published November 16, 2008 at Open Salon, where I wrote under my own name, Kent Pitman. A discussion thread is attached there which I did not port forward to here, but you can still read by clicking through to the version on the Internet Archive's “Wayback Machine.”

Tags (from Open Salon): hackito ergo sum, hackity-hack, hacks, hack, cracker, hacker, clever, programming, technical, prank, pacific tech, caltech, mit, history, linguistic evolution, linguistics, language, terminology, jargon