Sunday, May 4, 2025

AI Users Bill of Rights

[A person sitting comfortably in an easy chair, protected by a force field that is keeping numerous helpful robots from delivering food and other services.]

We are surrounded by too much helpful AI trying to insinuate itself into our lives. I would like the option of leaving “AI” tech turned off and invisible, though that's getting harder and harder.

I've drafted version 1 of a bill of rights for humans who want the option to stay in control. Text in green is not part of the proposal. It is instead rationale or other metadata.

AI Users Bill of Rights
DRAFT, Version 1

  1. All use of “AI” features must be opt-in. No operating system or application may be delivered with “AI” enabled by default. Users must be allowed to select the option if they want it, but not penalized if they do not. (A settings sketch illustrating this right and the two that follow appears after the list.)

    Rationale:

    1. Part of human dignity is being allowed freedom of choice. An opt-out system is paternalistic.
    2. Some “AI” systems are not privacy friendly. If such systems are on by default until disabled, the privacy damage may be done by the time of opt-out.
    3. If the system is on by default, it's possible to claim that everyone has at least tried it, and hence to over-hype the size of a user base, even to the point of fraudulently counting people who are not real users.
  2. Enabling an “AI” requires a confirmation step. The options must be a simple “yes” or “no”.

    Rationale:

    1. It's easy to accidentally hit a button one does not understand, or to mistype a command sequence. Asking explicitly means no user ends up in this new mode without realizing what has happened.
    2. It follows that the “no” may not be something like “not now” or any other variation that might seem to invite later system-initiated inquiry. Answering “no” should put the system or application back into the state of awaiting a user-initiated request.
  3. Giving permission to use an “AI” is not the same as giving permission to share the conversation or to use it as training data. Each of these requires a separate, affirmative, opt-in permission.

    Rationale:

    1. If the metaphor is one of a private conversation among friends, one is entitled to exactly that—privacy and behavior on the part of the other party that is not exploitative.
    2. Not all “AI” agents do in fact violate privacy. Making these approvals explicit gives a user-facing reminder, for the more extractive ones, that more use will be made of the data than one may want.
  4. All buttons or command-sequences to enable “AI” must themselves be possible to disable or remove.

    Rationale:

    1. It may be possible for someone to enable “AI” without realizing it.
    2. It is too easy to enable “AI” by a typo. Providers of “AI” might even be tempted to place controls in places that encourage such typos.
  5. No application or system may put “AI” on the path to basic functionality. “AI” is intended to be a layer above basic functionality, one that offers easier access to it by automating or speeding up functions that would be slow or tedious to do manually. (A layering sketch appears after the list.)

    Rationale:

    1. Building this into the basic functionality makes it hard to remove.
    2. Integrating it with basic functionality makes the basic functionality hard to test.
    3. If an “AI” is running erratically, it should be possible to isolate it for the purposes of debugging or testing.
    4. When analyzing situations forensically, this allows crisper attribution of blame.
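
To make the first three rights concrete, here is a minimal sketch, in Python, of what a conforming settings model might look like. Every name in it (AISettings, confirm_enable, and so on) is hypothetical, invented for illustration; it is not any real product's API.

    from dataclasses import dataclass

    @dataclass
    class AISettings:
        """Hypothetical per-user settings for a conforming application.

        Every flag defaults to False: nothing "AI" runs until the user
        affirmatively turns it on (Right 1), and enabling the assistant
        does not imply consent to share conversations or to use them as
        training data (Right 3); each of those is its own opt-in.
        """
        ai_enabled: bool = False            # Right 1: off until opted in
        share_conversations: bool = False   # Right 3: separate opt-in
        allow_training_use: bool = False    # Right 3: separate opt-in
        show_ai_controls: bool = True       # Right 4: the control itself can be hidden

    def confirm_enable(ask) -> bool:
        """Right 2: a strict yes/no confirmation step.

        `ask` is any function that poses a question and returns the raw
        answer. Only a literal "yes" enables the feature; anything else,
        including "not now", counts as "no", and a "no" leaves the system
        idle rather than scheduling a later re-prompt.
        """
        return ask('Enable the "AI" assistant? (yes/no): ').strip().lower() == "yes"

    settings = AISettings()
    if settings.show_ai_controls and confirm_enable(input):
        settings.ai_enabled = True  # sharing and training flags remain False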
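
Right 5 is an architectural constraint: the core of an application must work, and be testable, with the “AI” layer entirely absent. A rough sketch of that separation, again using invented names:

    # Core functionality: complete and testable on its own, with no
    # knowledge that an "AI" layer even exists (Right 5).
    def rename_files(mapping: dict[str, str]) -> list[str]:
        """A stand-in basic operation: report the renames it would perform."""
        return [f"{old} -> {new}" for old, new in mapping.items()]

    # Optional layer above the core: it may prepare input for the core
    # operation, but the core never calls it. If it runs erratically it
    # can be removed or stubbed out for debugging without touching
    # rename_files, and a bad rename is attributable to exactly one layer.
    def ai_suggest_renames(filenames: list[str]) -> dict[str, str]:
        # Stand-in for a model call.
        return {name: name.lower().replace(" ", "_") for name in filenames}

    # The manual path works with the "AI" layer absent entirely:
    print(rename_files({"My File.TXT": "my_file.txt"}))

    # The opt-in path: the user invokes the layer, then feeds the core:
    print(rename_files(ai_suggest_renames(["My File.TXT"])))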

With this, I hope those of us who choose to live in the ordinary human way, holding “AI” at bay, can do so comfortably.

Author's Notes:

If you got value from this post, please “Share” it.

The graphic was created at Abacus.ai using Claude Sonnet 3.7 and Flux 1.1 Ultra Pro, then cropped and scaled using Gimp.
