Sunday, May 18, 2025

Unsupervised AI Children

[An image of a construction vehicle operated by a robot. There is a scooper attachment on the front of the vehicle that has scooped up several children. The vehicle is at the edge of a cliff and seems at risk of the robot accidentally or intentionally dropping the children over the edge.]

Recent “AI” hype

Since the introduction of the Large Language Model (LLM), the pace of new tools and technologies has been breathtaking. Those who are not producing such tech are scrambling to figure out how to use it. Literally every day there's something new.

Against this backdrop, Google has recently announced a technology it calls AlphaEvolve, which it summarizes as “a Gemini-powered coding agent for designing advanced algorithms.” According to one of its marketing pages:

«Today, we’re announcing AlphaEvolve, an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas.»

Early Analysis

The effects of such new technologies are hard to predict, but let's start with what's already been written.

In an article at Ars Technica, tech reporter Ryan Whitwam says of the tech:

«When you talk to Gemini, there is always a risk of hallucination, where the AI makes up details due to the non-deterministic nature of the underlying technology. AlphaEvolve uses an interesting approach to increase its accuracy when handling complex algorithmic problems.»

It's interesting to note that I found this commentary by Whitwam from AlphaEvolve's Wikipedia page, which had already re-summarized what he said as this (bold mine to establish a specific focus):

«its architecture allows it to evaluate code programmatically, reducing reliance on human input and mitigating risks such as hallucinations common in standard LLM outputs.»

Whitwam hadn't actually said “mitigating risks,” though he may have meant it. His more precise language, “increase its accuracy,” speaks to a much narrower goal of specific optimization of modeled algorithms, and not to the broader area of risk. These might seem the same, but I don't think they are.

To me—and I'm not a formal expert, just someone who's spent a lifetime thinking about computer tech ethics informally—risk modeling has to include a lot of other things, but most specifically questions of how well the chosen model really captures the real problem to be solved. LLMs give the stagecraft illusion of speaking fluidly about the world itself in natural language terms, and that creates all kinds of risks of simple misunderstanding between people because of the chosen language, as well as failures to capture all parts of the world in the model.

Old ideas dressed up in a new suit

In a post about this tech on LinkedIn, my very thoughtful and rigorously meticulous friend David Reed writes:

«30 years ago, there was a craze in computing about Evolutionary Algorithms. That is, codes that were generated by random modification of the source code structure and tested against an “environment” which was a validation test. It was a heuristic search over source code variations against a “quality” or “performance” measure. Nothing new here at all, IMO, except it is called “AI” now.»
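Reed's description translates almost directly into code. Here is a minimal sketch of that classic loop (a toy fitness function and made-up parameters, not anything from AlphaEvolve): candidates are randomly mutated, scored against a validation “environment,” and the most promising survive to the next generation.

```python
import random

def evolve(target_len=20, pop_size=30, generations=200, mutation_rate=0.05, seed=0):
    """Toy evolutionary search: candidates are bitstrings, the 'environment'
    is an automated evaluator (here, just a count of 1 bits), and each
    generation keeps the most promising candidates and mutates them."""
    rng = random.Random(seed)
    fitness = lambda c: sum(c)  # the validation test / quality measure
    pop = [[rng.randint(0, 1) for _ in range(target_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # selection: keep the top half
        children = [
            [bit ^ (rng.random() < mutation_rate) for bit in parent]  # random mutation
            for parent in survivors
        ]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # best score found; approaches target_len given enough generations
```

Nothing here is “smart”: it is, as Reed says, a heuristic search over variations against a quality measure. The novelty claimed for AlphaEvolve is largely in what does the mutating (an LLM) and what gets evaluated (real programs).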

I admit I haven't looked at the tech in detail, but I trust Reed's assertion that the current iteration of the tech is less grandiose than Google's hype suggests—at least for now.

But that doesn't mean more isn't coming. And by more, I don't necessarily mean smarter. But I do mean that it will be irresistible for technologists to turn this tech upon itself and try exactly what Google sounds like it's wanting to claim here: that unsupervised evolutionary learning will soon mean “AI”—in the ‘person’ of LLMs—can think and evolve on their own.

Personally, I'm confused by why people even see it as a good goal, as I discussed in my essay Sentience Structure. You can read that essay if you want the detail, so I won't belabor that point here. I guess it comes down to some combination of a kind of euphoria that some people have over just doing something new combined with a serious commercial pressure to be the one who invents the next killer app.

I just hope it's not literally that—an app that's a killer.

Bootstrapping analysis by analogy

In areas of new thought, I reason by analogy to situations of similar structure in order to derive some sense of what to expect, by observing what happens in analogy space and then projecting back into the real world to what might happen with the analogously situated artifacts. Coincidentally, it's a technique I learned from a paper (MIT AIM-520) written by Pat Winston, head of the MIT AI lab back when I was studying and working there long ago — when what we called “AI” was something different entirely.

Survey of potential analogy spaces

Capitalism

I see capitalism as an optimization engine. But any optimization engine requires boundary conditions in order to not crank out nonsensical solutions. Optimization engines are not “smart” but they do a thing that can be a useful tool in achieving smart behavior.

Adam Smith, who some call the father of modern capitalism, suggested that if you want morality in capitalism, you must encode it in law, that the engine of capitalism will not find it on its own. He predicted that absent such encoding, capitalists would tend toward being tyrants.

Raising Children

Children are much smarter than some people give them credit for. We sometimes think of kids getting smarter with age or education, but really they gain knowledge and context and, eventually, we hope, empathy. Young children can do brilliant but horrifying things, things that might hurt themselves or others, things we might call sociopathic in adults, for lack of understanding of context and consequence. We try to watch over them as they grow up, helping them grow out of this.

It's why we sometimes try kids differently than adults in court. They may fail to understand the consequences of their actions.

Presuppositions

We in the general public, the existing and future customers of “AI,” are being trained by tools like ChatGPT to think of an “AI” as something civil, because the conversations we have with them are civil. But with this new tech, all bets are off. It's just going to want to find a shorter path to the goal.

LLM technology has no model of the world at all. It is able to parrot things, to summarize things, to recombine and reformat things, and a few other interesting tricks that combine to give some truly dazzling effects. But it does not know things. Still, for this discussion, let's suspend disbelief and assume that there is some degree of modeling going on in this new chapter of “AI,” and that the system will pursue whatever it thinks can improve its score.

Raising “AI” Children

Capitalism is an example of something that vaguely models the world by assigning dollar values to a great many things. But many of us find ourselves routinely frustrated by capitalism because it seems to behave sociopathically. Capitalists want to keep extracting oil when it's clear that doing so is going to drive our species extinct, for example. But it's profitable. In other words, the model says this is a better score because the model is monetary. It doesn't measure safety, happiness (or cruelty), sustainability, or a host of other factors unless a dollar score is put on those. The outcome is brutal.

My 2009 essay Fiduciary Duty vs The Three Laws of Robotics discusses in detail why this behavior by corporations is not accidental. But the essence of it is that businesses do the same thing that sociopaths do: they operate without empathy, focusing single-mindedly on themselves and their profit. In people, we call that sociopathy. Since corporations are sometimes called “legal people,” I make the case in the essay that corporations are also “legal sociopaths.”


Young children growing up tend to be very self-focused, too. They can be cruel to one another in play, and grownups need to watch over them to make sure that appropriate boundaries are placed on them. A sense of ethics and personal responsibility does not come overnight, but a huge amount of energy goes into supervising kids before turning them loose on the world.

And so we come to AIs. There is no reason to suspect that they will perform any differently. They need these boundary conditions, these rules of manners and ethics, a sense of personal stake in the world, a sense of relation to others, a reason not to behave cruelly to people. The plan I'm hearing described, however, falls short of that. And that scares me.

I imagine they think this can come later. But this is part of the dance I have come to refer to as Technology's Ethical Two-Step. It has two parts. In the first part, ethics is seen as premature and gets delayed. In the second part, ethics is seen as too late to add retroactively. Some nations have done better than others at regulating emerging technology. The US is not a good example of that. Ethics is something that's seen as spoiling people's fun. Sadly, though, an absence of ethics can spoil more than that.

Intelligence vs Empathy

More intelligence does not imply more empathy. It doesn't even imply empathy at all.

Empathy is something you're wired for, or that you're taught. But “AI” is not wired for it and not taught it. As Adam Smith warned, we must build it in. We should not expect it to be discovered. We need to require it in law and then productively enforce that law, or we should not give it the benefit of the doubt.

Intelligence without empathy ends up just being oblivious, callous, cruel, sociopathic, evil. We need to build “AI” differently, or we need to be far more nervous and defensive about what we expect “AI” that is a product of self-directed learning to do.

Unsupervised AI Children—what could possibly go wrong?

The “AI” technologies we are making right now are children, and the suggestion we're now seeing is that they be left unsupervised. That doesn't work for kids, but at least we don't give kids control of our critical systems. The urgency here is far greater because of the accelerated way these things are finding themselves in mission-critical situations.

 


Author's Notes:

If you got value from this post, please “Share” it.

You may also enjoy these other essays by me on related topics:

The graphic was created at abacus.ai using RouteLLM (which referred me to GPT-4.1) and rendered by GPT Image. I did post-processing in Gimp to add color and adjust brightness in places.

Friday, May 16, 2025

Must We Pretend?

An article at countercurrents.org said this recently:

«A new study has warned that if global temperatures rise more than 1.5°C, significant crop diversity could be lost in many regions»
Global Warming and Food Security: The Impact on Crop Diversity

Are we not sufficiently at the 1.5°C mark that this dance in reporting is ludicrous?

I'm starting to perceive the weather/climate distinction less as a matter of scientific certainty and more as an excuse to delay action for a long time. Here that distinction seems to be actively working against the cause of human survival by delaying what seems a truly obvious conclusion, and in doing so giving cover to inaction.

We already have a many-year trend that shows things getting pretty steadily worse year over year, with not much backsliding, so it's not as if we realistically have to wait 10 years to see whether this surpassing of 1.5°C is going to magically go away on its own. Indeed, by the time we get that much confirmation, the effects we fear will have been clubbing us over the head for too long.

«“The top ten hottest years on record have happened in the last ten years, including 2024,” António Guterres said in his New Year message, stressing that humanity has “no time to lose.”»
2024, Hottest Year on Record, Marks ‘Decade of Deadly Heat’

I keep seeing reports (several quoted by me here below) that we averaged above that in 2024, so I find this predication on a pipe dream highly misleading.

[A haiku, in the ornate Papyrus font, that reads:

«sure, 1.5's bad
but we only just got there
wake me in ten years»

Below the haiku, in a smaller, more gray font, is added: © 2025 Kent M Pitman]

Even wordings suggesting that crossing some discrete boundary will trigger an effect, while not crossing it will not, are misleading. It's not as if 1.49°C will leave us with no loss of diversity while 1.51°C will hit us with all these effects.

What needs to be said more plainly is this:

Significant crop diversity is being ever more lost in real time now, and this loss is a result of global average temperatures that are dangerous and getting more so. That they are a specific value on an instantaneous or rolling-average basis gives credibility and texture to this qualitative claim, but no comfort should be drawn from almost-ness, nor from theoretical claims that action could yet pull us back from the precipice, when there is no similarly substantiated reason to believe we are politically poised to take that action.

Science reporting does this kind of thing a lot. Someone will get funding to test whether humans need air to breathe, but some accident of how the experiments are set up will mean that only pregnant women under 30 were available for testing. The report will be very specific about that, and news stories will end up saying “new report proves pregnant women under 30 need air to breathe,” which doesn't really tell the public what the study meant to report. Climate reporting is full of similarly overly specific claims that allow the public to dismiss the significance of what's really going on. People writing scientific reports need to be conscious that the reporting will be done in that way, and that public inaction will be a direct result of such narrow reporting.

In the three reports that I quote below, the Berkeley report at least takes the time to say “recent warming trends and the lack of adequate mitigation measures make it clear that the 1.5 °C goal will not be met.” We need more plain wordings like this, and even this needs to be more prominently placed.

There is a conspiracy, intentional or not, between the writers of reports and the writers of articles. The article writer wants to quote the report, but the report wants to say something that has such technical accuracy that it will be misleading when quoted by someone writing articles. Some may say it's not an active conspiracy, just a negative synergy, but the effect is the same. Each party acts as if it is being conservative and careful, but the foreseeable combination of the two parts is anything but conservative or careful.

References
(bold added here for emphasis)

«The global annual average for 2024 in our dataset is estimated as 1.62 ± 0.06 °C (2.91 ± 0.11 °F) above the average during the period 1850 to 1900, which is traditionally used as a reference for the pre-industrial period. […] A goal of keeping global warming to no more than 1.5 °C (2.7 °F) above pre-industrial has been an intense focus of international attention. This goal is defined based on multi-decadal averages, and so a single year above 1.5 °C (2.7 °F) does not directly constitute a failure. However, recent warming trends and the lack of adequate mitigation measures make it clear that the 1.5 °C goal will not be met. The long-term average of global temperature is likely to effectively cross the 1.5 °C (2.7 °F) threshold in the next 5-10 years. While the 1.5 °C goal will not be met, urgent action is still needed to limit man-made climate change.»
Global Temperature Report for 2024 (Berkeley Earth)

«The global average surface temperature was 1.55 °C (with a margin of uncertainty of ± 0.13 °C) above the 1850-1900 average, according to WMO’s consolidated analysis of the six datasets. This means that we have likely just experienced the first calendar year with a global mean temperature of more than 1.5°C above the 1850-1900 average.»
WMO confirms 2024 as warmest year on record at about 1.55°C above pre-industrial level

«NASA scientists further estimate Earth in 2024 was about 2.65 degrees Fahrenheit (1.47 degrees Celsius) warmer than the mid-19th century average (1850-1900). For more than half of 2024, average temperatures were more than 1.5 degrees Celsius above the baseline, and the annual average, with mathematical uncertainties, may have exceeded the level for the first time.»
Temperatures Rising: NASA Confirms 2024 Warmest Year on Record
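As a quick sanity check on the three headline figures quoted above (a simple unweighted mean, which is not how WMO actually consolidates its six datasets):

```python
# The three headline 2024 anomaly estimates quoted above (°C above 1850-1900).
estimates = {"Berkeley Earth": 1.62, "WMO": 1.55, "NASA": 1.47}

mean_anomaly = sum(estimates.values()) / len(estimates)
print(round(mean_anomaly, 2))  # 1.55 — right at the threshold the reports hedge about
```

Even the most conservative estimate sits within a rounding error of 1.5°C, which is the point: the hedging is about definitions of averaging periods, not about whether we are effectively there.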

Author's Notes:

If you got value from this post, please “Share” it.

This grew out of an essay I posted at Mastodon, and a haiku (senryu) that I later wrote as a way to distill out some key points.

Thursday, May 8, 2025

Linked World

[A simple image of the western hemisphere with continents in green and the ocean in blue.]

Inextricably Intertwined

Traditionally, business and politics have been separable on LinkedIn, but their overlap since November is far too substantive and immediate for that fiction to be entertained further.

[A white rectangle with blue lettering that spells 'Linked' and a globe after it, as if to say 'Linked world'. The globe shows the western hemisphere with continents in green and the oceans in blue. There is some similarity to a LinkedIn logo in general structure, though the relationship is intentionally approximate.]

And yet there are people on LinkedIn who still loudly complain that they come there to discuss business and are offended to see political discussion, as if it were mere distraction.

I don't know whether such remarks are born of obliviousness or privilege, but in my view these pleas lack grounding in practical reality. If there were a way to speak of business without reference to politics, I would do it out of mere simplicity. Why involve irrelevancies? But the two are just far too intertwined. US politics is no longer some minor detail, distinct from business. It is central to US business right now.

Some will see this shift as positive. Others will see it as negative. I'm one of those seeing consistent negatives. But whatever your leaning, it seems inescapable that politics is suddenly visibly intertwined with markets and products in new ways. Not every discussion must factor it in, but when it happens, it's not mere rudeness that has broken the traditional wall of separation. It's just no longer practical to maintain the polite fiction that there's no overlap.

Practical Examples

I find it impossible to see how a seismic shift like the US is undergoing could fail to affect funding sources and trends, individual business success, entire markets, and indeed whether the US is a good place for people to invest in, go to school in, or vacation in.

Nor are the sweeping effects of DOGE, Musk's Department of Government Efficiency, an issue of pure politics. Its actions have clear business impact. As Musk wields this mysterious and unaccountable force to slash through the heart of government agencies with reckless abandon, there are many clear effects that will profoundly affect business.

  • Scientists at the US Centers for Disease Control (CDC) and elsewhere have warned about the possibility of a bird flu or other pandemic. The CDC tracks and seeks ways to prevent pandemics, but that work is now under threat by an anti-science administration. As the Covid experience tells us, there is a business impact to pandemics if we allow them to just happen. A report in the National Institutes of Health (NIH)'s National Library of Medicine places that cost at about $16 trillion.

  • The Federal Aviation Administration (FAA) is important to keeping planes in the air and having them not crash into one another. Business people do a lot of flying, so their needless deaths in the aftermath of FAA layoffs can presumably affect business. And it won't help people if the public develops a fear of flying.

  • The Food and Drug Administration (FDA) is in charge of making sure the food we eat does not poison us and that the drugs we take carry at most a bounded degree of risk. It's the kind of thing you don't think of as business-related until we enter a world where employees might go home any old day and just die, because we are edging toward a society where you can't take food and drug safety for granted as a stable quantity any more.

  • The National Oceanic and Atmospheric Administration (NOAA) is responsible for tracking storms so that damage, injury, or death can be minimized. And then the Federal Emergency Management Agency (FEMA) helps the recovery afterward. It is hard to see how a major storm could affect people, cities, or geographic regions without affecting the employees, customers, and products of businesses. Do I really have to say that? If people think there is a separation between business and politics, I guess I do.

    And then of course NOAA does work to study Climate Change, too. Not only has such study suggested that Climate Change is an existential threat to civilized society, perhaps to all humankind, but it turns out that if human society falls or humans go extinct, that will affect business, too. And maybe soon enough that people still alive now, even if they have no care about future humans, still need to care because it could affect them or those they love.

It used to be that business did not have to worry about such things as much, exactly because government used to see it as its job to invisibly take care of these many things. But this change in politics is not just a change in spending; it is a shift of responsibility from the government to businesses and individuals. They'll have to look out for themselves now. That is a big deal that will affect businesses—their products, employees, and customers in profound ways. All the more so because the present administration changes its mind daily in ways that seem to have no plan, so uncertainty abounds. Business hates uncertainty.

Unemployment

Additionally, the many layoffs in government mean additional unemployment, which itself has business effect. Perhaps some will rejoice at a plentiful supply of potential workers or the fact that they may accept lower wages. But, meanwhile, those unemployed were also the customer base of other businesses who will be less happy. Those people aren't in a position to buy as many things—not just luxuries but essentials like food and rent and healthcare. Perhaps others in their families will pitch in to help them survive, but then those people won't be in a position to buy as many things either.

Mass layoffs do not happen in a vacuum. Those political choices will show up on the bottom lines of businesses. Some businesses may not survive that loss of business, creating a cascade effect.

Racism and Xenophobia

Racism and xenophobia are on the rise. Recent ICE actions seem designed to send the message that we purposefully treat some humans like vermin. “Stay away,” it screams to a large swath of the global population, some of whom we might like to sell to or have invest in us.

It began by going after the undocumented, surely because they are easy targets. That circle is expanding, and it seems unlikely to stop any time soon. The goal seems to be to end any sense that anyone has rights at all. That creates a lot of uncertainty about what is allowed in the way of both speech and action. Such uncertainty makes it hard to plan and manage anything from the selection of an appropriate employee base to how products will be positioned and marketed.

Also, it's an ugly truth that the US relies on already-terrified undocumented employees to accept very low wages, sometimes perhaps skirting wage regulation. Many US businesses will lose access to such cheap labor. The ethics of having relied on this population in this way are certainly tangled and I don't want to defend this practice. But for purposes of this discussion I simply observe that this change will have business effects that may affect both prices and product availability.

It is as if the administration's answer to immigration concerns is to make the US seem utterly hostile to anyone who is not a native-born, white, Christian male. These trends already affect who feels safe coming to the US to trade, to study, to do research, and to found companies. It's going to be hard to unring that bell.

Rule of Law

In addition, this process seems to be having the side-effect of diminishing rule of law generally. By asserting that due process is not required, when plainly it is, a test of wills is set up between the executive and the rest of the government as to whether the President can, by mere force of will, ignore the Constitution entirely.

The clear intent is to establish us as a bully power, to say that worrying about whether foreigners like the people of the US was weakness, and that we must make the world fear us. That shift cannot help but affect who will do business with us, and how.

We cannot expect our global peers, already horrified by the recent shift in our choice of which foreign entities to fund or ally ourselves with, to shrug these matters off in business with a casual “oh, that's just politics.”

Education

Also, higher education is under assault. There is a complex ecology here, because people from around the world have revered our universities as places they could send people to acquire a world-class education. But with research funds being cut, that may no longer be so.

That the US Government seems intent on snatching foreign students off the street does not make this picture any better. It becomes a reason for international investment dollars to go to other countries where it is safe to walk the streets.

International Investment

The education system is not cleanly separated from the business community. There is a complex ecology in which many businesses locate themselves near universities to have access to the best human talent and research the world has to offer. As US educational institutions are undercut, and the administration's anti-science agenda is pursued, foreign businesses that take education and science more seriously may look elsewhere for leadership.

These capricious changes—the sense that nothing is promised or certain—may affect the reputation of the United States and trust in the US dollar. The present administration wants more control of the Federal Reserve, which has traditionally operated independently. If that happens, it could worsen faith in the US dollar.

The US has also weakened enforcement of anti-bribery laws for dealing with foreign governments. Perhaps some will regard this relaxation of ethics good for business, but whether you do or not, it is most certainly a major change.

And the US is demonstrating on-its-face incompetence at every level of government because everyone with a brain is deferring to someone who plainly lacks either understanding or caring about the damage he is doing. Foreign businesses and governments used to look to the US as a place that had something to teach, but as this incompetence continues unchecked, it cannot help but hurt our reputation internationally.

Philosophy of Government

There is a definite push to “run government like a business.” I think that's a terrible plan, as my recent essay Government is not a Business explains.

But whether you think running government that way is good or bad, it marks a profound shift. More privatization and, with that, probably more corruption. These are things that will profoundly affect not just the US political landscape, but also its business landscape.

Not Separable

Hopefully these examples make it clear that politics and business are no longer separable. It is simply impossible to discuss business in a way that neglects politics. All business in the US is now conducted in the shadow of a certain GOP Elephant that manages to insinuate itself into every room.

 


Author's Notes:

If you got value from this post, please “Share” it.

Some parts of this post originated as a comment by me on LinkedIn. Other parts were written separately with the intent of being yet another comment, but I finally went back and unified the two and pulled this out to a separate post where I was not space-limited.

The vague approximation to the LinkedIn logo was created by me from scratch in Gimp by looking at the LinkedIn logo and doing something suggestive of the same look. A globe image was obtained from publicdomainpictures.net under cc0 license, and post-processed by me in Gimp to work in this space. I just made guesses about sizes, proportions, fonts, and colors. At no time were any of the actual logos used for any part of the creation.

Sunday, May 4, 2025

AI Users Bill of Rights

[A person sitting comfortably in an easy chair, protected by a force field that is holding numerous helpful robots from delivering food and other services.]

We are surrounded by too much helpful AI trying to insinuate itself into our lives. I would like the option of leaving “AI” tech turned off and invisible, though that's getting harder and harder.

I've drafted version 1 of a bill of rights for humans who want the option to stay in control. Text in green is not part of the proposal. It is instead rationale or other metadata.

AI Users Bill of Rights
DRAFT, Version 1

  1. All use of “AI” features must be opt-in. No operating system or application may be delivered with “AI” enabled by default. Users must be allowed to select the option if they want it, but not penalized if they do not.

    Rationale:

    1. Part of human dignity is being allowed freedom of choice. An opt-out system is paternalistic.
    2. Some “AI” systems are not privacy friendly. If such systems are on by default until disabled, the privacy damage may be done by the time of opt-out.
    3. If the system is on by default, it's possible to claim that everyone has at least tried it and hence to over-hype the size of a user base, even to the point of fraudulently claiming users that are not real users.
  2. Enabling an “AI” requires a confirmation step. The options must be a simple “yes” or “no”.

    Rationale:

    1. It's easy to hit a button by accident that one does not understand, or to typo a command sequence. Asking explicitly means no user ends up in this new mode without realizing what has happened.
    2. It follows that the “no” may not be something like “not now” or any other variation that might seem to invite later system-initiated inquiry. Answering “no” should put the system or application back into the state of awaiting a user-initiated request.
  3. Giving permission to use an AI is not the same as giving permission to share the conversation or use it as training data. Each of these requires separate, affirmative, opt-in permissions.

    Rationale:

    1. If the metaphor is one of a private conversation among friends, one is entitled to exactly that—privacy and behavior on the part of the other party that is not exploitative.
    2. Not all “AI” agents do in fact violate privacy. Making these approvals explicit provides a user-facing reminder, for the more extractive ones, that more use may be made of data than one wants.
  4. All buttons or command-sequences to enable “AI” must themselves be possible to disable or remove.

    Rationale:

    1. It may be possible for someone to enable “AI” without realizing it.
    2. It is too easy to enable “AI” as a typo. Providers of “AI” might even be tempted to place controls in places that encourage such typos.
  5. No application or system may put “AI” on the path to basic functionality. “AI” is intended to be a layer above basic functionality, allowing easier access to it in order to automate or speed up certain functions that might be slow or tedious to do manually.

    Rationale:

    1. Building this in to the basic functionality makes it hard to remove.
    2. Integrating it with basic functionality makes the basic functionality hard to test.
    3. If an “AI” is running erratically, it should be possible to isolate it for the purposes of debugging or testing.
    4. When analyzing situations forensically, this allows crisper attribution of blame.
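To make rules 1–3 concrete, here is a minimal sketch of what an application honoring them might look like. All names here are hypothetical illustrations, not any real product's API:

```python
from dataclasses import dataclass

@dataclass
class AIConsent:
    # Rule 1: everything defaults to off; the user must opt in.
    ai_enabled: bool = False
    share_conversations: bool = False   # Rule 3: a separate permission
    use_as_training_data: bool = False  # Rule 3: a separate permission

def confirm(prompt: str, answer: str) -> bool:
    """Rule 2: a plain yes/no confirmation. Anything but an explicit
    'yes' counts as 'no' and leaves the system awaiting a user-initiated
    request (no 'not now' variants that invite later nagging)."""
    # In a real UI, `prompt` would be displayed to the user here.
    return answer.strip().lower() == "yes"

def enable_ai(consent: AIConsent, answer: str) -> AIConsent:
    if confirm("Enable AI features?", answer):
        consent.ai_enabled = True
    return consent

consent = AIConsent()
consent = enable_ai(consent, "no thanks")   # not an explicit yes
assert consent.ai_enabled is False          # still off: no accidental enablement
consent = enable_ai(consent, "yes")
assert consent.ai_enabled is True
assert consent.use_as_training_data is False  # enabling AI granted nothing else
```

The design point is that each permission is an independent flag that only an explicit affirmative answer can flip, so no single action cascades into broader consent.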

With this, I hope those of us who choose to live in the ordinary human way, holding “AI” at bay, can do so comfortably.

 


Author's Notes:

If you got value from this post, please “Share” it.

The graphic was created at Abacus.ai using Claude Sonnet 3.7 and Flux 1.1 Ultra Pro, then cropped and scaled using Gimp.