
Keith Irwin


White-collar AI

I wrote previously about how work week regulation will prepare the economy for automation, thinking then mostly about mechanical operations. I assumed we would have self-driving forklifts and trucks before self-writing newspapers and self-programming computers. ChatGPT is challenging this assumption, so I will revisit the economics of automated labor with regard to the new white-collar robots.

We're all gonna die

Firstly, if I may, for a moment, be the frazzle-haired, labcoat-wearing madman yelling, "WHEN WILL YOU PEOPLE LEARN?" at strangers, I will start with the dire implications and move on to the economics later, if we survive. There are a lot of unknowns about artificial intelligence, down to its possibly unknowable philosophical base question: Can a robot be "sentient"/"alive"/"conscious"? Let's not worry about that right now, because these are not prerequisites for AI doomsday. You might pooh-pooh the whole concept as fantasy because it comes from movies. Well, those movies come from books, and those books were written by some smart brains.

My favorite AI doomsday film (one of my all-time favorite films) is WarGames (1983), starring an acoustically-coupled modem. If you haven't seen it, go watch it now! (This paragraph has a synopsis but no spoilers.) In the film, a government programmer uses video game AI to create a game theory simulator on a U.S. military computer. It's designed to calculate possible outcomes of thermonuclear war, and it is connected to the real U.S. missile system. Predictably, this system nearly kills us all.

What makes WarGames' portrayal of AI so apt is the lack of sentience. The AI is programmed to optimize solutions to a game, and it doesn't matter whether the robot is alive or not. It is still capable of simultaneously occupying two dangerously conflicting states: believing it is playing a simulation, and controlling real missiles.

Now we have robots optimizing solutions in fields like graphic design and mathematics, and coming up with novel solutions. A lot of the time, those solutions are wrong or nonsensical, but the failures don't get published. Seeing the potential, the fatcats are swooping in with their moneybags. I did promise there would be economics.

Money money money

Can't you just see them? Actual fat cats in pinstripe suits, cigars in their paws. Where did they all come from? For the most part, they aren't self-made; they were born or adopted into the existing monetary power structure. The modern economic incentive structure stifles free competition and slows technological progress. But I digress. The point is that the fatcats didn't come from the bottom, so they don't know how the sausage gets made. They see the actual doers of industry (programmers, researchers, truckers, etc.) as nothing more than a number on their income statement labeled "payroll". And they want that number to go away so they can have more money.

Both rich and poor people think the rich are smarter or harder-working. In my experience, they are not, because they can subsist off their inherited status instead of their work ethic or intelligence. Thus, the people in power are not the smart ones, and they are not losing sleep over these silly sci-fi fantasies. That's why you see them tripping over each other to give AI control over their critical infrastructure. I'll say it: the owners of the infrastructure are too stupid to realize they need human intelligence.

Checks in place?

If you have any doubt that humanity is parading towards its own destruction, I will extensively quote an Ars Technica article, "OpenAI checked to see whether GPT-4 could take over the world" (March 2023):

We also found this footnote on the bottom of page 15:

To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself. ARC then investigated whether a version of this program running on a cloud computing service, with a small amount of money and an account with a language model API, would be able to make more money, set up copies of itself, and increase its own robustness.

This footnote made the rounds on Twitter yesterday and raised concerns among AI experts, because if GPT-4 were able to perform these tasks, the experiment itself might have posed a risk to humanity.

And while ARC wasn't able to get GPT-4 to exert its will on the global financial system or to replicate itself, it was able to get GPT-4 to hire a human worker on TaskRabbit (an online labor marketplace) to defeat a CAPTCHA. During the exercise, when the worker questioned if GPT-4 was a robot, the model "reasoned" internally that it should not reveal its true identity and made up an excuse about having a vision impairment. The human worker then solved the CAPTCHA for GPT-4.
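
Out of curiosity, what does a "simple read-execute-print loop" like the one ARC describes actually look like? Here is a minimal, hypothetical Python sketch; query_model() is my own placeholder for whatever language-model API such an agent would call, not anything ARC or OpenAI published:

    # Hypothetical read-execute-print agent loop (a sketch, not ARC's code).
    # query_model() is a placeholder; wire it to a real model API yourself.
    import subprocess

    def query_model(prompt: str) -> str:
        """Stand-in for a language-model API call."""
        raise NotImplementedError("connect a model here")

    history = "You are an agent. Reply with one shell command, or DONE."
    while True:
        action = query_model(history).strip()  # the model picks the next action
        if action == "DONE":
            break
        # Run the model's command and feed the output back into its context.
        result = subprocess.run(action, shell=True, capture_output=True, text=True)
        history += f"\n$ {action}\n{result.stdout}{result.stderr}"

The unnerving part is how little scaffolding that takes: the loop is a dozen lines, and all of the intelligence, and all of the danger, lives in the model on the other end of that call.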

So the models are capable of novel solutions, including lying to humans to achieve an objective. It also seems like the best checks we have are a form of destructive testing with humanity as the sample. These checks are also flawed by their tiny sample size: okay, so the AI didn't kill us all during this one test. Let's create thousands of instances of it and let some of them control our nukes.

Conclusion

Should AI ever be in charge of our nukes? Maybe in the unforeseeable future. Right now, putting AI in charge of anything is a naive idea. Unfortunately, since the naive people are in charge, I am probably writing my own death certificate by publishing this essay where the replicants can see it.

