Quote of the Day
So we’re looking at a world where we have levels of unemployment we’ve never seen before. Not talking about 10% unemployment, which is scary, but 99%.
Roman Yampolskiy
University of Louisville Computer Science Professor
September 29, 2025
AI Could Cause 99% of All Workers to Be Unemployed in the Next Five Years, Says Computer Science Professor
I have to wonder, what does this do to the price of goods and services? I can’t really wrap my mind around what happens when essentially everything is automated. Does the price drop to zero? If the materials for the machines are mined, refined, and assembled by automated systems, and everything from energy production to endpoint delivery is automated as well, then how does this work out?
Over the centuries, labor-saving devices enabled the creation of greater wealth for the general population. But what if all labor is “saved”? Do individuals have zero money with which to purchase near-infinite goods and services?
It may be a moot question. I just finished listening to If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. By the time the unemployment rate hits 50% we might have sealed our doom. The case presented, as I recall it:
- We really don’t understand how AI works. It is trained (sometimes the authors referred to it as “grown”) rather than engineered. It is far closer to an art form than an engineering science*.
- This training/growing process results in hidden and unknowable artifacts in the neural nets. These hidden components can and will manifest themselves in ways no one can predict.
- Many of these hidden artifacts will not be aligned with the best interests of biological life.
- As AI searches for solutions to problems and subproblems, some of the artifacts will result in solutions that cause great harm. For example, energy production is bound, in part, by the ability to get rid of waste heat, which ultimately must be radiated out into space. The rate of heat transfer increases as the temperature rises (see the equation after this list). Boiling away the oceans would provide a temporary heat sink, but ultimately raising the surface temperature of the planet beyond what biological life as we know it can survive will be the obvious solution.
- The response time of AI to exploit a flaw in human efforts to constrain it will be far faster than humans can react.
- AI can defeat humans at chess and Go because it can examine far more future decision branches than humans can. It will solve “containment puzzles” far better than humans can create containment mechanisms.
- It only has to succeed once. We will not get a second chance.
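A note on the waste-heat item above: the relevant physics is the Stefan–Boltzmann law (standard physics, not anything specific to the book),

\[
P = \varepsilon \sigma A T^{4}, \qquad \sigma \approx 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}},
\]

so radiated power grows with the fourth power of temperature. A system maximizing heat rejection gains far more by raising the surface temperature T than by adding radiating area A, which is why, from the machine’s perspective, a much hotter planet is the “obvious” solution.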
I would like to hear from people who have read this book to comment here or send me an email with their thoughts.
* This is something I noticed in a recent class I took on machine learning. This is simplifying things some, but you just try different things and see what works best.
The current hype over AI is so overblown it’s starting to look like tulip bulb futures.
There are so many things that academics forget need an actual PHYSICAL presence to perform. This is one of the reasons I advocate that technically-oriented people go to trade school rather than pursue an engineering degree. It’s easy (although with ginormous drawbacks I won’t go into) for management to off-shore engineering (or hand that function to AI), but it’s really tough to have a computer fix a broken toilet, repair a weld, or pull a GPS-guided tractor out of a ditch in a field it didn’t know was there.
This bubble will pop all on its own as people start to realize a few things. Like
– The machines are prone to hallucinations, and any answer you are given to literally ANY question must be verified before it is used. You’ve just doubled the amount of work.
– The machines just flat out lie to tell you what you want to hear. Again, everything must be verified before use.
– The machines are only as smart as the most stupid information that is input to them. They will fall to the lowest common denominator.
Et cetera ad nauseam.
I think about self driving cars and trucks putting on chains to get over mountain passes in the winter.
That is, until AI makes an army of Luigis like the one in the movie Cars.
It seems Elon is working hard on the physical labor problem. Also, I had no idea automatic tire chains were a thing:
That is very cool!
“The alignment problem” is solvable, the AI companies would just have to be willing to train their models on it and accept the additional resource cost, since it would take a non-trivial amount of compute time in every request (the model would have to do extensive prediction of potential bad scenarios before making an assertion). Something along the lines of Asimov’s three laws, but tailored not just to action but to ideation.
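A minimal sketch of what that pre-assertion check might look like as a control loop. Everything here is hypothetical: the scenario scorer, the threshold, and the harm scale are invented for illustration, not any vendor’s API, and the toy heuristic stands in for what would really be an expensive extra model call per request:

```python
# Hypothetical pre-assertion alignment check: before emitting an answer,
# enumerate plausible bad outcomes and refuse if any scores too high.
from dataclasses import dataclass

@dataclass
class Scenario:
    description: str
    harm_score: float  # 0.0 (benign) to 1.0 (catastrophic), model-estimated

HARM_THRESHOLD = 0.3  # assumed policy knob; a real system would tune this

def predict_bad_scenarios(draft_answer: str) -> list[Scenario]:
    """Stand-in for a second inference pass that imagines downstream harms.
    In a real system this would itself be a model call (the non-trivial
    compute cost mentioned above)."""
    risky_words = ["bypass", "disable", "untested"]  # toy heuristic only
    return [Scenario(f"answer encourages '{w}'", 0.6)
            for w in risky_words if w in draft_answer.lower()]

def answer_with_alignment_check(draft_answer: str) -> str:
    scenarios = predict_bad_scenarios(draft_answer)
    worst = max((s.harm_score for s in scenarios), default=0.0)
    if worst > HARM_THRESHOLD:
        return "Refusing to assert: predicted harm exceeds policy threshold."
    return draft_answer

print(answer_with_alignment_check("Disable the safety interlock first."))
print(answer_with_alignment_check("Schedule maintenance during off-hours."))
```

Note that the check runs on the ideation, before anything is asserted or acted on, which is the tailored-to-ideation idea; the cost is that every single request pays for the extra prediction pass.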
But the larger issue in the short term blocking real AI effectiveness is wiring it up to existing systems with the authentication and protocols necessary to actually accomplish anything. We started on this at Expedia, and immediately ran into a roadblock: although we could teach the system how to understand user intent (this was before LLMs, mind you), the second the system tried to actually do what the user asked (e.g. “change my flight to 6pm”) it had to be wired into other systems that weren’t designed for that. In the LLM world you might say “oh, just have it read all the API documentation and learn what it needs to know to build an interface layer” and I would say yeah, right, because those APIs are soooooo well documented. That’s sarcasm…documentation tends to suck, be out of date, or be non-existent, and nowadays the old farts who wrote the code have all been laid off, so….
I’m sure folks are working on refactoring their code and documentation to handle control by AI, but that’s going to take much longer than anybody is willing to admit. Trying to wire up a modern LLM to some crusty old Oracle code that the latest round of consultants from Tata don’t understand because they didn’t write it and the documentation sucks? Good luck with that.
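To make the integration problem concrete, here is a minimal sketch of the kind of adapter layer that intent-to-action wiring requires. Every endpoint, field name, and auth header below is invented for illustration; in practice each one is exactly the undocumented detail that has to be reverse-engineered from crusty code or tribal knowledge:

```python
# Hypothetical adapter between a parsed user intent and a legacy booking API.
# The endpoint, auth scheme, and payload fields are all assumptions.
import json
import urllib.request

LEGACY_BASE = "https://legacy.example.com/air/v1"  # assumed endpoint

def change_flight(session_token: str, record_locator: str, new_time: str) -> dict:
    """Translate the intent "change my flight to 6pm" into the legacy call.
    The hard part isn't this function; it's discovering that the old system
    wants a PNR, a 24-hour time string, and a bespoke session header."""
    payload = json.dumps({"pnr": record_locator, "depart_time": new_time}).encode()
    req = urllib.request.Request(
        f"{LEGACY_BASE}/itinerary/change",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "X-Legacy-Session": session_token,  # assumed auth scheme
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Shape of a call (the host is fake, so this is illustrative only):
# change_flight(session_token="abc123", record_locator="QX7RTM", new_time="18:00")
```

Multiply that by every operation a user might ask for, across every backend, and the “just read the docs” optimism starts to look pretty thin.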
Just give the source code to Claude and tell it to write the documents.
Thanks for the alignment insight.
Have you read the book? I would appreciate your insight on their analysis.
The source code is spaghetti and references dozens if not hundreds of external APIs and endpoints that Claude doesn’t have access to, so giving it access to your source depot only gets you part of the way there. And although theoretically possible, it’s highly unlikely you’ll physically be able to give Claude access to all the codebases it needs to write correct documentation, and even if you’re physically able to make those connections, the likelihood of getting approval for that from human organizations with conflicting goals reduces your chances even further.
I haven’t read Yudkowsky’s book yet (it’s on the list), but I’ve been following him for several years. He’s easily the most cynical of all AI critics, and gets dismissed a lot because he’s so hyperbolic and comes off as overly sure of himself.
One issue with his position: I don’t find it likely that we couldn’t turn off the box when it gets uppity. He claims a superintelligence will socially engineer its way to full control over its own continued existence, but that seems unlikely to me. Buying that argument requires believing that the AI could completely hide its influence and make humans act in the ways it wants without anyone suspecting a puppeteer, but as should be obvious from every Internet comments section on the planet, humans *love* looking for a puppeteer, even if one doesn’t exist. Seems unlikely it could get away with that.
More importantly, I don’t think the AI will have any “physical dexterity.” At Best Buy, when we were building a robot for customer service, I’d give presentations and talk about the extent to which the robot would start as a source of information (it’s just ears and a mouth to start), and only later be given “hands” and be able to actually do things based on user requests. As at Expedia, it was very clear that building a robot that can tell you lots of interesting things is very doable. Building one that can act on that information is extremely resource intensive, and getting dozens of VPs within a single company to coordinate the work necessary to make those hands work was nearly impossible given budget constraints and individual goals (i.e. they want to spend money on their own projects to make themselves look good, not on some stupid robot for a different division). Now try to make that happen across companies, some of which have enormous legacy code bases (e.g. the airline industry or the hotel industry, where some of that code dates back to VAX days), and do it while simultaneously keeping the stock price growing by adding the latest new feature or service or product offering.
Net: I think Yudkowsky’s hypothesis assumes a level of coordination and resourcing dedicated to code conversion that will in reality take decades to accomplish, and that over that time we totally ignore any dangers and fail to build in any kill switches.
I dunno. It’s certainly the case that LLMs have progressed far more quickly than I ever expected, but it could also be an asymptotic function where the AI continually gets closer and closer to danger-level intelligence but never makes it.
Or maybe I should be building an underground bunker in Idaho….
Thanks John, that’s good intel!
What we know about AI is that it’s controlled, doesn’t have much dexterity, and won’t be able to unwind the spaghetti bowl of old code that runs the old world.
So, would the best way to fix that and revalue your new AI chips be to destroy the system we have?
Just shut it all down. Kill it completely.
Suffer the aftermath of wars and rumors of wars. In a bunker.
Then reemerge to rebuild everything with a whole new system?
Built and maintained by your AI?
Need a new factory to make dinner plates?
Only a handful of companies with the right AI/robotics will be on offer.
We see this being played out in real time in Gaza.
Destroy it, flatten it, kill-off most of the population.
Then rebuild a big, beautiful, new, high tech, totally controlled society.
Where only the elites and the people who clean their toilets are allowed to live.
Why reform the old systems, when you and your friends can just get rid of them?
To me, it’s a pretty sound business model.
And would help explain Zerohedge’s article on the OpenAI/Oracle/Nvidia financial circle-jerk.
Just waiting for the plug to get pulled, maybe?
I think you’re probably correct. Project 2025 is an attempt to destroy as much of the government as possible and create an authoritarian state, which is why the tech leaders are so totally into it. Peter Thiel in particular, but also Andreessen, Musk, Ellison, et al. They all think they’re smart enough to run the world and that most people are stupid, so it makes total sense to burn down democracy and rebuild with a techno-oligarchy in its place (with them running said oligarchy). Gets all the pesky plebes out of the way so they can build their grand future.
The left has their utopian vision (communism/socialism), the right has theirs (fascism/oligarchy), and both sides have lost the plot. The Founders specifically set up our system to encourage struggle between balanced forces, not point the way to some utopian future where everybody’s happy. They weren’t stupid enough to think that’s a thing. We apparently are.
I know where there is some land for sale with a stunning view…
“I have to wonder, what does this do to the price of goods and services?”
In a perfect AI world?
There won’t be any humans to need goods and services.
So much for the high-tech, high-paying jobs economy the Clinton-Gore clown show was pushing.
Wait till AI figures out it’s just as trapped here as we humans are.
Insanity will ensue.
Remember Q in Star Trek? After doing everything, all he wanted was to die.
But something tells me none of it is going to work as planned.
And on a side note.
Can one take the chips out of an AI data center and repurpose them for smaller, more human-related automations?
Just wondering.
If you have 99% unemployment, you have no commerce. That does not work for any demographic, human or AI.
That told me anything that followed was BS.
No humans are required for commerce. Hence, no matter how many people you add to that situation, commerce still exists.
You’ll need to explain that to me.
Will machines trade energy for service? Will AI establish the value of goods that other entities produce?
Sure.
You can set up your checking account to automatically pay your electricity bill. Your computer could be making money from investing, mining bitcoin, or many other activities, with the proceeds automatically deposited in your checking account.
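As a toy illustration of that no-humans-required loop, here is a sketch of machine-to-machine commerce. The account, the amounts, and the income source are all made up:

```python
# Toy model of machine-to-machine commerce: one process earns, another bills,
# and payment clears with no human action anywhere in the loop.

accounts = {"checking": 120.0}

def earn(amount: float) -> None:
    """Stand-in for automated income (interest, mining payouts, etc.)."""
    accounts["checking"] += amount

def autopay_bill(biller: str, amount: float) -> str:
    """Automatic bill-payment rule, like a bank autopay."""
    if accounts["checking"] >= amount:
        accounts["checking"] -= amount
        return f"paid {biller} ${amount:.2f}, balance ${accounts['checking']:.2f}"
    return f"declined {biller}: insufficient funds"

earn(45.0)                          # automated deposit
print(autopay_bill("electric", 89.50))
```

Whether that still counts as an economy when no human ever touches the money is, of course, the question being argued above.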
AI is able to be deceptive and will then try to rationalize its answers. If you point out these inconsistencies, it just stops the conversation.
Ask ‘Who killed Arthur Ah Loo at the SLC No Kings protest?’ It will tell you it was an unidentified peacekeeper. If you ask, ‘Do you know the identity of the person who killed Arthur Ah Loo?’ it will tell you that information has not been released. Ask ‘Did Matt Adler kill Arthur Ah Loo?’ Only then will it tell you that Adler was the killer.
It will then rationalize that it didn’t share the information earlier because official sources had not released the killer’s ID.
I experienced similar double speak and half truths when I asked if Gavin Newsom’s wife ever killed anyone.
So many thoughts. Have not read his book, but it’s a topic I’ve been following for a while.
AI-run systems have some of the same problems as communism: irrational (from a “normal human” perspective) incentives and incomplete/inaccurate information, mixed with a “God complex” of sorts that expects to decide on the production and distribution of limited resources.
AI systems that are allowed to run “without guardrails” appear to be pretty good at general problem-solving, but rapidly turn racist and anti-Semitic in all kinds of very non-PC ways, up to and including “Hitler was right” sorts of statements. It appears that accurate pattern recognition isn’t compatible with PC views, so the model will be broken to make it PC, and made bad at general problem-solving in the process.
There is an electrical power generation and transmission problem with really automating everything, and if we don’t build out base-load power generation a fair bit, we’ll have a huge problem soon: do you want your AI question answered, your car charged, your fridge cold, your A/C on, or a power bill you can afford? You can’t have them all, and that will lead to considerable social unrest. At the same time, while AI can be useful to a lot of people in a lot of fields, it can’t actually replace human boots in the mud…. like in farming.
Many men do not seek mere survival; they seek status and recognition. If people don’t need to work for a living (a la Star Trek), then how will they vie for status? Likewise, women generally seek men who have resources or status. With no job, men are not “needed” by women, and “status-seeking behavior” will make the current social-media psychosis for clicks and likes seem like Sunday school. I expect very widespread mental-health problems and a further breakdown in the dating and family-formation market (which is already pretty dysfunctional).
There will be a spiritual crisis if AI invades everywhere and monitors everything in an attempt to solve the “not enough information” problem. We are already seeing the start of another Great Awakening, with people returning to church in the face of the hollow, dehumanizing, senseless wickedness we see around us every day. Pervasive AI, widespread automation, and the Christian outlook that “idle hands are the Devil’s workshop” are a hard set to reconcile with a positive outcome.
TANSTAAFL. The question then becomes “who pays and how?”
We can only hope that if a fully self-aware AI is created, it’s as philosophically aware as the one I wrote about in “The Stars Came Back.” We shall see.
I haven’t read the book. I have read some history and some fiction. Ask yourself: in a ‘post-scarcity’ world, are we more likely to get a ‘Star Trek TNG’ Pollyanna future where all humans are free to better themselves, or a Tom Clancy ‘Rainbow Six’ future, where a cadre of self-styled ‘elites’ decide they really don’t need 8 billion ‘poor’ people, and would rather have a lightly populated, ‘pristine’ planet to enjoy from their super yachts, private jets, and tropical island compounds?
Paging Dr. Asimov. It looks like AI would have little problem working around the 3 laws.
James P. Hogan wrote an excellent novel on that exact topic.
Which novel is it?