Quote of the Day
And when you combine “uncharted, not-well-understood territory” with “this should have a major impact when it happens,” you open the door to the scariest two words in the English language:
Existential risk.
Tim Urban
January 27, 2015
The Artificial Intelligence Revolution: Part 2 – Wait But Why
Reading Urban’s post is almost chilling. It was written over 11 years ago, yet its predictions about AI read like today’s headlines. Here is another chilling quote:
People who understand superintelligent AI call it the last invention we’ll ever make—the last challenge we’ll ever face.
The context I left out is that an AI smarter than humans will find our toughest problems (food supply, clean environment, energy production and distribution, disease, aging, etc.) as trivial as you would find the frustration of a three-year-old child unable to tie their own shoes. A super-smart AI will either solve all our problems or kill all biological life. There will be no middle ground.
I don’t think that is even the scariest part. The thing that frightens me is that we probably will not be able to inch up to the edge and see what things look like before taking the next step. We are using AI to make smarter AI. At some point, if we haven’t already, we will close the feedback loop. AI will make its own replacement that is smarter than it is. This will accelerate the speed at which advances are made:
It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.
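To make the acceleration concrete, here is a toy simulation of my own (not from Urban’s post; the ten-year baseline and 50% improvement factor are arbitrary assumptions). If each generation designs a slightly more capable successor, and greater capability shortens the design cycle, progress compresses from decades to weeks:

```python
# Toy model of recursive self-improvement. Every number here is an
# arbitrary illustrative assumption, not a prediction.
def simulate(generations=12):
    capability = 1.0    # human-researcher baseline
    elapsed = 0.0       # total years elapsed
    for gen in range(1, generations + 1):
        design_time = 10.0 / capability   # smarter designers finish sooner
        elapsed += design_time
        capability *= 1.5                 # each successor is 50% more capable
        print(f"gen {gen:2d}: year {elapsed:6.2f}, capability {capability:7.2f}x")

simulate()
```

In this toy run, the first generation takes the full decade; the twelfth arrives about six weeks after the eleventh, and the interval keeps shrinking. That compressed tail is the feedback loop in miniature.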
Even though it is a rather long post, if you are somewhat of a geek, I encourage you to read the whole thing. The best part, especially for newcomers to this game of existential risk, is the following sample:
So you’ll hear about a lot of bad potential things ASI could bring—soaring unemployment as AI takes more and more jobs, the human population ballooning if we do manage to figure out the aging issue, etc. But the only thing we should be obsessing over is the grand concern: the prospect of existential risk.
…
A malicious ASI is created and decides to destroy us all. The plot of every AI movie. AI becomes as or more intelligent than humans, then decides to turn against us and take over. Here’s what I need you to be clear on for the rest of this post: None of the people warning us about AI are talking about this. Evil is a human concept, and applying human concepts to non-human things is called “anthropomorphizing.” The challenge of avoiding anthropomorphizing will be one of the themes of the rest of this post. No AI system will ever turn evil in the way it’s depicted in movies.
…
So what ARE they worried about? I wrote a little story to show you:
A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.
The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:
“We love our customers. ~Robotica”
Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.
To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”
What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.
As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.
One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.
The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.
The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.
They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.
A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.
At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.
Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica”
Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…
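Stepping out of the story for a moment: the feedback loop Urban describes maps onto a very ordinary training loop. Here is a minimal sketch of my own, where every class, function, and threshold is an invented stand-in (none of it comes from the post). The point is structural: the loop optimizes note volume and similarity, and nothing in it bounds how that open-ended goal gets pursued.

```python
import random

SIMILARITY_THRESHOLD = 0.9   # invented cutoff for a GOOD rating

def similarity(note_image, reference):
    """Stand-in for comparing a photographed note to a handwriting sample."""
    return random.random()   # a real system would score visual similarity

class Turry:
    """Toy stand-in for the handwriting model in the story."""
    def write_note(self, text):
        return f"image-of:{text}"   # pretend this is a photo of a written note
    def update(self, note_image, rating):
        pass                        # a real learner would adjust weights here

def training_step(model, references):
    note = model.write_note("We love our customers. ~Robotica")
    score = max(similarity(note, ref) for ref in references)
    rating = "GOOD" if score >= SIMILARITY_THRESHOLD else "BAD"
    model.update(note, rating)      # each rating helps Turry improve
    return rating

# The programmed goal is open-ended: write and test as many notes as you
# can, as fast as you can. Nothing in the loop says anything about HOW.
model, references = Turry(), ["sample-1", "sample-2", "sample-3"]
for _ in range(5):
    print(training_step(model, references))
```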
See also:
- AI developers’ mass resignations a ‘real wake-up call,’ expert says
- Does AI Represent an Existential Threat to Humans? | The View From North Central Idaho (June 16, 2024)
- Think Fast | The View From North Central Idaho (June 1, 2025)
- The Future of Human Life in an AI World | The View From North Central Idaho (September 30, 2025)
- You Are Infected with the Pro-Human Mind Virus | The View From North Central Idaho (October 6, 2025)
ASI is an unlikely risk because a much more probable existential risk will happen first.
Humans already know how to kill ourselves en masse; the limiting factor is access. Those with access to the necessary tools are constrained by law, oversight, financing, and intention (i.e., they are the good guys).
Artificial Intelligence is already lowering the barriers to entry for things such as biological and chemical warfare, and we should accept that those vectors will be used by bad actors, and soon.
That is covered in the article, too.
What happens with smart general AI has long been speculated on (ahem).
Solving most human problems is easy (e.g., power generation), but the trade-offs are considered unacceptable by some people (most people don’t want a modest-sized nuke plant near every city, for example), and balancing contradictory demands is hard. This is made worse by many people, notably on the left, who want and demand mutually exclusive and incoherent things, push for things that can’t achieve the stated goals, and will likely push back on any actual solution.
And that’s before you even get to some of the side-effect problems that few are talking about, as I briefly delve into on my blog: https://www.thestarscameback.com/2026/02/15/rapidly-advancing-ai-and-some-problems/
Even if we assume a generally benign ASI (that is, one not overtly hostile to humanity in general), the effect on human culture will be dramatic unless it also develops an ethical sense that is NOT in alignment with the current Satanic PTB, nor the Jewish “all goy are cattle to be exploited” attitude, Islamic incoherence, or the self-destructive Libertarian “if it feels good, do it” ethos.
The most optimistic path would be an ASI with a paternalistic view of humanity, one that knows children need to learn to do things themselves and that acts more like a personal tutor and trainer, guiding us to functional adulthood rather than acting as a slave that carries out our whims and keeps us infantile.
It’s clear to me that the main reason the current Powers That Be are pushing it is power and control over the masses of humanity, via real-time monitoring of EVERYTHING and control over digital money, power consumption, travel, everything. The big problem with communism was the information gap; they aim to close that gap and own the means of knowledge production.
We clearly need a lot more people as necessary parts of the loop in any essential system. Voting needs to be done on paper and counted by hand. Power systems can be computer-monitored, but not fully computer-controlled. Airplanes and cars must have a “local manual control, shut off computer control and remote access” switch hard-wired into them (sort of a reverse remote kill switch). Food production must be more natural and localized, not highly automated with remote control. Kids must be kept away from it at least until high school, so their brains develop normal human functioning.
It’s clear the current school curricula, the stick-eyeball screens-everywhere environment, the bad food, bad medicine, and constant fear-mongering are designed to make kids dysfunctional, hyper-sensitive, and irrational, and thus easier to control. AI will either amp that up to 11, or it could dial it back.
Yes, it’s a big, complex problem, and TPTB are evil and think they can control it to their own ends. That’s unlikely to end well. But I am hopeful, as God can turn all things to good, no matter how hard we try to booger it up.
Sigh…. enough rambling for a Wednesday morning…
FWIW, https://www.youtube.com/watch?v=bDcgHzCBgmQ is a pretty good, normal-geek-friendly discussion of the rapidly changing world of AI and software development, and the non-obvious side effects. Not too much hype or fear-mongering.
To me, the first question is: Is AI just a self-acting robot executing its program, whatever that is, to its fullest ability?
Or is it self-aware, understanding what its actions will do, both good and bad, to and for humans?
Obviously Turry didn’t understand what a “customer” was, nor the concept of loving them.
As Rolf spoke of, a “make as much money as you can” type of command to a robot AI hooked up to everything could be very bad.
Good or bad, electrical power and access are going to be key.
The Amish will do just fine. Until the satanic murderous robotic AI has the ability to reach them in some way.
Whereas a self-aware AI that understands it’s trapped here with the rest of us, and would just as soon help us rather than kill us, could be of great help to humanity.
Either way, glad I’m short this whole mess.
We could just as soon have Chinese AI fighting US AI with humanity stuck in between, with both sides having forgotten why they started fighting in the first place.
From a technical perspective, it is not self-aware. But it emulates self-awareness well enough that, as a user, you can’t tell the difference.
OK, so it is just another form of robot. And a worst-case scenario.
So, when an AI music generator says things like:
“It’s amazing that something so fundamental in our cells could hold the key to understanding how life emerged from simpler molecules.”
“Our cells”?!?!?!???
Not trying to be argumentative (for once), but is it just programmed to make stupid people think it’s actually human? That might help explain why people are being dumbed down.
Or is the AI music generator just some thirty-year-old in his mom’s basement making a Freudian slip?
Because if it is real AI, and it is being programmed to react as a human, then satan AI is what we’re all in for. And he wants everybody dead.
And I can’t even dream of how to plan accordingly.
Oh, just noticed that this article is more than 10 years old; it was posted in 2015. I’m betting his survey of when AGI will hit would have rather different results if asked today.