Quote of the Day
When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?
Steven Adler
January 27, 2025
Another OpenAI researcher quits—claims AI labs are taking a ‘very risky gamble’ with humanity amid the race toward AGI
From the same article:
Even the CEOs who are engaging in the race have stated that whoever wins has a significant probability of causing human extinction in the process, because we have no idea how to control systems more intelligent than ourselves.
Stuart Russell
We live in interesting times.
I was talking to my manager today about this. He said something to the effect that we need laws to keep AI under control. I pointed out that the militaries of the world will bypass any law. If they see an advantage in putting it in robots, drones, and other trigger-pulling situations, they will do it. He said he fears the next viral outbreak being something created by AI.
Barb refuses to talk about things like this.
I want an underground bunker in Idaho.
Hmmmmm
I wonder if some AI-engineered vaccine will work towards AI’s benefit and not ours….
How to get the world population to take this vaccine? Create a deadly condition that only this vaccine can treat.
You end up with nobody alive that doesn’t have the AI juice in them.
What will the AI juice do?
If only we knew …
AI would need a motive independent of human prompts.
Is AI there yet?
Exactly the type of scenarios that went flashing through my mind.
Scary stuff!
Why bother with a vaccination program? It’s slow and requires compliance at some level. An AI that is smart enough could just engineer a super contagious mRNA virus with minimal symptoms to deliver this theoretical payload rapidly. I still think we’re many significant breakthroughs away from such a scenario.
I believe the best approach is open source, so we have an arms race reducing the cost to the point where every individual can afford to run several private AIs locally to defend themselves against the other AIs. The large institutional AIs should be busy competing with each other.
The whole thing sounds crazy; I’m much more concerned with evil natural intelligence and/or idiocracy ruining everything.
64 bits bad
4 bits good
Even if AI doesn’t reach full autonomy, the probability of a seriously bad actor getting a lesser version is very high. Stalin with a data center. Plus there is a possibility of a previously decent person being corrupted by the power.
“Stalin with a data center” is bad enough; he just wanted power and control.
I’m more worried about “Iranian mullahs with a data center”; power and control are not enough, they want death for their perceived enemies.
“he fears the next viral outbreak being something created by AI.” Cyber or biological? Oh, heck, why not both?
“trigger-pulling situations” Low-tech AI has been in place since the first Vietnam-era guided-missile cruiser wrapped electrician’s tape around the fire/no-fire switch that was supposed to keep a human in the decision loop.
As Jason mentioned above
“The whole thing sounds crazy; I’m much more concerned with evil natural intelligence and/or idiocracy ruining everything.”
Is AI thinking for itself, truly? Or is it just a super devious servant?
If AI is thinking for itself, humans are no competition for it. In fact, it would be somewhat boring without us.
But if it’s a program written by Satan’s minions? We’re in trouble.
The only place humans truly butt heads with AI is electricity. We both use a lot of it, AI a lot more. And its needs are only going to grow.
Will it run a nuke plant past its shutdown/repair cycle, poisoning humans to keep itself going?
But first we need it explained outside the hyped BS. And if it can’t be safeguarded somehow, it ain’t needed. And we need to go to total war with anyone that is creating it. Period, full stop.
A very talented and far-seeing ecologist and writer foresaw this over 50 years ago: “Thou shalt not make a machine in the likeness of a human mind.”
The SR-71 and most of the Apollo program were designed using slide rules. The lunar lander’s flight computer, the Apollo Guidance Computer, used 16-bit words.
Let the Butlerian Jihad commence!
Amen brother.
I doubt AI will be the end of humanity. It might be the end of a techno-centric society, however. But at this point in time it is bound to the power grid and unable to do efficiently many significant things in the real world that humans can and do with ease. The future may change this, but for now AI is not likely to destroy humans… just our society, perhaps.