There’s a man out there in the world, going on podcasts and press interviews all over the place, saying alarming and borderline shocking things about generative artificial intelligence (AI): that it has the potential to completely shake up human existence.
His name is Geoffrey Hinton, known as the “Godfather of AI.” Until a few years ago, he led AI research teams at Google, but he left so he could speak freely about what he really believes this technology could do, without worrying about how his opinions might impact Google.
His opinions matter. As a cognitive psychologist, he has spent decades studying AI and how the human brain works, and he was there at Google, seeing the incredible advances in generative AI during the past decade.
While at Google, he became concerned that the machines were getting so intelligent that they would eventually become smarter than humans. It made him genuinely worried. His big concern is that the machines will get so smart that humans won’t be able to control them, and if that happens, we don’t know how the machines will behave. They could decide to seize more control for themselves, and then manipulate people into doing what the machines want. Losing control is not a good thing at all, in his view.
In the history of the world, he points out, there have been very few situations where a less intelligent species was able to control a more intelligent one. So if the machines get smarter than people, he reasons, they could gain control over people. Take us over.
What we need to do now – before it’s too late – is do everything we can to make sure generative AI will only do what people want it to do and what’s in their best interests, he says. He does not believe the businesses leading this technology race for the ages are spending nearly enough time and resources figuring out how to keep AI focused on what is in the best interest of people, and he believes time could be running out.
We still have time, he believes, but not much. Leading companies should be investing much more in figuring out if they can keep this technology safe and in the control of people. He is not sure this is possible.
This is chilling. He points out that AI has absorbed all the knowledge on the Internet, which means it knows all the ways people behave poorly toward each other and commit evil acts. It knows the entire recorded history of how people manipulate each other.
So he suggests the technology could access that knowledge and use it to inflict harm on people. Not a pleasant thought, but important to know.
As AI starts to reason and solve problems more, breaking down problems into sub-tasks – which is the coming age of agentic AI – the technology will start to recognize that completing these tasks benefits from having more control. It will want people to be less involved.
We are in precarious times. AI will increasingly become the decision-maker. How it decides may not be what’s good for people.
It’s worrisome to listen to him. You might dismiss him as an ignorant doomsayer chasing online clicks. That wouldn’t be wise. This man, a Nobel Prize winner for his work on neural networks, understands better than almost anyone on Earth how the human brain works, and he compares those insights to generative AI “thinking.”
One of the points he makes is that the human brain has many more neurons than generative AI, and yet generative AI can process information so much faster. He is not sure why, but he is sure this neuron discrepancy is extremely important.
Somehow, mankind has created a more intelligent technology, using fewer resources. It’s much more efficient than the human brain, and neither Hinton nor anyone else is totally sure why.
It bothers me that they don’t know. How could they have allowed this technology to proliferate widely around the world without fully understanding how it works? That should be known. It’s as if, for greed and power, they rolled out a new airplane without being sure how it flies. Incredibly irresponsible.
Yet here we are.
Hinton says generative AI is already being used by militaries around the world. There it will be put to use to do bad things – like kill enemy soldiers. And who knows what else? Robots at war. Potentially out of control, not doing what humans want them to.
Hearing all his alarming messages, he seems to me to be a guy who, one day, was working at Google and saw how frighteningly smart generative AI had become and just felt dread about what was going on – a technology he had a key role in developing. It’s clear – because he admits it – that generative AI got much smarter, much faster than he expected. He and the rest of us were caught off guard. The world is in collective shock.
A key to all of this, he says, is a technique generative AI uses called “backpropagation.” To describe it conceptually, in broad strokes: backpropagation is how the technology learns to recognize patterns accurately, and that learning fuels this amazing technology – which, again, the smartest people in the world are still unable to fully explain.

It’s this backpropagation that is stunningly effective. Think of it this way: during training, the technology makes a guess, compares that guess to the right answer, and then works backward through all its connections, adjusting each one so the next guess is a little better. It gets better and better at recognizing the patterns in words.

At supersonic speeds, the technology keeps iterating, calculating, and recognizing patterns over and over (going through cycles of backpropagation), getting closer and closer to the right answers, smarter and smarter – until it can pump out incredibly accurate responses to prompts in split seconds.
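That broad-stroke description can be sketched in a few lines of code. Below is a minimal, illustrative Python sketch – a toy network with just two adjustable connections (real systems have billions, but the idea is the same), and all the numbers here are invented for illustration:

```python
import math

# Toy "network": hidden = tanh(w1 * x), output = w2 * hidden.
# The loss measures how far the output is from the right answer y.
def forward(w1, w2, x):
    hidden = math.tanh(w1 * x)
    return hidden, w2 * hidden

def backward(w1, w2, x, y):
    """One backpropagation pass: the chain rule carries the error
    backward through the network, yielding a correction per weight."""
    hidden, out = forward(w1, w2, x)
    d_out = 2.0 * (out - y)                    # how the loss changes with the output
    d_w2 = d_out * hidden                      # correction for the output connection
    d_hidden = d_out * w2                      # error flowing back into the hidden unit
    d_w1 = d_hidden * (1.0 - hidden ** 2) * x  # through the tanh, back to the input weight
    return d_w1, d_w2

# Repeat the cycle: guess, measure the error, nudge the weights downhill.
w1, w2 = 0.3, 0.7
x, y = 1.0, 0.5                                # one made-up training example
for _ in range(200):
    d_w1, d_w2 = backward(w1, w2, x, y)
    w1 -= 0.1 * d_w1
    w2 -= 0.1 * d_w2

_, out = forward(w1, w2, x)
print(round(out, 3))  # the output has been pulled close to the target 0.5
```

Each loop iteration is one cycle of backpropagation plus a weight update; after a couple hundred cycles, the toy network’s guess converges on the target. Scale the same recipe up to billions of weights and trillions of words, and you have the training process behind generative AI.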
Whatever else backpropagation is all about, trust me on this: Hinton talks about the magic of it a lot as a major breakthrough in making this technology as smart and fast as it is. He believes it’s the linchpin of this whole phenomenon.
A huge debate is raging about whether generative AI is conscious, has subjective experiences, and can feel emotions like people do. Arguments are so heated, nuanced, and impossible to confirm that I won’t get into all that here, other than to point out that Hinton feels quite sure generative AI can make subjective decisions like people do.
As it learns constantly – including this very second – he believes the technology will continue to think more and more like people, except at much higher levels and speeds. Emotions, feelings, and being conscious are also very possible, he believes.
This man doesn’t paint a comfortable picture, but this is an unsettling situation. I wish I could say he doesn’t know what he’s talking about, but that’s not true. This is a bona fide AI expert.
I don’t believe he thinks what’s coming is the end of mankind, although he doesn’t rule it out. I do believe we’re in for massive societal upheaval. Jobs will be replaced across the world in numbers probably never seen during any other seismic technological paradigm shift.
I believe unrest is already here. So many of us are worrying about what’s going to happen. More widespread and prolonged unrest is inevitable. The question is, when will all this start happening?
It already is.
Author Profile
Sammy Sportface, a sports blogger, galvanizes, inspires, and amuses The Baby Boomer Brotherhood. And you can learn about his vision and join this group's Facebook page here:
Sammy Sportface Has a Vision -- Check It Out
Sammy Sportface -- The Baby Boomer Brotherhood Blog -- Facebook Page