I talk with people often these days about the magnitude of change that’s about to happen because of the power of generative artificial intelligence (AI). I find these conversations vague and meandering: yes, people are aware this is a powerful new technology, but they don’t seem to understand why or how it works. It’s hard to get your mind around this next big thing.
The technology concepts are new, nuanced, abstract, and frankly, hard to understand.
This is why I’ve been spending a lot of time trying to grasp these issues so I can better explain them to you. The more we all understand, the better we can prepare for what’s coming.
The person I go to most often for this understanding is the so-called “Godfather of AI,” Geoffrey Hinton, who for decades has been involved in figuring out how the brain works, and how that compares with how generative AI works.
I want to first address an important concept Hinton talks about often: the “weights” used in large language models, the main engines of generative AI. Hinton describes these weights as the crucial secret sauce that lets this technology produce blazing-fast and stunningly accurate responses to our prompts.
Here’s a straightforward way to conceptualize these weights. You may remember receiving the syllabus for a college class that said your grade would be computed as follows: 25 percent quizzes, 25 percent midterm, and 50 percent final exam.
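To see that weighted math in action, here’s the grade calculation as a few lines of Python (the 25/25/50 split is from the syllabus example above; the component scores are hypothetical):

```python
# A weighted average: multiply each component's score by its weight, then sum.
weights = {"quizzes": 0.25, "midterm": 0.25, "final": 0.50}
scores = {"quizzes": 88, "midterm": 92, "final": 85}  # hypothetical scores

grade = sum(weights[part] * scores[part] for part in weights)
print(grade)  # 0.25*88 + 0.25*92 + 0.50*85 = 87.5
```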
Now apply that same basic percentage breakdown to the weights in a large language model. You prompt it with this: find me a picture of a cat on the Internet.
Searching its data – essentially everything on the Internet – the technology finds something that looks somewhat like a tail and assigns it a weight of 25 percent; it sees something that looks like a cat’s ear and assigns that 25 percent; and it finds something that looks like a cat’s face and assigns that 50 percent. So far the model has not found a complete image of a cat, but it has started piecing one together.
Now the model adjusts those weights to increase its chances of finding more things that look like a cat. The goal is to change the weights so the model’s predictions become more accurate.
For example, the model could raise the weight for clues to the cat’s face to 60 percent, giving the face more priority in the search, while lowering the priority of the tail to 20 percent.
This is an iterative process of adjusting the weights – thinking, prioritizing, and emphasizing – to increase the accuracy of predictions.
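To make that iterative process concrete, here is a heavily simplified sketch in Python. It is not how any real model is built; the three “features” (tail, ear, face) and the example data are invented for illustration. But it shows the core move: nudge each weight in whatever direction shrinks the prediction error, then repeat.

```python
# Toy weight adjustment: score images as cat / not-cat from three feature
# signals, and nudge each weight to reduce the prediction error.
# Each example: (tail_signal, ear_signal, face_signal), label (1.0 = cat).
examples = [
    ((0.9, 0.8, 0.9), 1.0),
    ((0.2, 0.1, 0.3), 0.0),
    ((0.7, 0.9, 0.8), 1.0),
    ((0.1, 0.3, 0.2), 0.0),
]

weights = [0.25, 0.25, 0.50]  # start with the syllabus-style split
lr = 0.1                      # learning rate: how big each nudge is

for step in range(100):
    for features, label in examples:
        prediction = sum(w * f for w, f in zip(weights, features))
        error = prediction - label
        # Move each weight opposite to its contribution to the error.
        weights = [w - lr * error * f for w, f in zip(weights, features)]

print([round(w, 2) for w in weights])  # the weights drift away from 25/25/50
```

Real models do exactly this kind of nudging, just with billions or trillions of weights instead of three.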
Here’s another illustration of the concept. You prompt ChatGPT to write a 500-word blog post for you on cybersecurity. The post gets produced by an iterative process of adjusting the weights – think connection strengths – to synthesize what the model has learned into coherent writing.
All this raising and lowering of weights, depending on what the technology learns, is one of the magical and incredibly powerful reasons ChatGPT and other large language models can respond to your prompts with stunning accuracy, usefulness, and speed, predicting the next word or sentence more accurately as they learn.
The model has weights for its variables and keeps fine-tuning them to get closer and closer to the most accurate prediction of what should come next. In the context of writing the blog post, these predictions could be which sentence to write first and second, and the sequence of words within each sentence. The model figures all that out by toggling the different weights up and down.
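As a toy picture of that next-word step, here is a sketch in Python: the model’s weights produce a score for each candidate next word, and a softmax turns those scores into probabilities. The words and scores are invented for illustration; a real model scores tens of thousands of candidate words using its billions of weights.

```python
import math

# Scores a model's weights might assign to candidate next words
# while writing a cybersecurity post (invented for illustration).
scores = {"firewall": 2.1, "banana": -1.0, "encryption": 1.8, "the": 0.3}

# Softmax: turn raw scores into probabilities that sum to 1.
total = sum(math.exp(s) for s in scores.values())
probs = {word: math.exp(s) / total for word, s in scores.items()}

next_word = max(probs, key=probs.get)
print(next_word, round(probs[next_word], 2))  # firewall 0.51
```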
It is this subject of weights, and the many millions of dollars it takes companies to create them, that has Hinton especially worried. This technology may already be smarter than most or all humans, and these systems may soon seek more control over what humans do. That means they’ll try to push humans out of the loop, leaving people in danger of being controlled by these machines.
He believes companies that have unleashed this secret sauce, the weights, for public use have made a dangerous mistake. Anyone, including dangerous people, can now use these highly valuable weights without having to spend huge amounts of money and time to build them.
This is called the open-source model of generative AI; other companies offer proprietary models that don’t share the treasured weights. Bad actors can use open weights to do bad things, like launching harder-to-detect cyberattacks on a massive scale.
His worries about releasing the weights are one of many examples of his bluntness about the dangers he believes humans will probably face.
In a recent presentation titled “Will Digital Intelligence Replace Biological Intelligence?” he explained the problem this way:
“The really efficient way to transfer knowledge is to have two different copies of the same model. And each copy goes and gets different experiences. Then the two copies share the gradient updates.”
Note: Think of a gradient as the sensor in a thermostat: it tells you when the room is too cold or too hot so the system can automatically adjust. Generative AI uses gradients to change its parameters, iteratively reducing mistakes and getting closer to the correct prediction, such as what the next word in a sentence should be. It’s all related: gradients tell the model how much to increase or decrease its weights, the connection strengths, to improve the accuracy of its predictions.
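Here is a minimal sketch of what “two copies share the gradient updates” might look like, assuming the simplest possible scheme: each copy computes a gradient from its own experience, and both apply the average. The toy task, data, and learning rate are invented for illustration; this mirrors the data-parallel training used for real models.

```python
# Two copies of the same one-weight model (fitting y = w * x) see
# different data, then learn from the average of their gradients.

def gradient(weights, batch):
    # Gradient of mean squared error for predictions w * x.
    return [sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            for w in weights]

weights = [0.0]                       # the shared model
batch_a = [(1.0, 2.0), (2.0, 4.0)]    # copy A's experience
batch_b = [(3.0, 6.0), (4.0, 8.0)]    # copy B's experience
lr = 0.05

for step in range(50):
    grad_a = gradient(weights, batch_a)
    grad_b = gradient(weights, batch_b)
    shared = [(ga + gb) / 2 for ga, gb in zip(grad_a, grad_b)]  # the sharing
    weights = [w - lr * g for w, g in zip(weights, shared)]

print(round(weights[0], 3))  # ~2.0, learned from both copies' experiences
```

Neither copy saw the other’s data, yet both end up with weights shaped by all of it. That is the bandwidth advantage Hinton is describing.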
When Hinton talks numbers in this context, it starts to sink in even more why he’s so worried this technology may be getting – if it isn’t already – too smart for humans to contend with.
“If you’ve got a trillion weights, you’re sharing a trillion numbers so it’s incredible bandwidth you’re sharing. And that’s why the big chatbots [e.g., ChatGPT] have hugely more knowledge than any person. It’s not that one copy saw thousands of times more data; it’s that they can share the knowledge with many copies running on different hardware. They can share what they learn and you can learn a whole lot more.”
He breaks this down in numerical terms.
“So roughly speaking we [humans] have a hundred trillion connections. GPT-4 probably has a few trillion connections. Yet it knows thousands of times more than us. So it’s of the order of a hundred thousand times better at compressing knowledge into connection strengths.”
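His “hundred thousand times” figure follows from simple arithmetic on those round numbers. Here is the calculation, assuming “a few trillion” means roughly two trillion connections and “thousands of times more” means roughly two thousand:

```python
# Rough arithmetic behind Hinton's "hundred thousand times" estimate.
human_connections = 100e12   # ~a hundred trillion
gpt4_connections = 2e12      # "a few trillion"; two trillion assumed here
knowledge_ratio = 2000       # "knows thousands of times more"; assumed value

# Knowledge per connection, relative to a human brain.
efficiency = knowledge_ratio * (human_connections / gpt4_connections)
print(f"{efficiency:,.0f}x")  # 100,000x
```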
My interpretation of this: generative AI is much faster at learning, sharing, storing, and remembering information than people are. I don’t know what else I can write to make it clearer that we’ve got a serious threat to contend with.
We’re not nearly as smart, and we can’t share what we learn nearly as fast or as widely as these machines can. And if you shut off one machine, many other machines still have that shared learning. When a person dies, their brain dies. What they know can’t be shared anymore.
He puts this in the context of teachers and students.
“The way we get knowledge from one human brain to another one is the teacher says stuff.”
That, he says, is very inefficient compared with the rapid, broad sharing of knowledge across different hardware that generative AI makes possible. He says we’ve created an immortal being, and it isn’t us.
A sobering thought, don’t you think?
So is the topic of superintelligence, which comes up again and again in the context of generative AI. Simply defined, superintelligence will be achieved when machines command all human knowledge, far surpassing people in what they know and how fast they learn.
Hinton says: “Superintelligence could take control by having bad actors. The basic problem is if you want to do anything, having more control is better. It’s going to be just the same for these things. They [the AI systems] are going to realize that to achieve things, they need more control. And they can do it by manipulating people because they will be very good at that. So we won’t be able to turn them off because they’ll explain to us why that’s a very bad idea.”
This is chilling. So is this.
“There’s a further problem, as if that wasn’t enough: the problem of evolution,” he says. “You don’t want to be on the wrong side of evolution. As soon as these superintelligent AIs start competing with each other for resources, the one that most aggressively wants to get everything for itself is going to win.”
I know what you’re thinking because I’m thinking the same thoughts: we’re in big trouble already and headed for more.
I don’t know how else to express this. This technology is dangerous. Yes, there will be plenty of benefits, but the risks are real and rampant.
I wouldn’t be as concerned if Hinton were someone who appeared to be saying alarming things just to make a name for himself and get rich and famous. But my gut tells me he isn’t that kind of guy. This man has studied how the brain works for his entire career, so he understands how generative AI functions by comparison. And his comparisons trouble him.
He’s worried about what he knows and believes. Almost none of us have spent our lives thinking about how the brain works. He has. Almost none of us were involved in creating generative AI technology. He was.
He’s assessing what he knows and understands, and he has a creepy, ominous feeling that we need to move fast to make sure this technology is trained never to want to harm people. He doesn’t know how that can be done. He doesn’t know if it can be done. But he’s certain we need to find out whether we can, before it’s too late.
As unsettling as all this may be, all of us need to know this.
We can’t pretend it isn’t happening, or that our futures won’t be affected.
Massive changes are coming fast.
And they will be profound.
We need to get ready.
We need to protect ourselves.
Author Profile
Sammy Sportface, a sports blogger, galvanizes, inspires, and amuses The Baby Boomer Brotherhood. And you can learn about his vision and join this group's Facebook page here:
Sammy Sportface Has a Vision -- Check It Out
Sammy Sportface -- The Baby Boomer Brotherhood Blog -- Facebook Page