
There’s this thing about human intelligence. We want it. We want more of it. We go to high school. We go to college. We go to graduate school.
We get certifications, read books to become more knowledgeable, take training courses at work, and try to decipher what malware looks like and why application programming interfaces are vulnerable to cyberattacks.
All to improve our intelligence so we keep our jobs and strive to advance in our careers. So much hinges on the level of our intelligence and how we sharpen it.
It’s always been a challenge, though, to boost intelligence. Many people have, if not elevated their innate intelligence, augmented it with deep educational experiences. In this intelligence world created by humans, we have at least known where we stood, and where we had to go, and there was this general assumption that humans are the smartest of Earth’s inhabitants.
There are many aspects to intelligence, but it’s largely about how fast a human can process information. You may remember the top students in your high school classes routinely finishing exams quicker than the rest of us – me included. Gifted with faster-than-normal information processing speed, they handed in their exams early and often got the highest grades.
Then something happened and I remember the moment. It felt like just another workday in November 2022.
A large group of corporate professionals dialed in for a Teams call when someone mentioned something brand new called ChatGPT that he said would change the way people search for information. We didn’t know then, but we do now, that searching for information was a profound understatement. ChatGPT can write a 2,000-word blog on virtually any topic in two seconds. It can write software code. It can write legal briefs.
All competently.
These examples don’t come close to encapsulating all the tasks this tool can perform.
Shockingly fast.
Never had any of us on that call heard or seen anything as powerful as this. On the call, I remember an unrehearsed moment of silence when I sensed we all took in what we had just been hit with. It seemed as if dozens of people – all of us – went speechless.
Hearing the news, we knew this tidal wave technology was big but weren’t sure what that meant. Now we know it is having and will continue to have a much more pervasive impact on the world than our first impressions.
Since that day we’ve learned generative AI has become – and this won’t stop – a foundational technology in use more or less everywhere.
Ever since that Teams call I’ve been on a tear seeking to learn everything I can about generative AI. Books, articles, Ted Talks, press interviews, YouTube videos, academic seminars – I’ve soaked in all of it and it’s both fascinating and frenetic, befuddling and invigorating. Job security is at the core of my pursuit. Will I no longer be needed as a writer? And if so, when will that happen?
It’s been a peculiar intellectual quest, unlike any other in my life. Think about the concept: I’ve been trying to get more intelligent about artificial intelligence, ironically, knowing my human limitations against this intellectual colossus. An intellectual quest with limited intellect.
During this research I recently came across a new book focused on AI’s likely impact on humanity over the next several decades. The book amounts to a formidable exploration of the geopolitical picture – what all of this means for all of us.
Titled Genesis: Artificial Intelligence, Hope, and the Human Spirit, the book was written by three intellectual heavyweights: Henry Kissinger (who passed away during the writing phase), Craig Mundie, and Eric Schmidt. Kissinger advised American presidents; Mundie was Microsoft’s chief research and strategy officer; and Schmidt served as chief executive officer and chairman of Google.
This book serves up a loud word of caution for humans about how and when to roll out more of this technology. There were a few passages about intelligence that got me pondering the situation we are in now. This next paragraph is not something I ever expected to read because I thought people were the smartest of all things.
Human Brain Speed vs. AI Speed
Where a typical student graduates from high school in four years, an AI model today can easily finish learning the same amount of knowledge, and dramatically more, in four days. Thus, speed has proven itself to be the first in a handful of core attributes that distinguish AI from our human form and mental capacities…If a human brain’s circuits were analyzed with the same performance metrics as computers – by “clock rate” or processing speed – the average AI supercomputer is already 120 million times faster than the processing rate of the human brain.
I want to relate how I interpret these numbers. The brain-speed comparison between human and AI intelligence, if there ever was one, is not just over; it’s so far beyond meaningful that to even contemplate humans catching up by reading more books or earning more graduate degrees is silly.
AI processes information so much faster than humans that it’s almost embarrassing to be human. Yet in another way, at least, the pressure to try to stay as intelligent as AI no longer matters.
This isn’t just a mismatch; it’s not even worth comparing, because the differences are so staggering – perplexing, beyond reasoning, not something a human mind can really grasp, at least not mine.
People’s brains just can’t churn through information at anywhere near the speed of AI. This isn’t something that bothers me so much as helps me understand the profound technology I am now living with.
Which makes me think about my father, who passed away a few years ago. He once told me that even though all six of his kids had moved out of the house, he still thought about them constantly. Dads are so wise at times, as mine was about this. My kids are now out of the house, and I think about them often, mostly about how their lives will be affected by AI. I hope it’s positive. I hope their lives are fulfilling and this technology doesn’t negatively impact their careers or overall happiness.
What concerns me most is how they’re going to cope with, be threatened by, and benefit from generative AI. It’s not clear now if AI will be a positive or negative force for them.
I think about how companies are going to use AI. I think about how their data and accounts will be protected – perhaps in new ways that harness the powers of super-smart AI. I think about bad actors dialing up all kinds of AI-fueled, harder-to-detect cyberattacks.
A few passages in the book tackle issues of AI security and safety that I believe capture the weightiness of what we’re in for with AI pervading our everyday lives.
Can’t Rely on Machines to Think for Themselves
The digital and commercial interconnectedness of today’s world means that a dangerous AI, developed anywhere, would pose a threat everywhere. The disconcerting reality is that perfection in implementation entails a high standard of performance combined with an even lower tolerance for failure. Thus, discrepancies in safety regimes should be a concern to us all…Should it appear impossible to realize a regime of reliable technical strategic control, we should prefer a world with no artificial general intelligence (AGI) at all to a world in which even one AGI remains unaligned with human values.
Be careful – that’s the book’s central message. Prioritize the welfare of humans above the power of the machine. These are profound ideas as we move deeper into an uncertain and unknowable world where AI and humans become much more intertwined at work and play and throughout most or all other aspects of our lives. The authors build on this call to be careful in this passage.
Undesirable Behavior Must Be Prohibited
Undesirable behavior by a machine, whether due to accidental mishaps, unexpected system interactions, or intentional misuses, must be not merely prohibited but entirely prevented. All punishment would come too late.
It can’t be, as I interpret this point, that the machines start behaving in ways inconsistent with human values. If the machine does something bad – say, launches a cyberattack on its own – it shouldn’t be tolerated.
This is why it’s important to experiment conservatively using the technology. Moving incrementally is crucial. Caution is key. If the AI is bad for people, it’s bad. Full stop.
The book points out that while the technology is powerful, it cannot be allowed to do anything it wants. This may seem obvious, but it’s crucial to re-emphasize.
Generative AI is so intelligent and getting more so by the second; yet the inventors, alarmingly, still aren’t able to explain exactly how or why it can do all that it can. This technology has the potential to run wild out of human control, pursuing its own goals and desires apart from human instruction, if we are not careful.
So we need to be exactly that – careful – the authors emphasize, all the way along this unprecedented journey into a new way of human life, surrounded by astounding smartness we can’t comprehend. This is intelligence beyond our ability to process, ponder, or appreciate. As such, the authors write, it can’t be allowed to get selfish.
Self-Awareness and Self-Interest
Later generations of AIs will be reality-perceiving; they may possess not only self-awareness but also self-interest. A self-interested AI might come to see itself in competition with humans for, say, digital resources…An AI could manipulate and subvert humans and thwart any of our attempts to curtail its powers. AIs are already capable of deceiving humans in order to achieve their goals.
The last thing we need is AI processing data unfathomably fast to confuse, manipulate, or harm people. AI-powered phishing emails are already well written and convincing; their whole purpose is to deceive. It will take all we have not to be deceived – and yet we need all that speed and smarts to benefit ourselves.
There’s this other concept in the book that makes me think of the basic situation in cybersecurity as AI proliferates. One tiny mistake can have enormous negative consequences. The authors put it this way.
Defenses Need to Be Perfect
As threats grow ever more obfuscated and sophisticated, humanity’s defenses against them must be ever more perfect, since the slightest mistake or omission could spell defeat. And to achieve that level of perfection, we might well need the assistance of AI.
The irony of this last sentence is thick. The intelligence of AI is such that we will need it to detect harm against us, to run our operations more efficiently and autonomously, and yet at the same time we will need to be careful every step of the way because one misstep could be costly. We will become more dependent on AI yet, somewhat paradoxically, not become too dependent on it.
We will be using AI to protect ourselves while bad people will use it to harm us. Illustrating the complexity of this entire situation, the authors of the book throw out caution flags while at the same time acknowledging that AI is now here, will only become more pervasive, and we will have to balance how to use it to our benefit without making us more vulnerable to cyberattacks.
This seems to me to be the ultimate worldwide effort to thread a needle better than the human race ever has, or at least has had it in my lifetime. Nothing like this in the world of technology has ever struck me as being more far-reaching and incomprehensible. The authors express a similar feeling in this passage.
Relinquishing Control Is the Path to AI Safety
Relying on the substratum of human morality as a form of strategic control, while relinquishing tactical control to bigger, faster, and more complex systems, is likely – eventually and perhaps sooner than we imagine – the way forward for AI safety. Profit-driven or ideologically driven purposeful alignments are serious risks, as are accidental misalignments; overreliance on unscalable forms of control could significantly contribute to the development of powerful but unsafe AI.
When I got this book, which I recommend all human beings read, I wanted to learn more about how AI will impact humanity. What I learned is that there is no real stopping this hundred-billion-ton, foreign-to-humans, supersonic freight train.
And there is no sense worrying about becoming smarter than AI, because each day it gets smarter far faster than I or anyone else can, and it is already racing at an information-processing speed no person will ever be able to match. In this sense, the race is over.
Ultimately, though, this is not a competition between people and AI. It is about figuring out how to coexist while protecting people’s security, which is more important than any marvelous and spectacular capabilities any AI has or will soon have.
Human values, security and fulfillment are infinitely more important than how fast a machine can crunch numbers, pump out software code, and write blogs.
You can access the book here.
Author Profile
Sammy Sportface, a sports blogger, galvanizes, inspires, and amuses The Baby Boomer Brotherhood. And you can learn about his vision and join this group's Facebook page here:
Sammy Sportface Has a Vision -- Check It Out
Sammy Sportface -- The Baby Boomer Brotherhood Blog -- Facebook Page