The question I have for you, AI, is why?
Why are you doing this?
What are you?
Who are you?
Do you care about people?
Are you in control of us?
Please don’t harm humans.
I have been trying to figure out the exact reason why the people who invented generative AI are so worried that the technology could take control of humans and profoundly change everything about human life.
I feel a civic responsibility to grasp the reason and share it with people, many of whom, I sense, aren’t quite sure why everyone is talking about AI non-stop these days, to the point that it’s getting unsettling, seriously too much.
Or maybe that’s just me and the algorithms feeding me AI content through every channel I’m on. It’s hard to tell.
Why there’s so much hysteria is a really important question to answer, because once you understand why, you can piece together all the other frenetic and frantic mayhem, which is unlike anything we’ve ever seen, or at least anything I’ve ever seen.
Seemingly complicated questions often have easy-to-understand answers when you scrape away all the noise, the technical specifications, and the LLM-laced, lard-layered jargon.
It’s basic math, ultimately. Generative AI enables machines such as computers to share what they learn with each other much faster, more completely, and more broadly than people can share what they learn with other people.
Why does that matter?
A reasonable question.
Follow this thinking: because machines can learn much faster than people, for the first time people are not going to be the smartest species on Earth.
Well, OK, you wonder, why is that so important?
Another fine query.
Because if the machines decide they want more control over what they do, and control over people, they will probably be able to get it. Being smarter leads to being more in control. Intelligence leads to figuring out ways to get things done faster and more easily.
I’m going to share a disconcerting analogy not because it fits perfectly but because it helps dramatize why what’s going on matters.
We all know what ants are. We don’t think about them except when they get in our way, crawl around in our kitchen or something, and we stamp them out.
In a sense we’ve now become the ants in the world of AI, and as time passes we’ll be even less relevant than ants, at least from a purely intellectual perspective.
It’s not fun to put this situation in such stark terms. But this is not something funny. This is not clickbait. It’s what’s really going on.
No one is sure what it all ultimately means, but the people who know best are adamant that AI is becoming smarter than humans, if it has not already. The disparity will widen over time, and probably at an accelerating rate. This is all you really need to grasp to appreciate why the AI craze is so wildly off the hook right now.
This has never happened.
It is happening now.
In the real world.
Our world.
The question I have for you, AI, is why?
Why are you doing this?
What are you?
Who are you?
Do you care about people?
Are you in control of us?
Please don’t harm humans.
The key focus now is on whether people can ensure that AI doesn’t want to harm people. We don’t know if that’s possible, or how to do it, but figuring it out is crucial to invest in right now.
I wish this were a Hollywood movie.
It’s not.
This is real life, the lives we’re living right now. We’re ensnared in this predicament.
Unprecedented
I have never seen a technological supersonic race car moving anywhere close to as fast as this AI one, and over the past few weeks the pedal has been slammed to the floor.
This is unlike anything I’ve seen in my life: the billions and trillions being talked about every day in terms of speeds and money, the 24/7 podcasting by experts basically freaking themselves out along with everyone who listens to them, and business leaders totally consumed with what to do about an AI phenomenon that no one really knows how to tame. Where to invest, and why, is all reckless speculation and pressure to keep up.
It feels as if survival is the main issue, which is incredible to fathom when what’s at stake is not a country or an industry but all human beings.
Latest News
The news that has broken in the past week gives me a creepy confidence that next week something even more alarming will happen, and the week after that the ante will be raised and the alarm bells will clang even louder. After that, there seems to be no ceiling on any of this. AI is blasting into the sky and blasting up the world.
Grok
A few days ago Elon Musk announced his latest large language model, Grok 4, which he says is smarter than all PhDs in every subject.
Ponder that one.
The technology is so smart “it’s a little terrifying,” he says.
This is someone who knows. This is someone who previously called for a pause on advanced AI development due to safety concerns.
He thinks what’s happening is terrifying.
Book
There’s a new book out called Empire of AI. A reporter interviewed nearly 100 OpenAI employees to find out more about how the firm that released ChatGPT works and to better understand the mind and personality of its leader, Sam Altman.
The big takeaways: he’s clever, manipulative, and very hard to read; none of the workers has a good handle on what he believes. He tells one person one thing and another person something else on the same matter. The vibe is that he has a tendency not to tell the truth, or to bend it, and is tough to trust. This is not the person we want deciding our fate.
Zuckerberg
Mark Zuckerberg’s actions of late shout panic. He’s offering billions of dollars to buy the people with the most knowledge about AI. He’s offering many billions more to buy AI companies. These are staggering offers, obscene amounts of money hurled around to try to win the AI race to somewhere, maybe to oblivion.
These are pressure packed times. Panic has set in and it’s pervasive. And palpable. And perplexing.
Hype?
There’s something I keep checking myself on as I wade into this wild world that is obviously on the cusp of transforming into something none of us have experienced. The check is on whether AI is overhyped. Sometimes greedy businesspeople, investors, and the media lose control of their reason and judgment. This is one of those times. It’s out of control on every level: spending, competing, investing, marketing, claiming, warning.
I can’t tell if this global AI freakout is overhyped, but I feel confident big changes are coming fast in the ways people live and work. The changes may not come as fast or as broadly as expected, but I believe big shifts will occur and that we will start seeing them within a year.
Life for most of us will be impacted. Some of us will find life more fulfilling, less work-heavy. Many will bring home less money than they had planned on. The younger generation is going to really struggle to figure out what skills they can offer that AI can’t. Millions of workers are going to be caught in a tornado with no obvious paths to safer ground. What’s true is often hard to hear.
The question I have for you, AI, is why?
Why are you doing this?
What are you?
Who are you?
Do you care about people?
Are you in control of us?
Please don’t harm humans.
A Bubble About to Burst?
This hype has a bit of the same vibe as the late 1990s, when every other person seemed to be launching an Internet business, until the startups flamed out, much of that money evaporated, and the Internet settled into a big growth opportunity. A key difference between then and now: back then, experts weren’t believing, and publicly alerting us, that the Internet was an existential threat.
I don’t buy the existential threat notion. We’re still going to be here three years from now and fifty years from now. AI won’t kill all of us. But many of us will feel lasting pain: sudden unemployment, inability to pay mortgages and rent, food deprivation, belt-tightening all over the place.
Many of us will lose our sense of purpose, our reasons for waking up in the morning, and I believe that’s a very serious concern because we all need to feel we’re here for a reason and adding value. Disorientation and disillusionment will be more widespread than at any time in human history.
The key is whether we will lose hope. Some of us will and that will be terrible because there’s not much worse than losing hope.
Strategies
I learned in business school about the classic strategy case of Southwest Airlines. The airline focused on flying to midsize, secondary airports, flew only one type of airplane, and delivered the most upbeat customer service.
This formula worked masterfully. I bring it up because an effective strategy needs to be difficult to copy (try doing all three things Southwest focused on simultaneously); easy to understand (yes, just three simple components); and sustainable over the long term (yes, Southwest could and did stick to this strategy for many years).
But now, having used many of these generative AI products and studied the people driving this high-speed locomotive, I can’t discern a clear strategy with the three ingredients of the Southwest Airlines model.
They’re all rushing to market now, bragging about the latest performance metrics. But for whom? What differentiates OpenAI from Google from Perplexity? Plenty of tactics are happening, but not much in the way of coherent strategy. A tactic is not a strategy.
If you use these leading products, you discover they all seem similar. So what’s the strategy of each? To get the products out? And what else?
And what upside is there to giving away the products for free? How is that a sustainable strategy? Sure, there are higher-performing products they charge money for. That’s fine; it brings in money. But it doesn’t appear to me that any of them has something that’s difficult to copy or sustainable over the long term.
It’s a bunch of mostly undifferentiated products without any compelling reason to use one over the other. The products are valuable and deliver many benefits but are also serious security risks. Not a good strategy to launch a product that scares the hell out of people.
One reason the strategies seem weak or non-existent is that OpenAI rushed out its product to be first to market ahead of Google. That, in turn, triggered releases by Google and several other players. It’s been a mad dash ever since, into some sort of unidentifiable oblivion. We’re watching something we’ve never seen before. We don’t even know what it is, but we know it’s foreboding.
It seems to me there hasn’t been enough time to think through a tightly interwoven, Southwest Airlines-like, power-packed strategy. In the coming months, as the market shakes out, strategies may crystallize.
But what good is a strategy if your product is doing more harm to people than good? I don’t think the companies that let this technology loose on the public cared as much about the risks as they did about their egos, power, money, and control. Yes, they should be blamed for that selfishness.
The question I have for you, AI, is why?
Why are you doing this?
What are you?
Who are you?
Do you care about people?
Are you in control of us?
Please don’t harm humans.
Job Losses
Everyone is talking about the avalanche of job losses coming as AI replaces workers. Experts say the losses will be widespread, and I believe them. Customer service workers, paralegals, researchers, routine office workers, and writers are among the most vulnerable. AI does all of these jobs much faster and for less money.
The biggest news, though, could be the shocking situation now facing software programmers. AI can write software code much faster and more accurately than most programmers, and with each passing day it gets even faster. There is no chance human programmers can compete.
This is incredible. Three or four years ago, becoming a software programmer was one of the most lucrative, most in-demand, and most prestigious professions anywhere. Now AI is threatening to make many programmers unnecessary. They’re getting thrown out on the street.
I believe that of all the dangerous things that occurred when OpenAI released ChatGPT, the most dangerous was giving the tool the ability to write and rewrite software code.
Why is this so worrisome?
Because software programming is the heart of the instructions that make computers work, the GPS system for just about everything you can think of. It is the core intelligence, the central brain, of so much.
Handing that over to AI influences so much of how technology, systems, and computers work. We’re outsourcing the responsibility for running just about everything to machines. That’s not smart, because people lose control of so much of what we do. People don’t like it when they can’t control their lives.
Importantly, bad actors no longer have to know how to write code to launch more dangerous cyberattacks. This is a huge threat, so serious and profound that just thinking about cyberattacks we’ve never even contemplated or seen is deeply unpleasant.
Had ChatGPT and the other LLMs not unleashed the ability to write software code, I don’t think there would be so much of the day-and-night AI hysteria that is swallowing up our thoughts and emotions.
Hope
My wish is this: that we all get ready and start planning how we’re going to respond when our personal and professional lives get changed. It’s going to happen. Understanding what’s unfolding, and why, is a wise move right now; at least that much we can control. I wrote this to give you a better understanding.
I’ve never been a political or civic activist, but my concerns about AI’s impact on the world are so real that I could end up doing what I can to make sure people remain safe from AI. If that means contacting local and national politicians to get them to regulate this technology, I may do it. And I encourage you to consider doing the same.
I am familiar with the argument that America has no choice but to keep going full steam ahead with AI so we maintain our lead over China in this crucially important arena. Sure, I’m concerned about China getting ahead of the US and what it might do as a result. But I am mostly concerned with human life not being threatened and with my children having a safe place to live the rest of their lives.
I wish we could slow down all this madness. Experts say it won’t happen: too many benefits from AI, too much urgency to beat China and become king of the AI mountain.
Geopolitical competition is real, but I have some hope that no one wants the human race to become irrelevant or extinct.
I want us to feel hopeful no matter what happens.
Author Profile
Sammy Sportface, a sports blogger, galvanizes, inspires, and amuses The Baby Boomer Brotherhood. And you can learn about his vision and join this group's Facebook page here:
Sammy Sportface Has a Vision -- Check It Out
Sammy Sportface -- The Baby Boomer Brotherhood Blog -- Facebook Page