
Is Future Artificial Intelligence Going to be Good or Bad for You?


Something’s going on in the technology world that feels a lot like the late 1990s, when everything converged on one massive paradigm shift: the Internet. The new thunderbolt is the flood of news, technological progress, and concern about the growing pervasiveness of artificial intelligence in our daily lives, and about how it’s affecting us now and will over the next several years.

 

My biggest concern with AI is not how it will impact my working life because I am relatively close to retirement. But I do think often about how AI will affect the lives of my children, society, and humanity overall over the next couple of decades.

 

What skills will people now in their 20s and 30s need to develop that AI will not be able to do much faster, easier, and more accurately? It’s not clear. This will be a major question for this generation of workers to answer and act on if they are to keep adding value beyond all the rapid pattern recognition and other information processing AI will be able to do, and in some cases can already do: generative AI, for instance, can send you a well-written essay on the topic you request within seconds.

 

This tidal wave of technological change and workforce upheaval is crashing down on the world now and this will continue.

 

Experts are warning this wave will cause widespread job losses across many industries, and it’s not clear how, or even if, we’re ready to cope, or whether we have any kind of action plan to make sure people’s careers and financial security aren’t completely upended.

 

To learn more about this era of AI we’re now living through, I read a book titled The Age of AI: And Our Human Future. Three heavyweights collaborated to write this book about the real concerns we should have about a world powered more and more by AI: Henry Kissinger, former Secretary of State of the United States and Assistant to the President for National Security Affairs; Eric Schmidt, technologist, entrepreneur, and former Google CEO; and Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing.

 

Three passages from the book resonated with me as important for all of us to consider. The first read as follows:

 

Just as parents a generation ago limited television time and parents today limit screen time, parents in the future may limit AI time. But those who want to push their children to succeed, or who lack the inclination or ability to replace AI with a human parent or tutor – or who simply want to satisfy their children’s desire to have AI friends – may sanction AI companionship for their children. So children – learning, evolving, impressionable – may form their impressions of the world in dialogue with AIs.

 

This seems inevitable to me. Many parents will want their children to stay on the leading edge of technological progress to prepare them for high school, college, and the workforce. There won’t be much sense in trying to keep your kids from engaging with AI more of the time, because their friends will be doing it, just as with smartphones. AI will be even more ubiquitous. Children who don’t engage with it will fall out of the conversation and out of their social groups.

 

But I don’t think this is all bad. I have faith that, in most cases, parents will be smart enough to monitor how their kids engage with AI and will, for the most part, keep them safe from the most insidious and dangerous forms of AI that may emerge.

 

This doesn’t mean kids won’t be influenced by their increasing interactions with AI. They may become psychologically addicted, as many kids (and adults) are to their smartphones, but there will be limits to how far they will be allowed to lose themselves in the world of AI, because their parents will step in to prevent overindulgence. I maintain faith that parents love their kids enough to head off some sort of AI-related catastrophe.

 

The book also points out how AI will increasingly reinforce bias.

 

AI will scan deep patterns and disclose new objective facts – medical diagnoses, early signs of industrial or environmental disasters, and looming security threats. Yet in the worlds of media, politics, discourse, and entertainment, AI will reshape information to confirm our preferences – potentially confirming and deepening biases and, in so doing, narrowing access to and agreement upon objective truth. In the age of AI, then, human reason will find itself both augmented and diminished.

 

We already struggle to trust the information we see on our Twitter feeds, smartphones, and TV sets. What is true is becoming more elusive. What someone says has to be filtered through questions like “Who pays this person to say this?” “What’s this person’s political agenda?” “Are they saying this controversial thing just to get more Followers and Likes?” We see this on cable news channels every night.

 

AI is more likely to keep feeding us information that its pattern recognition reveals we agree with. This isn’t good. We need to be exposed to different points of view, not just what we believe, to reason with more discernment and objectivity.

 

It will be important for AI developers to constantly root out bias in their algorithms so that citizens receive more truthful information on which to evaluate issues, and a shared set of facts for making sounder, less emotional, and less narrow-minded decisions.

 

The book also tackles one of the most cosmic issues we need to consider about AI: how it will be used by nations and individuals to either instigate or thwart major military attacks, whether on the ground or through cyberspace.

 

The unresolved challenge of the nuclear age was that humanity developed a technology for which strategists could find no viable operational doctrine. The dilemma of the AI age will be different: its defining technology will be widely acquired, mastered, and employed. The achievement of mutual strategic restraint – or even achieving a common definition of restraint – will be more difficult than ever before, both conceptually and practically…Unlike nuclear weapons, AIs are hard to track: once trained, they may be copied easily and run on relatively small machines. And detecting their presence or verifying their absence is difficult or impossible with present technology.

 

This is unsettling because it’s true. We don’t know which people or countries are right now developing AI to plan military attacks on other countries or regions. These attacks could live in malicious software code hidden within a corporate server, a military router, or a person’s smartphone, and that code would be much harder to find. It’s often invisible or buried amid mountains of other software.

 

This is worrisome.

 

So are many other truths about AI. But we must accept that AI is upon us. It’s getting smarter. We don’t know how smart. We don’t know how it thinks and figures out some problems. It seems to be thinking on its own, as if it has a mind of its own.

 

I know this may sound ominous but I mean for it to be more educational and cautionary. Better to share these insights now than stay quiet and risk AI taking over our lives more than we have ever imagined.

 

The most uplifting development I’ve heard about in this AI wave is that many of the people leading the rollout of generative AI appear genuinely concerned about the risks this technology poses to our political elections, and about our overall ability to control it rather than letting it control us.

 

I’m not naïve enough to think they’re purely interested in the greater good and protecting all of us. They want to make money from this technology. But they’re saying they’re concerned and showing, at least for now, a measured approach to rolling out ever-smarter generative AI technologies.

 

Elon Musk strikes me as the one thinking the most soberly and correctly about this AI onslaught. He doesn’t think we should create something that’s so smart that we lose control of how it functions and how it interacts with and treats people.

 

We need to be careful with AI. Slowing the pace is wise, but the reality is that, for business reasons and for geopolitical competition, AI cannot be stopped.

 

I think we need to be careful here. And have faith in people to make the right decisions for the security of all of us now and in generations to come.

Sammy Sportface

About Post Author

Sammy Sportface

Sammy Sportface, a sports blogger, galvanizes, inspires, and amuses The Baby Boomer Brotherhood. And you can learn about his vision and join this group's Facebook page here: Sammy Sportface Has a Vision -- Check It Out Sammy Sportface -- The Baby Boomer Brotherhood Blog -- Facebook Page


