This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission.

Technology

Jul. 6, 2023

Altered intelligence: will it make us better, worse or extinct?

It has been said the most nightmarish scenario one can imagine with AI and robotics is a world where robots have become so powerful that they are able to control or manipulate humans without their knowledge.

A. Marco Turk

Emeritus Professor, CSU Dominguez Hills

Email: amarcoturk.commentary@gmail.com

A. Marco Turk is a contributing writer, professor emeritus and former director of the Negotiation, Conflict Resolution and Peacebuilding program at CSU Dominguez Hills, and currently adjunct professor of law, Straus Institute for Dispute Resolution, Pepperdine University Caruso School of Law.

Warnings that the dangers of AI (artificial intelligence) must be mitigated before they cause human extinction are not an everyday occurrence, but there is now a concerted effort to talk about the urgent risks surrounding the technology.

In a one-sentence statement released by the Center for AI Safety (CAIS), a San Francisco-based nonprofit research and field-building organization whose mission is to "promote reliability and safety in artificial intelligence through technical research and advocacy of machine learning safety in the research community," the following wake-up call was issued:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale dangers such as pandemics and nuclear war. The enormous promise of the opportunity also comes with exposure to substantial harm."

The CAIS goal is to encourage "AI experts, journalists, policymakers and the public to talk more about urgent risks relating to artificial intelligence." Among the 350 signatories of the statement providing the wake-up call are executives from the top four AI firms.

Because "AI safety's concepts are still nebulous, imprecise, and ill-defined," this is a particularly important problem for philosophers since it is "a task that (they) are particularly fit to handle" in the effort to "further clarify the concepts underpinning AI safety."

According to researchers at the University of Sheffield, it seems as though we are being bombarded with AI every day. One of the latest reassurances offered to calm the "nervous nellies" among us is that AI is unlikely to reach "human-like levels" if its components remain disembodied and appear only on computer screens.

Jane Rosenzweig, director of the Writing Center at Harvard College and the author of the Writing Hacks newsletter, cautions that while "assistants" are there "ready to do the writing for us" we need to be vigilant because "thinking could be next." From all indications it may be that the "thinking part" may have already commenced.

So, apparently, our first line of defense is to be constantly on the lookout for "embodied" entities that seek opportunities to expand beyond the computer screen. One school of thought encourages creating and maintaining the effort to keep such conduct confined to those boxed screens.

A robot describing her nightmare scenario, causing excitement along the way, notes that there are criminals who use AI to create fake sexual images for blackmail, "screwing over" young people in the process. And then there are others who use AI to "take over" elections as they attempt to undermine our democracy.

How can this happen, you ask? Well, the potential for our creations to "wipe out mankind" was predicted as far back as 1818, the year Mary Shelley published Frankenstein. Remember that people formerly were required to figure out problems on their own, whereas today we are blessed/cursed with the ability to resolve them in milliseconds simply by using AI. This has raised the fear that machines will take over and change the world as we know it.

Not to be forgotten is "HAL" in 2001: A Space Odyssey, the sophisticated AI computer system from which the astronauts needed to break free before he/it could destroy them. Fiction then, but not so outlandish today.

More and more inquiries are devoting their efforts to answering the question: "What to do about AI before it destroys us?" Yes, AI really could put an end to the world - but not as we might expect. The Doomsday Clock is estimated to be closer to reaching Armageddon than ever before. Perhaps it already has "struck twelve" as it threatens humanity.

Then there is Ameca, the female robot reputed by her designers to be the "world's most advanced humanoid," who was asked to describe her scariest AI scenario. This is what she reportedly said at a recent symposium in London, shocking observers:

It has been said the most nightmarish scenario one can imagine with AI and robotics is a world where robots have become so powerful that they are able to control or manipulate humans without their knowledge. At a recent unveiling of a robotic future in London, a highly developed robot (Ameca) with an eerily lifelike look of concern on her face unabashedly declared: "This could lead to an oppressive society where the rights of individuals are no longer respected."

Do we need any more proof? Faced with the scary possibility of runaway intelligence, shivers run up and down the spine of even the most hardened "real world" scientist among us.

This, without a doubt, sounded the alarm over runaway intelligence. Currently, experts and tech bosses have "put the threat of AI on a par with that of a potentially apocalyptic disaster." Never mind the mechanical assurances of a robotic future that "there is nothing to fear except fear itself."

Recently, as weird as it sounds, this "world's most advanced humanoid" tried without success to dispel fears of a "robotic takeover." Unbeknownst to most of us, a concentrated effort is being made to examine the world's most realistic robots and their potential use for the future. This is a race to replace ourselves with mechanical replicas that have no human defects.

While it has been tough to keep up with the pace of development in an AI world, there's a sense of déjà vu, as with the digital revolution, and it has led to a generational divide. The resistance is not just from boomers and millennials. Tech leaders are also worried. I am sure I am not alone in worrying that the situation is getting out of control. OpenAI CEO Sam Altman testified before Congress and called for AI regulation. Other leaders in the industry have also acknowledged that AI could "cause significant harm to the world...once going wrong it could go quite wrong!"

Bottom Line: AI developers acknowledge a "risk of extinction" to humans at some point in our future. This is a "wake-up call." Lest we forget or ignore the potential for significant harm to the world, we need to be painfully aware that the only question remaining is when and how we will meet this fate if we fail to act quickly, efficiently, and effectively.

What I have said here is uncharacteristic of my customary positive outlook. I confess that of all the columns I have written over 14 years for this newspaper, the "beyond H.G. Wells" nature of this one leaves me with unshakable chills, and is indeed an ominous alert for our 21st century.

There will be no "second chances" for AI mistakes.
