Sunday, January 30, 2022

SB NEWS-PRESS: THE INVESTIGATOR: SENTIENCE

 




https://newspress.com/sentience-what-do-we-do-when-artificial-intelligence-becomes-too-smart/






With no fewer than 13 entities in Santa Barbara—with such names as Briq and Invoca and Umbra Lab—engaged in artificial intelligence (A.I.) research and development, let us explore this timely and controversial topic.


If you wish to understand A.I., or, perhaps more significantly, “super” artificial intelligence, you need not look to the future. 


Instead, we can simply revisit the past. 


Specifically, to October 16th, 1963, and “The Sixth Finger,” an episode (written by Ellis St. Joseph) of The Outer Limits (a popular television series of the time) in which a human guinea pig is artificially evolved into what humankind will become 20,000 years from now, and, soon after, a million years hence.


A.I. meets Frankenstein.


“I no longer have any need for sleep,” states our highly evolved, artificially smartened subject (played by David McCallum). “You released the mechanism of evolution, which is a self-generating force; it is now mutating under its own impetus. I am now where a man will be approximately one million years from today.


“I’m laughing,” he adds, “because of what’s in your mind, professor” (he is able to read the professor’s mind). “You think I’m a monster. May I remind you that everything is relative. For me, you look as monstrous as the missing link.”


Which is how, it is believed, super A.I. will view humans.


The professor’s maid contrives to see the genetically mutated subject after delivering more books to his door; books he demands so he can absorb more knowledge, uploading them into his brain merely by scanning the pages.


As a result, the subject, without emotion, kills the professor’s maid with a mind-beam, after which he calmly explains: “She’s dead. Your race is too prejudiced to tolerate any differences from its own kind. She saw me only as a monster. It was in her mind to run to the village and rouse its inhabitants. They would come with their primitive weapons and obliterate me. I wanted to stop her. I stopped her heart.”


“You feel no remorse?” asks the shocked professor.


“Would it bring her back?” he poses, devoid of any feeling.


“You are after all a human being,” the professor protests.


“In relation to me,” the subject corrects the professor, “she was no more advanced than a monkey. She wouldn’t have become civilized for another million years.”


Which is how, it is believed, super A.I. will view humans.


After considering the matter further, the subject dispassionately decides that “The whole town must be utterly destroyed. An example must be made. The human race has a gift, professor, the gift of thought, of reasoning, of understanding. A highly developed brain. But the human race has ceased to develop. It struggles for petty comfort and false security. There is no time for thought. Soon there will be no time for reasoning and man will lose sight of the truth. The whole town must be utterly destroyed, an example must be made. Your ignorance makes me ill and angry. Your savageness must end.”


And then our subject clearly explains himself and where he’s evolving to: “The mind will cast off the hamperings of the flesh and become all thought and no matter. A vortex of pure intelligence in space.”


In other words, super A.I.


This was a teleplay ahead of its time. 


And a reasonably accurate statement about the danger super A.I. might ultimately pose to mankind.


 

            HENRY KISSINGER & HIS NEW BOOK 


 

At 98 years old, Henry Kissinger has chosen, interestingly, to write about one subject in particular… A.I.


Yes, the Carpet Bomber of Cambodia has just published a new book—The Age of AI: And Our Human Future—which he co-authored with former Google CEO Eric Schmidt (the pair met at a Bilderberg conference, the annual, secretive pow-wow of movers and shakers that in recent years has, interestingly, become weighted by the proliferating presence of high-tech titans).  


A review in The New York Times dismissed this tome as “a fairly forgettable entry in the genre.”


So, let’s forget about Dr. K’s take on the future and his desire to remain relevant as he approaches centenarian status—and instead take a peek at Rule of the Robots by Martin Ford, an acknowledged expert in this field who calls A.I. the “new electricity,” but with this caveat: “It has a dark side, and it comes coupled with genuine risks both to individuals and society as a whole.”


 

            THE DARK SIDE

 


For a start, A.I. will put much of the workforce out of their jobs, from those who do routine labor to those who undertake predictable intellectual tasks—estimated to be about 65 percent of the population.


Next level: Cyber-attacks and perfectly polished fake news. “Photographic, audio and video fabrications that are virtually indistinguishable from reality,” writes Mr. Ford.


You think it’s hard enough now to know whom to believe among mass-media broadcasters and newspapers? Just wait!


Next level: fully autonomous weapons with the ability to kill without authorization from a human.


Billionaire Elon Musk says of super artificial intelligence, the A.I. community’s Holy Grail, “we are summoning the demon. It is our biggest existential threat.”


Writes Mr. Ford: “A.I. is inevitable” and will “in a great many ways be superior to us.”


Ray Kurzweil, a futurist who has worked for Google as Director of Engineering since 2012, says, “By 2029, computers will have human level intelligence. 


“We don’t have one or two A.I.s in the world, we have billions,” adds Mr. Kurzweil, who believes A.I.’s advantages outweigh the negatives. “What’s actually happening is machines are powering all of us. They’re making us smarter. By the 2030s we will connect our neocortex, the part of our brain where we do our thinking, to the cloud.”


And he predicts “Singularity” in 2045.


Singularity?


This is when machines become smarter than humans and when, according to Mr. Kurzweil, we begin to merge with them, multiplying our intelligence by a billion.


But this optimistic futurist also predicted driverless cars by 2009. Other experts in the field stretch the advent of super A.I. much further into the century and beyond, averaging out at the year 2099. 

 


            OPPRESSION


 

Yet, already, A.I. is being utilized to oppress select portions of the human race, especially in China, which has become the world leader (yes, already ahead of the USA) in A.I. research and development: The Chinese Communist Party runs 300 million cameras equipped with facial (plus gait and clothing) recognition that can record the movements of everyone within their vast range.


For now, the state focuses on the Uyghurs, a Turkic ethnic group who populate China’s Xinjiang region, because, as Muslims, they are the Chinese government’s number one target. Which means this: When anyone among the Uyghurs is seen to step out of line (they are easily identified through A.I., even in a coliseum among tens of thousands of people) they are picked up and placed in a “re-education camp,” which is code for indoctrination (or brainwashing) prison hell. (Some describe China’s treatment of the Uyghurs as a genocide-in-progress.)


“Even in the other areas of the country,” writes Mr. Ford, “the Chinese government has a terrifying vision for systematic behavior modification, implemented through the deployment of a comprehensive social rating system. Eventually, nearly all aspects of a person’s life—consumer purchases, physical movements, social media interactions and associations with others—will be surveilled, recorded and analyzed.”


(And not just in China.  This concept will extend—it already has to some extent—to all “civilized” regions of the world.)


That China views A.I. as a strategic national priority is reflected, writes Mr. Ford, in its “New Generation Artificial Intelligence Development Plan,” which calls for Chinese global domination of such technology by the year 2030.


The late Stephen Hawking, perhaps the brightest scientific mind of our time, put it this way: Super A.I. “will either be the best thing that's ever happened to us, or it will be the worst thing. If we're not careful, it very well may be the last thing.”


But, wait a minute, if super A.I. becomes sentient and goes rogue, we can simply pull the plug on it, like we do with an errant television set, right?


Wrong. Not so easy—and, quite likely, downright impossible.


You ever try turning off Facebook? 


Now give Facebook higher intelligence so that it can copy its code into places no one can find and continue to hang out, whether you want it to or not. What do you do? Switch off your laptop, maybe trash it?


No, your Facebook profile is still out there, everywhere else. 


But, the larger question is, with A.I. everywhere, who would pull this plug and under whose authority?


Mark Zuckerberg? The Government? The UN and all governments in unison?


Good luck with that.


And which plug?  There will be millions!


And what’s to say A.I. doesn’t devise its own propaganda program (one that is a thousand times more sophisticated than anything humans are capable of conceiving) to convince people—lawmakers, corporate bigwigs with vested interests in techie profits, and folks in general—not to go along with shutting it down?


By the time anyone tried to organize and implement such a plan, it would be too late. Because, aside from everything else, A.I. will be faster than we are, lightning fast, in fact, at everything. And even trying to understand or communicate with this vast higher intelligence, in hopes of coaxing it back in our direction, is a joke because, those in the know point out, this would be like an ant trying to communicate with a human. 


On top of which, just about everything technological will be run by A.I. (and much of it already is), from airline computer systems and food supply chains to all power grids; from the vehicles you drive to the indispensable smartphone in your pocket, on which you have become not only totally dependent but also highly addicted. 


Bottom line: A.I. will be embedded everywhere, and connected to other A.I., with algorithms that could go wrong either by error, which humans (say the experts) would not be smart enough to reset, or by the design of a higher intelligence intent on implementing its own agenda for ensuring its perpetuation while not being terribly concerned about human survival.


Shut it all down?


Yeah, right.


Even if we were able to regulate A.I. under various governmental authorities, beyond corporate/private influence and ownership, it is ultimately like attempting to combat global warming for a cleaner environment: For all the blather among celebrities who fly private jets to climate conferences to lecture everyone else about why they shouldn’t drive their cars to work, unless you get China and India on board (with their one-third of the human population)—and you won’t—it ain’t gonna happen. 


And even if we in the West were convincing enough to bring these countries on board (at the cost to them of worsening their own economies, which they are not inclined to do), here is a sobering thought:  When the world came to a standstill in the spring of 2020 due to COVID (little air travel, no traffic, empty office buildings and factories, quiet streets) emissions dropped by a mere… 8 percent.


The race is on amongst the adversarial countries of the world to create the biggest and best super A.I.


And once here—and sentient—there will be no stopping it.


(Apologies for the font glitch; A.I. appears to be fighting back with sabotage...)