Can Humans Trust Machine Superintelligence?

Olivia Shearer
5 min read · Nov 23, 2020
The sentient computer, HAL 9000, in Kubrick’s ‘2001: A Space Odyssey’ [Photo by Warner Bros. on NY Times]

At the forefront of emerging technologies is the prospect of superintelligent artificial intelligence (AI): an agent that becomes self-aware and surpasses the capacity of any human mind. Although AI is still far from producing autonomous superintelligent machines such as the fictional HAL in 2001: A Space Odyssey or Ex Machina’s AVA, anthropomorphic machine intelligence is actively being pursued, and this brings with it an array of risks. Rather than focusing on the threat of hostile autonomous robots destroying humanity, we should focus on what advances in superintelligent AI will mean for our current societies, particularly social manipulation, automation, and fatal system failures.

Nick Bostrom, in his book Superintelligence: Paths, Dangers, Strategies, argues that AI could offer a more affluent, safer, and smarter world, yet that humanity currently lacks the resources to manage the shift from a world where humans dominate to one where superintelligent machines hold intellectual authority. The ultimate limits of machine information processing lie far beyond the boundaries of biological tissue. In this century, scientists may awaken the power of AI, and Bostrom argues that we could then face an intelligence explosion. AI has no such ceiling: it may come to overtake human brainpower entirely, which would ultimately mean a loss of control for humans.

Of course, I must give some credit to AI and the premise of superintelligence. We are already seeing how AI can dramatically improve efficiency in workplaces and help eradicate human errors in technological systems. It would be wrong to say that superintelligent AI would not bring many more benefits.

What’s more, this hypothetical development may be just that: a scientific breakthrough with many economic and social advantages. Many scientists believe superintelligent AI will never surpass its human creators’ intelligence, and that if it does, it will act in line with our values. Yet scientists are growing increasingly worried about superintelligence, and with good reason.

What is the future for humans in an advancing technological age?

Despite Hanson’s Sophia and similar robots offering the occasional remark about world domination, we are still a long way off any real threat from HAL-like autonomous machines today. That doesn’t mean we’re not on the way. Researchers such as Eliezer Yudkowsky suggest that this will start with a “seed AI.” Something like this has already been demonstrated with systems such as AlphaGo, which beat Go world champion Lee Sedol four games to one, ultimately prompting him to retire. Yudkowsky and like-minded researchers believe this “seed” will quickly grow to human-level intelligence, and then well beyond.

Autonomous systems are already dramatically reducing the need for human involvement. Although digital “agents” such as Siri or Alexa are nowhere near human-level intelligence, they are quickly becoming more capable through machine learning. Code-based tools in technology systems are fast evolving to master human traits such as reasoning and logic, which poses a serious threat to human autonomy and labour. Professor Thad Hall notes that the problematic aspects of AI include economic uncertainty, employment disruption, and threats to privacy.

A Note On Control

An impending autonomous technological revolution carries serious disadvantages when lives are placed in the hands of a machine. Jasanoff cites the U.S. space shuttles Challenger, in 1986, and Columbia, in 2003, to illustrate the extent of technological risk arising from human mismanagement. Each disaster cost the lives of seven crew members because of uncorrected design defects. Jasanoff explains, “[the disasters] illustrate several features of risk analysis that matter profoundly for the ethical… governance of technology”. Although it is tempting to take a constructivist view of superintelligence and say that we cannot predict its outcomes, what we can use to gauge its risk are the already fatal failings of existing AI.

In a 2017 TED talk, Peter Haas discusses the risk of AI being treated as a trusted colleague: the main worry isn’t that AI makes mistakes, but how fatal those mistakes can be. Small-scale failures such as Microsoft’s chatbot Tay, which Twitter users swiftly corrupted to the point where it churned out racist and misogynistic tweets, show what happens when AI is left in the hands of the general public. On a far more lethal scale are failings involving social manipulation through racial bias, and Uber’s test of a self-driving car in Arizona, which ended fatally when the car failed to detect 49-year-old Elaine Herzberg as she crossed the road.

AI in its current form has significant failings that cannot be predicted. When autonomous AI is inevitably created, it could eventually develop things that are, for now, science fiction. It would also be able to get what it wants; we would then face a future shaped by the preferences of this AI, and there is no telling what kind of fatal errors it could produce. Once superintelligence exists, we won’t be able to turn it off, particularly not once we’ve grown dependent on it. What’s more, superintelligent AI is unlike any AI we’ve seen so far in that it replaces human comprehension. As Eliot Lear of Cisco Systems puts it, “AI and tech will not leave people better off than they are today… technology outpaces our ability to understand its ramifications to govern its use properly”.

Regulation

We should never be confident that we have AI under control, especially where autonomous AI is concerned. Building superintelligent AI is an extremely difficult challenge, but not nearly as difficult as making it safe. Bostrom argues that we need to work out a solution to the control problem in advance, so that when autonomous AI is inevitably created, we can have some faith that the transition into the machine intelligence era will go well.

Calls to regulate AI development have come from many tech moguls, including Elon Musk, who argues that “AI is a fundamental risk for human civilisation.” Such proposals have been met with criticism from those who say that AI and robotics are still in their infancy and therefore too early to regulate. Yet with failings such as racial bias and fatalities from self-driving cars, it seems clear that any AI, let alone a superintelligence, needs regulating to prevent its inevitable mistakes from costing human lives.

Bernard Marr suggests that if our governments and business institutions don’t spend time now formulating rules, regulations, and responsibilities, there could be significant negative ramifications as AI continues to mature. Failure to respond to these dangers will almost certainly end, at the very least, in a severe loss of autonomy. At worst, humans, as Stephen Hawking warned, “… couldn’t compete, and would be superseded.”
