
Is AI an existential threat to humanity?

Published by Editorial Staff


Hmmm!
It’s more complicated than you may think.


For most people, the answer is a straightforward ‘yes’ or ‘no’. But questions as complicated and controversial as this one are rarely straightforward, and neither are their answers.


There are two things you need to know and understand before deciding whether Artificial Intelligence is an existential threat to humanity or not a threat at all.


These two things are ‘Artificial Intelligence’ and ‘existential threat’.
Before we draw any conclusions, let’s walk through both terms.

Existential Threat


An existential threat is any threat that has the potential to eliminate all of humanity or, at the very least, kill a large part of the global population, leaving the survivors without sufficient means to rebuild society to current standards of living.


Such a threat can be either anthropogenic (caused by humans) or non-anthropogenic (arising from natural or external forces). Many things have been identified as existential threats to humanity, including:

  • A major asteroid hitting Earth
  • A supervolcanic eruption
  • Extreme climate change
  • Nuclear war resulting in nuclear winter
  • A natural or genetically engineered pandemic


By definition, an existential threat is anything with the capacity to wipe out humanity and current civilization, no matter how small the odds of it happening are.


According to an article in the Washington Post, the odds of a global pandemic wiping out humanity are around 0.0001%. This illustrates just how low the odds of these supposed existential threats occurring tend to be.

Artificial Intelligence


For an in-depth introduction to Artificial Intelligence, read this article.


People often forget that Artificial Intelligence is artificial. The artificial is usually a simulation of the natural: the natural is the original, and the artificial is an imitation. That said, the artificial can outperform the natural in specific qualities.


Artificial Intelligence is an attempt to simulate human intelligence, and that’s what it has been from the very beginning. It’s much like augmented reality, a field where we attempt to create artificial experiences that look as real as reality itself. But in spite of this, an imitation is an imitation and a simulation a simulation.

Artificial Intelligence comes in three types, namely:

1. Narrow Artificial Intelligence
Narrow AI (ANI) is a type of artificial intelligence focused on one narrow task. There are many examples of narrow AI, including:

  • Self-driving cars
  • Facial recognition tools
  • Google’s page-ranking technology
  • Recommendation systems
  • Spam filters

Narrow AI is the only form of Artificial Intelligence that humanity has achieved so far. It’s simply the AI we know.
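To make “narrow” concrete, here is a minimal sketch of one of the examples above, a spam filter. The library choice (scikit-learn) and the tiny four-message dataset are mine, purely for illustration; a real filter would be trained on thousands of labeled messages.

```python
# A toy narrow-AI example: a spam filter that does one task only.
# The tiny dataset below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a FREE prize, click now",       # spam
    "Limited offer, claim your reward",  # spam
    "Meeting moved to 3pm tomorrow",     # not spam
    "Can you review my draft today?",    # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features plus naive Bayes: a classic single-task pipeline.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Claim your FREE reward now"]))  # expected: ['spam']
```

However well this model flags spam, it cannot drive a car or rank web pages; that single-task boundary is exactly what makes it narrow.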

2. General Artificial Intelligence
General AI (AGI) is a hypothetical type of Artificial Intelligence with the ability to apply intelligence to any problem, rather than just one specific problem, and is sometimes considered to require consciousness, sentience, and a mind.


Some people refer to AGI as human-level intelligence. Since we have yet to develop an AGI, there are no real examples of it outside science fiction, such as R2-D2 in “Star Wars” or Jarvis in “Iron Man”.

3. Super Artificial Intelligence
A superintelligence is a hypothetical type of Artificial Intelligence whose ability to apply intelligence to any problem greatly surpasses that of humans; it is hypothesized to far exceed the brightest and most gifted human minds.
Because of this potential, super AI is the type of Artificial Intelligence most often considered to pose an existential threat to humanity. Many people, including prominent figures in the AI industry, have expressed fear of Artificial Intelligence taking over the world.

What AI Thought Leaders are Saying

1. Fei-Fei Li


Fei-Fei Li is a Professor of Computer Science at Stanford University, Chief Scientist of AI/ML at Google Cloud, and Co-Director of Stanford University’s Human-Centered AI Institute and the Stanford Vision and Learning Lab. Her specialty is computer vision and cognitive neuroscience.


Here is what she has to say:


“Every technology is a double-edged sword. So when humans discovered fire, it changed the lifestyle of the prehistoric humans. But it also can be dangerous. And every step of the way in our civilization, we’ve seen technology playing both very positive roles, as well as creating or introducing perils. And A.I. has that.”
Li told CNBC.


When asked whether machines could one day run amok on our planet, she replied:


“I still believe the world is created by us. And whatever future world we envision or we want to live in is due to the work we do today. So, if we focus on human-centered A.I., human-centered technology, I hope that the future we create is human-centered and benevolent.”

2. Andrew Ng


Andrew Ng’s contributions to artificial intelligence, machine learning, deep learning, and robotics are hard to overstate. A global leader in AI research, he works not only to propel AI forward but also to democratize it, offering several accessible courses on Coursera.


He gave his honest opinion on Quora in answer to the question “Is AI an existential threat to humanity?”, and his first paragraph summed it all up.


“Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars.”
Andrew wrote.

3. Elon Musk


Elon Musk is a technology entrepreneur, investor, and engineer. He is the founder, CEO, and lead designer of SpaceX; co-founder, CEO, and product architect of Tesla, Inc.; and founder of The Boring Company.


He is one of the most outspoken AI thought leaders. All of his companies deploy Artificial Intelligence technologies in one way or another. Yet Musk is one of the few AI thought leaders who raise the alarm about the existential threat posed by Artificial Intelligence.


“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me,”
said Musk. “It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”


“So the rate of improvement is really dramatic. We have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face and the most pressing one.”


To manage this risk, Musk recommended that the development of artificial intelligence be regulated.


“I am not normally an advocate of regulation and oversight — I think one should generally err on the side of minimizing those things — but this is a case where you have a very serious danger to the public,”
Musk said.


“It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want. That would be insane,”
he said at SXSW.

My Conclusion


A major asteroid hitting the Earth, a nuclear war, and a global pandemic are all existential threats to humanity, but that doesn’t mean any of them is inevitable.

Likewise, Artificial Intelligence is an existential threat to humanity, but that doesn’t mean Artificial Intelligence destroying humanity is inevitable.

There is one reason why people like Elon Musk stress that Artificial Intelligence is dangerous and yet keep deploying it and funding its development: they believe we can manage the risk posed by Artificial Intelligence, especially through regulation.

Come to think of it, nuclear weapons were and still are a threat to humanity, but we are currently managing that threat through regulation. And we are likely still generations away from developing a super Artificial Intelligence.

Therefore, if you are worried about AI destroying humanity, to borrow the words of Andrew Ng: let’s inhabit Mars first before worrying about overpopulation on Mars.



