
Introduction to Artificial Intelligence


Approximately 4-5 billion years ago the Sun started producing energy, and a few million years ago early humans made their first tools from wood, stone and bone. Then we discovered fire, we made our first clothes, and the Bronze Age came along.

We saw the Iron Age, invented the wheel, and discovered electricity. Before long, computers appeared on the scene and we landed on the moon. From there onwards, the sky was no longer the limit.


Landing on the moon opened us up to a world of endless possibilities and boy, we did not stop.

Since ancient times, we have dreamed of, talked about and tried to make things that are intelligent like we are. We called it Artificial Intelligence.
And we did it, yes we did it: Artificial Intelligence is somewhat a reality.

Yes, somewhat a reality. Because we are still a long way from developing true Artificial Intelligence.

Learning Artificial Intelligence has proven to be an uphill task, not to mention developing it.

This article focuses on introducing you to the Artificial Intelligence journey so far.

So shall we?

What is Artificial Intelligence (AI)?

Knowing what artificial intelligence is, is quite simple. Just know what intelligence is and understand what it means for something to be artificial.
Intelligence is the ability to be aware of or infer information, and to apply the information gained as knowledge in order to adapt to new situations.

It also includes having the capacity for logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking and problem solving.


And to be Artificial simply means to be man-made.
So that’s it.

Artificial Intelligence is the ability of something made by man to be intelligent.
Therefore, anything made by man that has the ability to be aware of or infer information, and to apply the information gained as knowledge in order to adapt to new situations, is demonstrating Artificial Intelligence.

Artificial Intelligence is divisive, having its advantages and disadvantages. Yes, it's promising, and I am personally excited about its potential.

History of AI

It all began in antiquity.
In those days, philosophers and their like pondered the idea that artificial beings could exist, or would one day exist. They thought about human reasoning as a system that could be automated and carried out by intelligent artificial beings.
Classical philosophers prepared the ground for the development of Artificial Intelligence.

Myths and Fictions

Myths about artificial intelligence have existed since antiquity. The idea of artificial intelligence was expressed in Greek, Jewish, and even Chinese myths.
In Greek Myths, Hephaestus, the god of technology, was said to have created Talos.

Talos was supposedly a giant bronze automaton, created to protect Crete from pirates and invaders.
There was also Galatea, Pygmalion's ivory sculpture brought to life, and Yueying Huang's wooden dogs, among others.

Myths played their part, but then science fiction stepped in during the 19th century.
In science fiction it began with Samuel Butler and his novel "Erewhon"; and remember the heartless Tin Man from The Wizard of Oz?
Beyond classical philosophers, myths and fiction played their part in popularising artificial intelligence, and hence in provoking the curiosity to make it a reality.

Can Machines Think?

Classical philosophers, myths and fiction paraded the probability and possibility of artificial intelligence for centuries. Then, in the mid-20th century, scientists took charge of the parade. Notably, it began with Alan Turing's paper.

Alan Turing, the famous World War II code breaker and founder of computer science, published a paper in 1950 titled "Computing Machinery and Intelligence". In this paper he theorised on the possibility of creating machines that can think.

Sculpture of Alan Turing in Manchester, UK

He confronted the question, "Can machines think?", and tried to simplify it by rephrasing it as "Can machines fool humans into thinking they are human?".
In the paper he proposed that if machines can do this (fool humans into thinking they are human), then in that way they can think. With that, he devised the Turing Test.

The Turing Test, which he termed "the imitation game", is a test (or a game) that Turing proposed to determine whether a machine can think.

The principle of the test is that if a machine can carry on a conversation (over a teleprinter) with a human being and is able to fool the human into believing it is human, then one can reasonably say that the machine was thinking.
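To make the setup concrete, here is a toy Python sketch of a single round of the imitation game (my illustration, not Turing's protocol verbatim; the machine_reply stand-in is hypothetical):

```python
import random

def machine_reply(prompt: str) -> str:
    # Hypothetical stand-in for the conversational program under test.
    return "An interesting question. What makes you ask?"

def human_reply(prompt: str) -> str:
    # In Turing's setup, this answer comes from a hidden human over a teleprinter.
    return input(f"(hidden human) {prompt} > ")

def imitation_game_round(prompt: str) -> bool:
    """One round: the judge questions players A and B, then guesses which is the machine."""
    players = [("machine", machine_reply), ("human", human_reply)]
    random.shuffle(players)  # the judge must not know which is which
    for label, (_, reply) in zip("AB", players):
        print(f"Player {label}: {reply(prompt)}")
    guess = input("Judge: which player is the machine, A or B? ").strip().upper()
    machine_label = "A" if players[0][0] == "machine" else "B"
    return guess != machine_label  # True: the machine fooled the judge

# If, over many rounds, the judge guesses no better than chance,
# the machine passes the test.
```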

The paper went on to become a landmark paper in the field of artificial intelligence and Turing went on to be one of the Founding Fathers of Artificial Intelligence.

Ferranti Mark 1

In 1951, the Ferranti Mark 1, also known as the Manchester Ferranti, became the first computer used for artificial intelligence programs. Checkers and chess programs were written for the Ferranti Mark 1.

The initial programs had their limitations; for instance, the chess program took an average of 15 to 20 minutes to make a move. This was because the program had to examine thousands of possible moves, at a very slow speed, until a solution was found.
The program also couldn't distinguish between checkmate and stalemate, among other limitations.

But eventually it achieved sufficient skill to challenge a respectable amateur.
Since the Ferranti Mark 1, game AI has been used to measure the progress of Artificial Intelligence throughout history.
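Those early programs worked by searching the tree of possible moves and scoring the resulting positions. A minimal minimax sketch in Python (my illustration, assuming caller-supplied moves, apply_move and evaluate functions; not the original Ferranti code) shows the idea:

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Return the best score reachable from `state`, searching `depth` plies ahead."""
    legal = moves(state)
    if depth == 0 or not legal:          # horizon reached or no moves left
        return evaluate(state)           # score the position heuristically
    if maximizing:                       # our turn: pick the strongest move
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in legal)
    return min(minimax(apply_move(state, m), depth - 1, True,   # opponent's turn
                       moves, apply_move, evaluate) for m in legal)
```

Examining thousands of positions this way on 1951 hardware is exactly why each move took minutes.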

The Dartmouth Conference: Official Birth of AI

The Dartmouth Conference was a brainstorming conference organised by John McCarthy in 1956. It was an approximately eight-week conceptualisation session attended by top researchers from various fields. The primary objective of the conference was to initiate the study of Artificial Intelligence.

The conference was not entirely a success, as the attendees failed to agree on standard methods for the newly formed field. Nonetheless, the foundation was laid and the field of Artificial Intelligence was established.

Groundbreaking Progress

After the conference, from 1957 to 1974, there was groundbreaking progress in the field of Artificial Intelligence. Astonishing programs were developed: computers were learning to speak English, solving algebra word problems and proving theorems in geometry.
During this period computers became faster, cheaper and simply more effective, and governments were heavily funding AI research.

In Japan, the WABOT project was initiated in 1967 and completed in 1972. The project created the world's first intelligent humanoid robot.

With the level and speed of progress, people became very optimistic and expectations for Artificial Intelligence became too high.
So great was the optimism that in 1970 Marvin Minsky boasted that "in from three to eight years, we will have a machine with the general intelligence of an average human being". And, well, we didn't.

Then came the 1st Winter.

Winter Came


AI researchers had underestimated the complexity and difficulty of developing artificial intelligence. Development in the field was not as fast as expected, and AI research suffered a major setback.

In the 1970s, AI research became the subject of numerous critiques, partly because AI researchers had raised expectations so high and failed to live up to them. People ran out of patience, and so funding dwindled.

With a shortage of funding, research slowed to a crawl. The period that followed is known as the AI Winter.
The winter was marked by a shortage of funding and slow, insignificant progress in AI research and development.

Breaking New Ground and the 2nd Winter

In the 1980s, money came back into AI research. The new cash flow was provoked by the advent of "expert systems" and subsequent Japanese funding.

Expert systems, programs that solve problems in a specific field of knowledge using logical rules, became widely used in industries and corporations around the world.

In 1981, the Japanese government began aggressively funding research in AI to ensure the further development of expert systems. Other countries soon joined the party.

During this period, computers were getting better and more capable, and there was a commercial wave of AI.
AI research had major successes, and a bubble formed.

Expectations were again too high, and expert systems, although they proved useful, could not live up to them.
Eventually, the bubble burst. What followed was another AI winter.

Finally: Reasonable Artificial Intelligence Arrived

After years of research and development in the field of artificial intelligence, conditions were finally right. One of the major things that had initially limited developments in AI (computers' low storage capacity and slow processing speeds) was taken care of.

Computers became very fast and gained high storage capacity, even more than required.

In the 1990s, with the milestone of the fifth generation of computers, which proved to be a major success, computers finally had the capabilities for artificial intelligence to flourish. And flourish it did.

IBM's Deep Blue, a chess-playing computer, defeated Grandmaster Garry Kasparov (the then reigning world chess champion) at chess in 1997, and history was made.
From there onwards, there was no going back: probably no more winters.

Checkmated


We've had artificial intelligence technologies write poetry, recognise objects in images, translate between languages, drive cars, fly drones, discover new uses for existing drugs, trade stocks, develop scientific theories and beat humans on IQ tests, notably beating the world champions of Go in 2016 and 2017.

It no longer feels like a myth when I tell you that I wrote this entire article primarily using a speech-to-text program. And I did.
The sky is indeed no longer the limit. AI went from myths and fiction, through years of scientific failure, to today's reasonable reality.

Classification of Artificial Intelligence

There are two primary ways artificial intelligence is classified. It is classified either based on functionality or based on technology.

In case you're having difficulty comprehending the difference between technology and functionality, let me help you out.
Think about a television from the 1980s and compare it with the television we have today. Both have essentially the same functionality, but the technology behind them differs greatly.

Got the picture? Let’s move on.

Classification Based on Technology

Artificial intelligence systems are grouped into four categories based on their technology. The categories are:

  • Reactive Machine Artificial Intelligence
  • Limited Memory Artificial Intelligence
  • Theory of Mind Artificial Intelligence
  • Self-Aware Artificial Intelligence

Reactive Machines AI

Reactive machine artificial intelligence systems are the most basic types. They neither store memory (gain experience) nor use past experience for present or future decision-making.

They are strictly just responsive and will react to similar situations exactly the same way all the time.
Reactive Machines can easily be fooled and they simply don’t learn. I call them the “Foolish, yet Powerful AI”.

It's quite difficult to come to terms with the fact that it was a reactive machine AI (IBM's Deep Blue) that defeated the reigning world chess champion, Grandmaster Garry Kasparov, in 1997.

Reactive machines have no understanding of the world or how it works, and they cannot function beyond the specific tasks for which they were programmed. Yet they are intelligent enough to carry out complex tasks.
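As a toy illustration (my sketch, not how Deep Blue was actually built), a reactive agent can be thought of as a pure function of the current situation, with no stored state at all:

```python
# A reactive "agent" is a pure function of the current input.
# Given the same situation, it reacts the same way, every single time.
def reactive_policy(situation: str) -> str:
    rules = {
        "opponent_attacks_queen": "defend_queen",
        "opponent_exposes_king": "attack_king",
    }
    return rules.get(situation, "develop_piece")  # no memory, no learning

assert reactive_policy("opponent_attacks_queen") == "defend_queen"
# Call it again tomorrow with the same input: the output never changes.
```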

Examples of Reactive Machine AI

  • IBM’s Deep Blue, the chess playing program that beat Grandmaster Kasparov.
  • Google’s AlphaGo, the Go playing program that beat Go world champions, Lee Sedol and Ke Jie.

Limited Memory AI

Limited memory AI is a step above reactive machines. It can store memory, in essence gaining experience, and it uses past experience for present and future decision-making.
It functions by accumulating observational data on top of a set of pre-programmed data. It learns from the past and has a basic understanding of the world and how it works.
It has been applied notably in different areas, such as self-driving cars and chatbots.

In spite of how great a step forward it is, it still has limited memory. The data it stores as accumulated experience is not permanent.
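A minimal sketch (my illustration, loosely in the spirit of a self-driving car smoothing recent sensor readings; the numbers are made up) of an agent with a short, expiring memory:

```python
from collections import deque

class LimitedMemoryAgent:
    """Keeps a rolling window of recent observations; older ones simply fall away."""
    def __init__(self, memory_size: int = 5):
        self.memory = deque(maxlen=memory_size)  # the "limited" memory

    def observe(self, distance_to_car_ahead: float) -> None:
        self.memory.append(distance_to_car_ahead)

    def decide(self) -> str:
        average = sum(self.memory) / len(self.memory)  # learn from the recent past
        return "brake" if average < 10.0 else "maintain_speed"

agent = LimitedMemoryAgent()
for reading in [12.0, 10.0, 9.0, 8.0, 7.0]:  # metres to the car ahead
    agent.observe(reading)
print(agent.decide())  # -> "brake"
```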

Examples of Limited Memory AI

  • Self-Driving Cars
  • AI-Controlled Traffic Lights
  • Chatbots
  • Personal Digital Assistants

Theory of Mind AI

Theory of mind AI is more advanced than limited memory AI. It is the level at which artificially intelligent systems can interact with humans socially. Basically, it will have the capacity to understand complex things like human emotions.

With theory of mind AI, AI systems will understand that we have thoughts and feelings, and expectations for how we should be treated.
As of June 2019, at the time of writing this article, there is no complete theory of mind AI system. But we have some pieces of what will eventually form such a system.

We have AI systems like Sophia that see using image recognition systems and respond to interactions with corresponding facial expressions.
At the very least, she exhibits some features of theory of mind AI systems. In 2017, she became the first robot to receive citizenship of a country (Saudi Arabia).
Sophia is current proof that we are not too far from achieving a theory of mind AI system.

Examples of Theory of Mind AI

  • Although not complete, Sophia from Hanson Robotics and Kismet, developed by Professor Cynthia Breazeal, are real examples.
  • Sonny from the 2004 film “I, Robot”
  • C-3PO and R2-D2 from the Star Wars Universe

Self-Aware AI

We’ve had unbelievable progress in the field of artificial intelligence but there is still a long way to go. And we want AI to be more advanced.

It's in our nature to want to get better and better. When we had 3G we went for 4G; when 4G arrived we strived for 5G. Now that we have 5G, calls for 6G are already out.

Humans always want to go beyond limits, and taking AI to the level of self-awareness is going beyond limits.
Self-awareness in AI is the level where artificially intelligent systems become conscious of what they are.

It is when they become aware of their own needs and interests, and can understand the feelings of the humans around them.
Self-aware AI is considered an extension of theory of mind AI and is also not currently in existence.
But when attained, it would be the most advanced form of artificial intelligence.

Examples of Self-Aware AI

  • Agent Smith in the Sci-fi Movie, “The Matrix”
  • Eva in the movie, “Ex Machina”
  • Synths in the TV series, “Humans”

Classification Based on Functionality

Artificial intelligence systems are grouped into three categories based on their functionality. The categories are:

  • Narrow/Weak Artificial Intelligence
  • General/Strong Artificial Intelligence
  • Super Artificial Intelligence

Narrow/Weak AI

Narrow Artificial Intelligence, also called Weak AI, is a type of artificial intelligence that is programmed to carry out a specific task. It applies intelligence to solving a specific problem and doesn't function beyond its limits.

In narrow AI there is no genuine intelligence and no self-awareness, no matter how sophisticated the system may be.
Most currently existing types of artificial intelligence are narrow AI. They are brittle, and when they fail they can cause disruptions.
But beyond its brittleness, narrow AI can be really helpful, and even better when perfected.

Examples of Narrow AI

  • Personal digital assistants, like Apple's Siri, IBM's Watson, Google's Google Assistant, Amazon's Alexa and Microsoft's Cortana.
  • AI-managed traffic lights, as used in some Chinese cities: Shenzhen, Shanghai, Beijing and Hong Kong.
  • Chatbots

Artificial General Intelligence (AGI)

Artificial General Intelligence (AGI), also known as strong AI, is a type of artificial intelligence that can do virtually anything a human being can do intellectually. It is simply an artificial intelligence that is equal to human intelligence.


In contrast to Narrow AI, Artificial General Intelligence is not limited to a specific task, as it can perform any intellectual task at a human level.


Some AI researchers refer to it as "genuine artificial intelligence". It is the kind of artificial intelligence that was hoped for in the early days of AI research.
But how do we know, or how can we tell, that an AGI has achieved the level of human intelligence?

AI researchers have debated what criteria should be used to determine whether an AI system has achieved a human level of intelligence.

Many tests have been proposed to ascertain whether an AGI has reached human-level intelligence. These include the Turing test, the coffee test, the robot college student test, the employment test and the IQ test.

There is significant progress towards true AGI, and Sophia the Robot is often cited as evidence.
And as we get closer to achieving this type of artificial intelligence, many questions of old still remain.

Can an AI system reach human-level intelligence? To borrow from Turing's paper, "Can machines think?".
I think it's a 'Yes', and it's only a matter of time.

Examples of Artificial General Intelligence

  • Sophia the Robot when fully developed
  • Eva in the movie, “Ex Machina”

Super Artificial Intelligence

Super Artificial Intelligence is a hypothetical type of artificial intelligence that is far more intelligent in virtually everything than the smartest humans. It would be the ultimate result of decades of research and development in the field of Artificial Intelligence.

This type of artificially intelligent system would be able to do just about anything a human can do, and do it better in every respect.
Because of its great potential, Super Artificial Intelligence is thought by many researchers and prominent figures to be a threat to human survival.

The big question with Super Artificial Intelligence is: how can we (humans) control something that is superior to us? Perhaps because some can't answer that question, they imagine doomsday scenarios. And I think, reasonably, we should be concerned.

Examples of Super Artificial Intelligence

  • Skynet in the Terminator Franchise movies

How AI Works

Honestly, how AI works is simple yet complicated. AI works by feeding on large amounts of data and learning from that data in order to make decisions and predictions, and in some cases take actions, at superhuman speed, efficiency and accuracy.

There are various components and concepts that form the backbone of how Artificial Intelligence works. They are mainly:

  • Machine Learning
  • Computing Power
  • Big Data
  • Cloud Computing
  • Artificial Neural Networks (ANNs)
  • Deep Learning

So let's talk about them. Or should we?
I think we (or rather I) should explain them, because in doing so you may come to better understand how AI works.

Machine Learning

Machine learning, as the name implies, is simply the science of making machines (computers) learn by themselves and make decisions based on what they have learnt, in a human-like style, without being explicitly programmed to do so.

More technically, machine learning is the science of getting computer systems to learn, improve their learning and perform tasks effectively and autonomously using algorithms and statistical models.

The term machine learning was coined in 1959 by Arthur Samuel. He developed the Samuel Checkers-Playing Program, which was among the world's first successful self-learning programs. He played his own part in the development of machine learning, and so did many others.

The idea behind machine learning is quite simple: get machines to learn from examples and experience by feeding them data and information instead of explicit instructions. And it's working.
Google Search is a perfect example. Google Search's algorithms learn from data and information found on the internet, decide which results best answer a user's query, and rank them in order.
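Here is a minimal sketch of learning from examples (assuming scikit-learn is installed; the fruit data is made up, and this is my illustration, not how Google Search works):

```python
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [weight_in_grams, has_smooth_skin] -> fruit label
features = [[150, 1], [170, 1], [140, 0], [130, 0]]
labels = ["apple", "apple", "orange", "orange"]

model = DecisionTreeClassifier()
model.fit(features, labels)        # the machine learns rules from examples

print(model.predict([[160, 1]]))   # -> ['apple'], with no rule explicitly coded
```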

Computing Power

Computing power boils down to how fast a computer can perform an operation with respect to accuracy and efficiency.

Different computers have varying computing power; hence, they tend to solve similar problems at different speeds.

For example, if machine A performs an operation in 2 seconds while machine B performs the same operation in 4 seconds, then machine A has twice the computing power of machine B.
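One rough way to compare two machines in practice (a crude sketch; real benchmarks control for far more than wall-clock time) is to time an identical workload on both:

```python
import time

def benchmark(n: int = 10_000_000) -> float:
    """Time a fixed workload and return elapsed seconds."""
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i * i            # the same work on every machine
    return time.perf_counter() - start

print(f"Workload took {benchmark():.2f} s on this machine")
# Run this on machine A and machine B: if A finishes in 2 s and B in 4 s,
# A has roughly twice the effective computing power for this task.
```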

Computing power plays a pivotal role in artificial intelligence because the more computing power you have, the faster, more accurately and more efficiently AI systems will perform.
The historical development of artificial intelligence is closely tied to computing power. Compare the chess program written for the Ferranti Mark 1 with IBM's Deep Blue: they performed at very different levels of accuracy, efficiency and speed, partly because of the computing power they operated with.

The chess program written for the Ferranti Mark 1 took an average of 15 to 20 minutes to make a move after exploring thousands of possible positions, while IBM's Deep Blue could explore up to 200 million possible moves per second.
That’s Crazy!

And that was in 1997.
Now we have far more computing power, and with that power AI systems are capable of being extraordinarily faster, more accurate and more efficient than any human. Without this computing power, we might still be stuck waiting 15-20 minutes for a single chess move.

Graphics Processing Unit

A Graphics Processing Unit (GPU) is a specialised computer chip that renders graphics by performing rapid mathematical computations. GPUs have almost 200 times more processors per chip than CPUs.

GPUs were originally developed to handle computer graphics and image processing. But because of their highly parallel structure, which makes them more efficient than CPUs at processing large volumes of data, they were adopted for AI development.

GPUs are pivotal for analysing high volumes of unstructured data in order to train deep learning systems, and they tend to greatly accelerate deep learning processes.
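A small sketch (assuming PyTorch is installed; my illustration) of the kind of large, regular computation where a GPU's parallel cores shine:

```python
import torch

a = torch.rand(4096, 4096)
b = torch.rand(4096, 4096)

c_cpu = a @ b                 # matrix multiply on the CPU

if torch.cuda.is_available():  # only if a CUDA GPU is present
    a_gpu, b_gpu = a.to("cuda"), b.to("cuda")
    c_gpu = a_gpu @ b_gpu      # the same multiply, spread across thousands of GPU cores
    torch.cuda.synchronize()   # wait for the GPU to finish before reading results
```

Deep learning training is dominated by exactly these matrix operations, which is why GPUs accelerate it so dramatically.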

Big data

The names of these AI components tend to give away their definitions. Take machine learning, for instance: from the name alone, one can easily infer its definition.

Now Big Data.

Well, one could simply infer that it means data that is BIG. But the term is not quite that simple.
According to the popular Gartner definition (circa 2001), "Big data is data that contains greater variety, arriving in increasing volumes and with ever higher velocity". This is widely known as the three Vs: Variety, Volume and Velocity.


Therefore, big data is voluminous data (volume), of many types (variety), arriving at a very fast rate (velocity), a rate at which traditional data processing software cannot cope. This big data is what AI systems feed on, and it is stored primarily in the cloud.

Examples of Big Data

  • The New York Stock Exchange, which generates about 1 terabyte of new trade data per day.
  • Social media sites like Facebook and YouTube, which generate hundreds of terabytes of new data per day.

Cloud computing

What is cloud computing?
I would simply say it is computing over the cloud. But what is the cloud?
I have heard a lot about "the cloud", and I guess you have too. Before I knew better, I wondered whether "the cloud" was the cloud in the sky.

Now that I know what it is, I can tell you for sure that it isn't the cloud in the sky, but it is also not something new. The cloud is the internet, and the internet is the cloud.
That's simple, right?
Yes it is.

Now, if the cloud is simply the internet, then cloud computing is simply computing over the internet.
The need for cloud computing arises from the need to manage tons of data effectively and efficiently. Managing tons of data is complicated and expensive, especially traditionally, when you have to manage it on-site.

According to Salesforce, with traditional computing "the amount and variety of hardware and software required to run them are daunting. You need a whole team of experts to install, configure, test, run, secure, and update them".

But managing them over the internet is less expensive and easier.

In AI, cloud computing is a major solution to what had hindered development.

You see, we arrived at the era of big data. There was a lot of data pouring in every day for AI systems to feed on, but it was extremely difficult to get AI systems to feed on this data effectively using traditional, on-site computing.


With cloud computing, the problem was solved: AI systems can now feed on data over the internet easily, anywhere and everywhere.
Today there are many cloud computing service providers. These providers offer the necessary infrastructure (platform) for cloud computing, usually on a pay-as-you-go model.
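As a hedged sketch of what "computing over the internet" looks like in code (assuming the boto3 AWS SDK is installed and configured; the bucket and file names here are made up):

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")
s3.upload_file(
    Filename="local_training_data.csv",  # hypothetical local dataset
    Bucket="my-ai-datasets",             # hypothetical S3 bucket
    Key="datasets/training_data.csv",    # where the object lives in the bucket
)
# From here, cloud-hosted training jobs can read the data with no
# on-site hardware to install, secure or maintain.
```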

Examples of cloud computing providers

  • Oracle Cloud
  • Amazon Web Services
  • Microsoft Azure
  • Google Cloud Platform
  • Adobe
  • IBM Cloud

Artificial Neural Networks

Biologically, a neural network is a network of neurons that transmits information throughout the body chemically and electrically. Neurons are the basic building blocks of the nervous system.
Structurally, a neuron consists of three basic parts: the dendrites, the cell body and the axon.

The dendrites function as the receiving part of the neuron: they receive synaptic input. The axon functions as the transmitting part of the neuron.

Basically, the dendrites of a neuron receive an input, and the axon transmits that input (as an output) across a gap (the synapse) to the dendrites of another neuron. The chain continues through a network of neurons until the information reaches its target.

In a broader view, the nervous system receives information internally or from the external environment, processes the information at high speed, and then makes the proper decisions for output.

The role of artificial neural networks is not that different from that of biological ones. Artificial neural networks function in the same manner: they receive information from internal sources (in the form of numeric data) or external sources, process the information at very high speed, and make the proper decisions for output.

Technically, an artificial neural network is an information-processing framework consisting of thousands (in some cases, millions) of simple, densely interconnected processing nodes.
These nodes are organised into input, hidden and output layers. The hidden layers consist of units that transform inputs into something the output layer can use to make proper decisions.

Artificial neural networks serve as the means for machine (deep) learning algorithms to work together and process complex data inputs.

Artificial Neural Network

Artificial neural networks enable AI systems to receive information directly from the external environment and make sense of it. ANNs are used by AI systems to recognise faces and understand natural language.
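A tiny forward pass through a 2-3-1 network (my sketch, with random weights purely for illustration) makes the input/hidden/output structure concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

x = np.array([0.5, -1.2])                        # input layer: 2 features
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)    # hidden layer: 3 units
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)    # output layer: 1 unit

hidden = np.tanh(W1 @ x + b1)                    # hidden units transform the input
output = 1 / (1 + np.exp(-(W2 @ hidden + b2)))   # sigmoid squashes to (0, 1)

print(output)   # a probability-like score the output layer can act on
```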

Deep learning

In order to truly comprehend what deep learning is and how it works, one may need to know how the human brain works, or rather, how the human brain learns.

The human brain learns by experience, by example and by study: that's how we learn and, basically, how we define learning.

To learn (whether by experience, example or study), our brain needs to take in floods of information about what it has to learn, process that information and make meaning of it.

Our brain cannot handle this flood of information as one undifferentiated mass; it uses biological neural networks to facilitate the collection and processing of the information through which it learns.

AI researchers developed artificial neural networks to help AI systems collect, process and handle information (data), just as the human brain does. Basically, the more information there is to process, the more layers of neural network are required.

Therefore, data on the scale of big data requires a deep neural network for AI systems to learn from it.
With the advent of artificial neural networks and big data, a new form of machine learning was made possible: deep learning.

Deep learning is a type of machine learning, inspired by the human brain, that uses deep-layered artificial neural networks to learn from large amounts of data in order to make decisions and predictions at superhuman levels of accuracy.

With deep learning, AI developers and researchers attempt to make AI systems not just learn, but do so in a manner similar to humans.

Artificial Brain Illustration

Deep learning networks are basically fed floods of information on a certain subject, and they learn from that information. For example, if a deep learning network is fed tons of information about food, it will learn from that information and be able to differentiate a hot dog from a burger.

The AI system that the AI startup Dessa used to create an AI-generated voice of podcaster Joe Rogan flourished on deep learning. They fed deep learning networks a large amount of audio data of Joe Rogan's voice, and it learned to speak like Joe Rogan.
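As a minimal sketch of what such a deep network looks like in code (assuming TensorFlow/Keras is installed; the layer sizes and the hot-dog-vs-burger labels are my illustration, not Dessa's design):

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(64, 64, 3)),  # raw image pixels in
    keras.layers.Dense(128, activation="relu"),     # hidden layer 1
    keras.layers.Dense(64, activation="relu"),      # hidden layer 2: the "deep" part
    keras.layers.Dense(2, activation="softmax"),    # hot dog vs. burger out
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With thousands of labelled food photos loaded as arrays:
# model.fit(images, labels, epochs=10)
```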

Deep learning is a major breakthrough in AI today. It's said to be at the core of present-day AI discovery.

Conclusion

Artificial intelligence has divided opinion from many perspectives since its foundations were laid. To some, it will develop into a threat to human survival; to others, it will help shield us from threats to our survival.

Nonetheless, the rise of Artificial Intelligence is a strong statement of how advanced humanity is. We have come a long way through history, today, some things that were myths centuries ago are now reality.

I am optimistic that artificial intelligence will continue to develop, irrespective of the divides, and that humanity will advance with it.
That’s my optimism, but for you I say:

Ask not what Artificial Intelligence will do to you, ask what you will do with Artificial Intelligence

Emeka Ewele


