Complete Guide To Artificial Intelligence

Although Artificial Intelligence is often seen as a futuristic technology that we are only just breaking into, it has actually been around since the middle of the 1900s.

For Artificial Intelligence (or AI) to even be a concept, people needed access to a digital electronic machine that could carry out arithmetic operations – or, as we know it, a computer.

To understand AI, you need to know how it works. But to see how AI can impact the future, you need to break down the past. Today we will explain everything you need to know about AI, including its basic functions, evolution, usage, and benefits.

What Is Artificial Intelligence?

Broken down to its simplest elements, Artificial Intelligence is a type of program that can think the way humans do. This means it can simulate human intelligence and even mimic our actions.

The most common trait an artificially intelligent computer might show is the ability to learn or solve problems. This means you could give the machine a problem and it can overcome it by trying and failing.

For example, a robotic vacuum cleaner that maps your floor plan as it cleans knows where the walls are and won't bump into them. It may take a couple of attempts to understand the layout, but it will soon learn your floor plan.

However, the concept most consumers and futurists have in mind when talking about Artificial Intelligence is the computer's ability to rationalize. This means weighing multiple correct answers and choosing the one that best completes a goal or creates the fewest problems.

For example, in the movie I, Robot, there is a crash that harms the main character Del Spooner and a family with a child. The robot saves Spooner because he has a higher likelihood of surviving, whereas Spooner believes the moral option was to save the child.

In this sci-fi scenario, the AI robot could rationalize the best person to save.

How Does AI Work?

But of course, that’s a movie and doesn’t show how AI currently works in our world.

In reality, an AI system is given large sets of data and told to scan them for patterns. The program takes time to process the data, then produces its own version. After that, it compares its creation against the original data and keeps refining its output until the two are hard to tell apart.

For example, if you were to feed an AI computer Taylor Swift songs and then ask it to write a song, you would end up with music similar to Taylor Swift's style.

Without original data to work with, the AI computer wouldn't be able to reach the end goal. This means that AI cannot work without good original data; the more data it has, the more accurate it can be.

This means that human interaction is still needed to create the end result, and the AI cannot form concepts that haven’t already been suggested to it.
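To make this concrete, here is a minimal sketch of the idea in Python: a tiny Markov-chain text generator, standing in for a real lyric-writing model, that learns which words tend to follow which in its training lines and then produces a new line in a similar style. The training lines and names are purely illustrative.

```python
import random
from collections import defaultdict

def train(lines):
    """Count which word tends to follow which in the training data."""
    transitions = defaultdict(list)
    for line in lines:
        words = line.split()
        for current, nxt in zip(words, words[1:]):
            transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length=8):
    """Walk the learned word-to-word patterns to produce a new line."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:          # no learned pattern to continue from
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Toy "training data" standing in for a large lyrics dataset.
lyrics = [
    "we are never ever getting back together",
    "we are young and we are free",
]
model = train(lyrics)
print(generate(model, "we"))
```

The output only ever recombines patterns found in the training lines, which is exactly why the quality of the original data matters so much.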

Types Of AI

Generally speaking, there are four types of AI.

Limited Memory

Limited Memory AI may already be active in your life. It’s an AI system that learns from past experiences and builds up that knowledge like an encyclopedia. The AI then uses that historical data to create predictions.

Many writing programs, such as Microsoft Word, will have tools that suggest the rest of the sentence for you. This is a type of Limited Memory AI.

The "Limited" in its name comes from the lack of storage: to ensure a quick response, the data or history isn't kept in the system's long-term memory.
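As a rough illustration of the idea (not how any particular product actually works), here is a sketch of a suggester that keeps only a short rolling window of recent words and predicts the next word from that limited history. The window size and class name are invented for the example.

```python
from collections import deque, Counter

class LimitedMemorySuggester:
    """Suggests the next word using only a short, rolling history window."""

    def __init__(self, window_size=50):
        self.history = deque(maxlen=window_size)  # old words fall out automatically

    def observe(self, text):
        self.history.extend(text.lower().split())

    def suggest(self, last_word):
        # Look at what followed `last_word` within the remembered window.
        words = list(self.history)
        followers = Counter(
            nxt for cur, nxt in zip(words, words[1:]) if cur == last_word
        )
        return followers.most_common(1)[0][0] if followers else None

suggester = LimitedMemorySuggester()
suggester.observe("please review the report and send the report today")
print(suggester.suggest("the"))   # likely "report"
```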

Reactive Machines

Reactive Machines were the first type of successful AI and, because of this, they are also the most basic. This type of AI program doesn't learn from its past: give it the same question and it will respond the same way every time.

One simple example of a reactive machine is a calculator. It can add up your calculations and will give you the same response every time. But a more recent development that shows you how we continue to use this technology can be found in streaming companies’ recommendation systems.

For example, Netflix uses Reactive Machine learning to log which TV shows you watch and therefore which ones to suggest. Simply by watching a show or a movie, the program reacts and offers you new suggestions based on this information.
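The defining feature of a Reactive Machine, same input in and same output out with nothing remembered in between, can be shown with a trivial sketch; the calculator function below is only an illustration.

```python
def reactive_calculator(a, operator, b):
    """A reactive machine has no memory, so the same input always gives the same output."""
    operations = {
        "+": a + b,
        "-": a - b,
        "*": a * b,
        "/": a / b if b != 0 else None,
    }
    return operations[operator]

print(reactive_calculator(2, "+", 3))  # 5 today
print(reactive_calculator(2, "+", 3))  # 5 tomorrow, and every time after that
```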

Theory Of Mind

Theory of Mind AI is one of the most interesting concepts inside the AI world – “I think therefore I am”.

In this concept, the computer is able to communicate emotionally with a human and even hold meaningful conversations. To do this, the computer needs to understand the complexity of human language, including tone, idioms, and abstract thought. It also needs to make decisions as fast as a human to keep up the speed of the conversation.

The best-known attempt at a Theory of Mind AI system is a robot called Sophia. She can recognize faces, has her own facial expressions, and can hold conversations that feel surprisingly close to speaking with a human.

Self-Awareness

The most advanced type of AI is one that is self-aware. This is a concept that the scientific community hasn't yet been able to create successfully.

A successful self-aware AI would have desires, emotions, and needs, just like a human. It would be aware of its own emotional state and react based on it.

To make this work, scientists would need to embed a sense of emotion into the robot and then allow it to make connections between that emotion, its desires, and how the two are affected by stimuli.

For example, “I have not been able to complete a job, which makes me feel unproductive and unwanted.” or “I have made people laugh, and so I am happy.”

Evolution Of Artificial Intelligence

Artificial life isn’t a new concept. In fact, in Greek mythology, there is a story about a bronze man called Talos. Talos was built by the Greek god of invention – Hephaestus. Talos was designed to hurl boulders at enemy ships, anticipating their moves and finding the best weapon to create the most damage.

This concept was born centuries before the first computer but shows how humankind has always looked at using machines to do our bidding for us.

You could argue that the story of AI begins with the technological successes that made fiction a reality, but we cannot ignore the theorizing and abstract ideas that brought the concept to life.

1943

AI as we know it may have stemmed from centuries of creative thinking, but you could argue that the first steps towards AI as a reality came from Walter Pitts and Warren McCulloch. In December 1943, the pair published a paper called "A Logical Calculus of the Ideas Immanent in Nervous Activity", in which they laid out a theoretical model of how the brain works.

Walter Pitts was a logician and Warren McCulloch was a neurophysiologist. They combined their respective knowledge to produce a mathematical model showing how an artificial neuron could work like a biological neuron in the brain, giving birth to the idea of neuroinformatics.

1949

Only six years later, the psychologist Donald Hebb published the book "The Organization of Behavior: A Neuropsychological Theory".

This book focused on neuroscience and neuropsychology. It drew on years of research showing that the brain is both a functional organ that controls the body and the seat of the mind's higher emotional and intellectual functions.

Before this book, that idea couldn't be confirmed. In it, Hebb showed that connections between neurons that are used more frequently become stronger than the rest. In what is now known as Hebb's Law, the book states:

“When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”

Or “Neurons that fire together, wire together”.

It means that when we do something repeatedly, we lay down a memory or learning pathway at the same time. AI researchers now borrow this principle to replicate human-like learning in artificial neural networks.
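In artificial neural networks, Hebb's Law is often turned into a simple weight-update rule: strengthen a connection whenever the neurons on either end of it are active at the same time. Below is a minimal sketch of that rule; the learning rate and activity values are chosen purely for illustration.

```python
def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    """Hebb's rule: the change in weight is proportional to pre * post activity."""
    return weight + learning_rate * pre_activity * post_activity

# Two neurons that repeatedly fire together: their connection grows stronger.
w = 0.2
for _ in range(5):
    w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)
print(round(w, 2))  # 0.7
```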

Also in 1949, Claude Shannon laid out a theory of how a computer could play chess in his paper "Programming a Computer for Playing Chess."

1950

Alan Turing is the first person in our history lesson to move past psychology and anatomy and instead link these concepts to computers. From his paper "Computing Machinery and Intelligence", the "Turing Test" was born. The test is used to judge whether a machine can be considered intelligent.

The test is simple – can you tell that a computer created the answers to a question? For example, if you give someone five poems and ask them to pick out the one created by AI, and they cannot identify the AI-generated poem, then the AI can be considered intelligent.

In the same year, Isaac Asimov published a book called "I, Robot", which was adapted into a movie in 2004. The book introduced the "Three Laws of Robotics". Although this was a work of science fiction, many people working in the robotics industry treat these laws as a standard. They are:

“First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”

1951

Using Claude Shannon’s chess theory, a Harvard undergraduate team of Dean Edmonds and Marvin Minksy created the first neural network computer. This means the computer performs like a brain and could problem solve.

1956

Although a lot of work happened between 1951 and 1956, much of it repeated what the founding scientists already knew. It wasn't until John McCarthy and his colleagues proposed the "Dartmouth Summer Research Project on Artificial Intelligence" that we saw another significant advancement.

This proposal coined the term Artificial Intelligence. It led to a six-to-eight-week brainstorming project in which mathematicians and scientists joined forces to work on AI. In that roughly two-month time frame, the group developed ideas on topics such as natural language processing and the theory of computation.

1958

Building on that project, McCarthy wrote a research paper called "Programs with Common Sense". Around the same time, he developed the Lisp programming language, which allowed computers to separate syntax from meaning and made it easier for programs to handle human-like language.

The paper also contained a theory of how computers could learn from their experiences as we do, reasoning from advice and common sense rather than a simple pass-and-fail system.

1959

In 1959, John McCarthy and Marvin Minsky founded the MIT Artificial Intelligence Project, which grew into MIT's Artificial Intelligence Laboratory. To this day, this part of MIT creates AI that models thought, uses memory as a learning experience, and produces movement like a natural human body.

The project was set up to allow more scientists and students to work in the Artificial Intelligence field.

1964

The United States government created ALPAC (the Automatic Language Processing Advisory Committee) in 1964. The committee consisted of seven scientists, and its aim was to create a translation machine to help government officials talk to people across the world. That was the public explanation, anyway; in reality, the Cold War meant that translating Russian had become imperative.

The committee continued until 1966, when its own report concluded that more basic research into computational linguistics was needed before even attempting to build a translation machine. Essentially, the committee had been created too early.

Unfortunately, instead of funding that basic research first, the US decided to scrap ALPAC altogether. This in turn led to many government-funded AI projects being canceled.

1965

Wanting to continue expanding Artificial Intelligence research, McCarthy started the AI Lab at Stanford University.

From this university, AI scholarships were handed out, creating a wave of change that allowed computer-generated music and art to develop. Stanford even created some of the earliest robotic arms.

Early AI music didn't contain vocals, but the programs could produce consistent backing and bassline patterns of the kind you can still find on keyboards today.

1969

The expansion at Stanford continued, and in 1969 a team of experts began developing a system that uses AI to diagnose blood infections. Headed by Edward Shortliffe, the system, called MYCIN, used backward chaining, a process that reasons from a goal back through rules to establish unknown facts. For example, the computer would hold all the information about the available medicines (the possible solutions) and follow the rules attached to each one, using a process of elimination to work out which would cure the illness, and therefore what the illness was.

The system was able to produce medical recommendations faster, and often more accurately, than general practitioners.
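To give a feel for backward chaining, here is a toy sketch, not MYCIN's actual rule base; the rules and facts are invented. The program starts from a goal and works backwards through rules until it reaches known facts.

```python
# Each rule reads: the conclusion holds if all of its conditions hold.
RULES = {
    "bacterial_infection": ["fever", "high_white_cell_count"],
    "prescribe_antibiotic": ["bacterial_infection"],
}

FACTS = {"fever", "high_white_cell_count"}

def prove(goal):
    """Backward chaining: work from the goal back to the known facts."""
    if goal in FACTS:
        return True
    conditions = RULES.get(goal)
    if conditions is None:       # no rule concludes this goal and it isn't a known fact
        return False
    return all(prove(condition) for condition in conditions)

print(prove("prescribe_antibiotic"))   # True: the facts support the recommendation
```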

1973

In 1973, a damning report known as the Lighthill Report informed the British government that academic research into Artificial Intelligence wasn't going well. The report was commissioned by the British Science Research Council, and it stated that "In no part of the field have the discoveries made so far produced the major impact that was then promised".

This led the government to pull funding from AI research at the majority of British universities. With fewer bright minds working on AI, Britain lagged behind the countries leading the field.

The report's biggest complaint was that AI programs couldn't solve large real-world problems; they were only good for small-scale problem-solving. Although this still meant progress was heading in the right direction, it wasn't enough for the British government.

1974

This period is known as the First AI Winter, as the Lighthill Report created a domino effect across the Western world. With less funding reaching the leading researchers, there was a six-year drought in AI progress.

1980

After half a decade of little progress in the AI world, the R1 was created. Now known as XCON, R1 was a program created by John McDermott. The system introduced a new type of automation that allowed customers ordering new computer systems to be given exactly the right parts.

To paint a picture: in the tech industry at the time, sellers weren't technologically adept and everything had to be sold separately. If you buy a computer today, everything is already connected, but once upon a time you were given every wire separately. Sellers would often give customers the wrong cables, which led to frustration and lost time.

The XCON program matched parts to orders based on the customer's needs. This reduced the time and money wasted on sending out additional parts. There was instant investment in this software, and the approach is now considered standard practice.

You could even argue that XCON was the first step into e-commerce.
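The core idea behind a configurator like XCON can be sketched very simply. The systems and parts lists below are invented, but the principle is the same: expand an order into every component the chosen system needs, so nothing incomplete is shipped.

```python
# Invented compatibility rules: each base system implies the parts it needs.
REQUIRED_PARTS = {
    "VAX-11/780": ["power supply", "memory board", "serial cable", "terminal"],
    "VAX-11/750": ["power supply", "memory board", "serial cable"],
}

def configure_order(system, extras=()):
    """Expand an order into a complete, compatible parts list."""
    if system not in REQUIRED_PARTS:
        raise ValueError(f"Unknown system: {system}")
    return [system, *REQUIRED_PARTS[system], *extras]

print(configure_order("VAX-11/780", extras=["printer"]))
```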

For the rest of the 80s, new systems were being built, all of which required updates and maintenance. Because of the additional costs of those updates, many companies went back to pen-and-paper management.

1991

In 1991, the U.S. military created a new AI program called DART, which stands for Dynamic Analysis and Replanning Tool. The tool combines data processing and management systems to create a planner. The plan can be edited rapidly from multiple access points, so military officials can see the costs and movements of each operation, reducing costs by creating more logical timeframes.

Although this technology is still used today, it wasn't enough to fend off a second AI winter.

2005

In 2005, there was a big push to promote self-driving cars, and for these cars to work well they had to use AI. Technically, the first self-driving car concept dates back to 1939, but it relied on magnetic guidance in the road rather than AI.

DARPA, the Defense Advanced Research Projects Agency, wanted to create a fleet of driverless military vehicles and needed research to push the technology forward. To encourage this, it offered a $2 million prize to whoever could win its Grand Challenge race.

The car nicknamed Stanley won the race. It could navigate a mapped road and used its artificial reasoning skills to maneuver through unmapped terrain in real time. Stanley was created by a team of 100 researchers, students, and mechanics from Stanford University and the Volkswagen Electronics Research Laboratory.

There was a 10-hour limit to complete the course, which had sharp turns, a lot of obstacles, and a steep cliff. Of the 22 vehicles in the race, only 4 completed the course.

2008

From this point onwards, Google, Apple, and Amazon started spearheading AI technology. Universities and governments were no longer the main backers of Artificial Intelligence; companies were taking charge.

In 2008, Google launched speech recognition technology, giving hands-free and blind users easier access to technology.

2011

In 2011, Apple released the first version of Siri, an artificially intelligent assistant operated through its iOS system.

Google followed suit, creating a deep learning algorithm trained on YouTube videos. The neural network system was able to recognize cats without ever being told what a cat is, which showed a new level of deep learning.

It did this by processing randomly chosen YouTube videos, many of which contained cats. By spotting the visual patterns those videos had in common, it learned what counted as a cat after seeing enough examples, without human intervention.

2014

In 2014, Amazon launched Alexa, its own virtual home assistant in the same vein as Siri, allowing users to listen to music, turn on lights, and search the internet through voice commands.

Google also created the first self-driving car to pass a U.S. state driving test.

2016

In 2016, Sophia, who would go on to become the first robot granted citizenship, was activated. She can respond to ordinary conversation almost as if a real person were speaking through her, has a sense of humor, and can judge another person's emotional state.

2018

In 2018, Google created a natural language processing model that made tasks such as translating between languages easier.

2020

During the global pandemic, the AI algorithm LinearFold was used to predict the structure of the virus's RNA roughly 120 times faster than previous methods, helping researchers understand how the virus could change and adapt and speeding up vaccine development.

When Is Artificial Intelligence Used?

As you can see from our brief history of AI, AI tools have grown in multiple directions: starting off as a way to understand how the brain works, moving on to making working life more productive, and currently helping humans with mundane tasks – like asking, by voice command, how many grams are in a pound of cheese while your hands are covered in flour.

Generally, there are two ways in which AI is used: Narrow AI systems and AGI.

Narrow AI

Narrow AI gets its name from its narrow usage. The program will follow a single task extremely well, but it isn’t able to do anything else.

It does the task so well, in fact, that it can come across as intelligent; however, its small skill set means it has more limitations than it lets on.

Common Narrow AI systems that we use on a daily basis include search engines and virtual assistants such as Siri and Alexa. These systems are the most commercially successful type of AI and are often powered by machine learning.

Your Alexa, for example, will understand your accent, your needs, and your questions better the more you use it.

All of these systems tend to follow one real goal, such as finding something. Search engines find websites with the information you are after, and virtual assistants do the same.

AGI

AGI stands for Artificial General Intelligence. Also known as “Strong AI”, this term refers to the type of AI we expect from science fiction shows. Robots, supercomputers, and technology which can solve almost every problem.

AGI is what the AI research community is aiming toward but hasn't reached yet. Sophia is arguably the closest technology to that point, but even she cannot genuinely claim to be an AGI.

Benefits Of Using Artificial Intelligence

As AI constantly advances, you may wonder what the point of this artificial intelligence really is. Well, there are many benefits to giving our computers more human-like learning, including the six points below.

Time Efficiency

Allowing computers to take on mundane tasks lets us become more efficient. Repetitive tasks can be completed more quickly when they are automated. Computers don't need to take breaks and can finish boring tasks without becoming distracted. They will also complete the task to the same standard every time.

Multi-Tasking

Despite what many people like to believe, humans cannot multitask. This is because when we think we are multitasking, we are actually doing one small task and then switching to another small task, and then back again. We aren’t doing these tasks at the same time.

Computers, however, can act as though they have multiple brains at once. If one system is working on task A, another can work on task B to successfully multitask. However, if the system isn’t powerful enough to take on two tasks at once, it can at least switch between the two tasks faster than we can.

This again creates greater efficiency.
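As a small illustration of the parallel-tasks idea using Python's standard library, the sketch below hands two placeholder jobs to a pool of workers so they can run side by side rather than one after the other.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def task(name, seconds):
    """A placeholder job that simply takes some time to finish."""
    time.sleep(seconds)
    return f"{name} done"

# Two workers handle task A and task B at the same time,
# instead of one worker switching back and forth between them.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(task, "task A", 1), pool.submit(task, "task B", 1)]
    for future in futures:
        print(future.result())
```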

Eased Workload

One of the best tools within AI is the ability to break work, and even entire task flows, into more practical workloads. For example, law enforcement can use AI to narrow down a suspect list based on the facts they have gathered. The AI tools can then eliminate suspects from the list much faster and more accurately than would be possible without the technology.

In general, your workload will be lightened as algorithms take away the mundane elements of your work, like sorting files into "urgent", "due next week" and so on. This means you don't have to spend your morning wading through admin work to get to the meat of your job.
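Here is a very simple, rule-based sketch of that kind of triage; the deadlines and labels are made up for the example.

```python
from datetime import date, timedelta

def triage(item):
    """Sort a work item into a bucket based on its deadline."""
    days_left = (item["due"] - date.today()).days
    if days_left <= 1:
        return "urgent"
    if days_left <= 7:
        return "due next week"
    return "later"

inbox = [
    {"name": "contract review", "due": date.today() + timedelta(days=1)},
    {"name": "quarterly report", "due": date.today() + timedelta(days=5)},
    {"name": "archive cleanup", "due": date.today() + timedelta(days=30)},
]
for item in inbox:
    print(item["name"], "->", triage(item))
```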

Execution Of Complex Tasks

Although AI can complete simple tasks with high efficiency, we cannot ignore how it handles complex tasks too. For example, an AI system can read lengthy paperwork for you and create a summary of each file – essentially acting like an assistant who tells you which file is needed for which occasion.

Your AI could also find patterns and highlight them to you faster than a human can. This can help flag potential problems faster, giving you more time to solve the issue.
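A minimal sketch of automatic flagging, with illustrative data and threshold: the program scans a series of readings and highlights any value that sits far from the rest, so a person only needs to review the flagged items.

```python
def flag_outliers(values, threshold=2.0):
    """Flag values that sit far from the average of the series."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    std_dev = variance ** 0.5
    return [v for v in values if std_dev and abs(v - mean) > threshold * std_dev]

daily_costs = [102, 98, 101, 99, 100, 340, 97]   # one reading looks suspicious
print(flag_outliers(daily_costs))                 # [340]
```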

Operates 24/7

Of course, unlike humans, computers don't need to rest. Sure, you should turn your computer off and on every now and then, but that is nothing compared to the amount of time off humans need to stay functional and happy.

Your AI system can be running 24/7, 365 days a year, keeping your systems organized and constantly on high alert for any errors or issues coming your way.

Provides Faster And Smarter Decision Making

What takes a human hours or minutes to do, a computer can often do in seconds. Mundane and complex tasks alike are completed much faster by a computer than by a human.

Even simple things like writing can take a human a couple of extra minutes once typos, grammatical errors, and colloquial misunderstandings have been fixed. For a computer, a "slip of the finger" isn't a thing: it doesn't waste time correcting those errors because it doesn't make them.

Summary

Every year there is more development in Artificial Intelligence. We are currently going through another push in development, as virtual assistants, self-driving cars, and creations such as Sophia are continuously being worked on.

Throughout the history of AI, development started in universities, was taken on by governments across the world, and is now flourishing in the commercial world, where Google, Apple, and Amazon are battling it out to see who can develop the next big thing.
