Evolution of AI through the ages

Vikas Kulhari

This is a guest post by Vikas Kulhari. Vikas is an Intelligent Automation Consultant at KPMG. He is a certified Solution Architect who helps clients design, create, and maintain Intelligent and Robotic Process Automation (RPA) solutions.


Artificial Intelligence (AI) is no longer a new technology.

Most sectors have already started investing in AI research and implementation. It is ubiquitous now – autonomous vehicles, voice-controlled bots, facial recognition, computer vision, ICR, search recommendations, robots, and more.

However, you may be wondering:

  • Who invented AI?
  • Who coined this term (AI)?
  • Where did all this begin? 

So, I thought I would write a post about the AI journey. Here is a brief timeline as I see it:

1950 – The Turing Test: Alan Turing proposed the Turing test, which set the bar for an intelligent machine: a computer that could fool someone into thinking they were talking to a real person. Around the same time, Grey Walter built some of the first-ever robots.

1950 – I, Robot: I, Robot, a collection of short stories by science-fiction writer Isaac Asimov, was published.

1956 – Artificial Intelligence: John McCarthy coined the term “Artificial Intelligence”. A “top-down approach” was dominant at the time: pre-programming a computer with the rules that govern human behavior. 

1968 – 2001: A Space Odyssey: Marvin Minsky, the founder of the AI Lab at MIT, advised Stanley Kubrick on the film 2001: A Space Odyssey, featuring an intelligent computer, HAL 9000.

1969 – Shakey the Robot: The first general-purpose mobile robot was built. It was able to make decisions about its actions by reasoning about its surroundings.

1973 – AI Winter: The AI Winter began – millions had been spent with little to show for it. As a result, funding for the industry was slashed.

1981 – Narrow AI: Instead of trying to create a general intelligence, research shifted towards creating “expert systems”, which focused on much narrower tasks. 

1984 – Bottom-Up Approach: Rodney Brooks spearheaded the “bottom-up approach”, aiming to develop neural networks that simulated brain cells and learned new behaviors.

1997 – Deep Blue: Deep Blue, a supercomputer developed by IBM, defeated world chess champion Garry Kasparov.

2002 – Roomba: iRobot created the first commercially successful robot for the home – an autonomous vacuum cleaner called Roomba.

2005 – BigDog: The US military started investing in autonomous robots. BigDog, made by Boston Dynamics, was one of the first.

2010 – Dancing NAO Robots: At Shanghai’s 2010 World Expo, 20 NAO robots danced in perfect harmony for 8 minutes.

2011 – Watson: IBM’s Watson took on the human brain on Jeopardy! and won against the two best performers in the show’s history.

2014 – Eugene Goostman: 64 years after the test was conceived, a chatbot called Eugene Goostman was reported to have passed the Turing Test. The same year, Google invested a billion dollars in driverless cars, Skype launched real-time voice translation, and Amazon launched Alexa, a voice-controlled intelligent virtual assistant.

2016 – Tay: Tay was Microsoft’s chatbot. It caused controversy when it began to post inflammatory and offensive tweets through its Twitter account, and Microsoft shut the service down within 16 hours of launch.

2017 – AlphaGo: Google’s AlphaGo was the first computer program to defeat a professional human Go player, the first to defeat a Go world champion, and arguably the strongest Go player in history.

2018 – Google Duplex: Google demonstrated a fascinating (and somewhat creepy) AI assistant that could make calls on behalf of a user and perform tasks such as booking restaurant tables and hair salon appointments.

2019 – Tesla and Scania’s Autonomous Vehicles: Tesla and Scania have come up with concept self-driving cars and trucks. Scania’s concept truck doesn’t even have a cab, and that may be a game-changer.

As AI grows rapidly, we will see many large-scale AI projects in the near future.



One Artificial Intelligence (AI) that has made the difference

“…what we want is a machine that can learn from experience.” – Alan Turing (1947)

Artificial Intelligence has gone through phases of boom and bust – the AI winters. What has changed now? Why is it different this time?

Artificial Intelligence (AI) has always been about making machines as smart as us. We would like to have machines that can emulate human thinking and behavior. People have imagined such machines since at least 600 BCE, as expressed in myths, stories, and automatons. The current quest for Artificial Intelligence began about 60 years ago with the Dartmouth summer research project.

First Phase: Program the computer with AI algorithms

For the first six decades of modern Artificial Intelligence (AI), researchers mostly used logical rules to tell the computer what to do. For example, if we needed a program to correct spelling errors, we came up with lots of rules and data, which helped the computer detect an error and suggest correct spellings. This is how most of our computer programs work, and that is naturally how we approached Artificial Intelligence. The approach had a few initial successes before it ran into disappointments, and people soon discovered there were major limitations to modeling intelligence this way.
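The rule-based approach described above can be sketched in a few lines. This is a toy illustration, not any historical system: the dictionary and example words below are made up, and the "rule" is simply a lookup plus a closest-match fallback.

```python
import difflib

# A hand-built store of "rules and data": the dictionary of known words.
DICTIONARY = ["artificial", "intelligence", "machine", "learning", "computer"]

def correct(word: str) -> str:
    """Return the word if it is known, else the closest dictionary entry."""
    w = word.lower()
    if w in DICTIONARY:
        return w
    # Rule: suggest the most similar known word, if any is close enough.
    matches = difflib.get_close_matches(w, DICTIONARY, n=1, cutoff=0.6)
    return matches[0] if matches else w

print(correct("machne"))        # closest known word: "machine"
print(correct("intelligense"))  # -> "intelligence"
```

Every behavior here is hand-programmed: extending the system means adding more words and more rules, which is exactly where this style of AI hit its limits.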

Current Phase: Let the computer figure out the algorithm

After multiple AI winters, we turned to a different way – machine learning. It is not a new idea as such, and it was not discovered overnight. It has evolved, and we now have a perfect storm of better data and processing power that makes it a useful technology.

In this approach, we feed the computer a bunch of example data and let the computer figure out the algorithm. If we show the computer pictures of cats and teach it to distinguish them from other pictures, we can then ask it to decide whether subsequent pictures contain cats.

Internally, the computer uses techniques ranging from logistic regression to deep learning neural networks to figure out the algorithm. Machine learning (more specifically, deep learning) is currently the part of Artificial Intelligence that works!
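As a minimal sketch of the idea, here is logistic regression (which the text mentions) on a made-up toy dataset rather than real cat photos. Note that we never write an "if x >= 5" rule anywhere; the computer is given labeled examples and gradient descent figures out the decision rule:

```python
import math

# Toy labeled data: one feature per example, label 1 when the feature is "large".
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
ys = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Learn weight w and bias b by gradient descent on the logistic loss.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    grad_w = grad_b = 0.0
    for x, y in zip(xs, ys):
        err = sigmoid(w * x + b) - y   # prediction error for this example
        grad_w += err * x
        grad_b += err
    w -= lr * grad_w / len(xs)
    b -= lr * grad_b / len(xs)

def predict(x: float) -> int:
    return 1 if sigmoid(w * x + b) >= 0.5 else 0

print(predict(1.0), predict(8.0))  # small values -> 0, large values -> 1
```

The same "show examples, let the machine fit the rule" loop, scaled up to millions of parameters and images, is what deep learning does for cat pictures.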

Machine learning is a paradigm shift – computers are now learning to program themselves.

Transformation of Machine Translation through AI 

Take the example of machine translation, which is now approaching human-level accuracy. The earliest computer translation systems worked using complicated rules that were programmed one by one. The system looked up dictionary data to translate individual words and then applied language-specific grammar and context rules to improve the results. This worked well only in a few cases, because languages, like most things in the real world, do not adhere to a fixed set of rules.

To improve on this model, we started using statistical models in place of grammar rules. In this approach, translation systems were built from text that had already been translated by humans. The system scored the possible translations based on this past data and came up with the best translation it could. These statistical systems performed much better than the earlier rule-based systems but were still complicated to build. The computer was helping us, but we still needed experts to tune the data and rules.
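A toy sketch of the statistical idea (the word pairs and counts below are invented for illustration, and real systems scored whole phrases with far richer models, not single words): count how often humans translated each source word to each target word, then pick the highest-scoring candidate.

```python
from collections import Counter

# Made-up "past human translations": (source word, target word) pairs
# harvested from previously translated text.
observed_pairs = [
    ("gato", "cat"), ("gato", "cat"), ("gato", "feline"),
    ("casa", "house"), ("casa", "home"), ("casa", "house"), ("casa", "house"),
]

# Score each candidate translation by how often humans chose it.
counts = {}
for src, tgt in observed_pairs:
    counts.setdefault(src, Counter())[tgt] += 1

def translate(word: str) -> str:
    """Return the most frequently observed human translation."""
    return counts[word].most_common(1)[0][0]

print(translate("gato"))  # "cat" (chosen in 2 of 3 past translations)
print(translate("casa"))  # "house" (3 of 4)
```

Even in this toy form, you can see why experts were still needed: someone had to decide how to align, clean, and weight the data that feeds the counts.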

Enter machine learning! It is a black box that learns to translate by looking at training data. We do not need any experts – the computer figures out the rules by itself. It uses “neural networks” that understand the world through trial and error, much as a growing child does. When Google introduced machine learning to improve its Translate service in 2016, the improvements were substantial and changed the service overnight.

This machine learning model works for any language as long as you have enough training data for it. So, unlike in the past, we do not need linguists for each language, and we do not need to program the models separately for each language either. The approach also works for many other problems, such as describing what is in a photo or helping a chatbot provide intelligent responses using past data.

Artificial Intelligence (AI) in action

Computers are beginning to learn, and to do things better than us, in certain narrow areas. You may have seen some of them in action.

Face recognition: You may have seen this on Facebook and on Google Photos. Facebook’s DeepFace claims to be 97% accurate.

Speech recognition: This is what makes Amazon Alexa and Google Assistant possible. We are at about 97% accuracy now.

Games: Probably the most visible use, with Google’s AlphaGo and IBM’s Watson. AlphaGo Zero now learns simply by playing games against itself.

There are many more areas, from image classification to genomics, where AI has been getting better, and AI/ML will certainly continue to improve in more narrow areas.

What does this mean for us?

So, how is this going to impact us? It is hard to predict, and studies range from forecasting little disruption to major disruption. What we do know is that this is a paradigm shift and will change the way we do things in many areas. Here are a few possibilities:

Software 2.0: This is a shift in the way we write software. We can now collect the data and let the computer find the program for the problem in the best possible way, without human limitations. As Andrej Karpathy says, for many tasks it is easier to collect the data than to explicitly write the program.

Solve harder problems: We can look to solve many problems that programmers have no idea how to solve. Consider face recognition, which would have been very hard with the traditional way of programming.

Robotic vs. human jobs: As they say, RPA/AI technology takes the robot out of the human. A distinct set of areas where machines are better than humans is emerging, and you will be better off moving to areas that emphasize your human strengths.

Do more with less: It is certain that organizations will start doing more with less. Automation and optimization will ensure that people are more productive with their machines. This is already evident in companies like Facebook, Apple, and Google, which bring in much higher revenue per employee.

An AI-first world is in sight. Intelligent machines will increasingly augment what you do. If you are not looking at ways to augment traditional work with machines and develop new processes, you will likely be left behind.

“Fortune favors the prepared mind” – Louis Pasteur