Artificial Intelligence (AI): Part 1: Introduction
Artificial intelligence (AI) is a product of the technological age. Just a hundred years ago it remained firmly in the realm of fantasy. We believe that the quest for super-intelligence and immortality has lain at the heart of humanity since the Fall, and the discovery and possible applications for AI open up new vistas for man to pursue these things with no need for a benevolent Creator God. Therefore it’s important that we track the progress of AI.
The emergence of AI in the last 70 years or so is a sign that the end of the age is fast approaching, especially as several of the main players associated with the end of the age (in Revelation) are endowed with super-human abilities. According to Genesis 3, it was the promise of becoming like God that caused the Fall, and that same god-like status is the aim of scientists on the cutting edge today.
In this article we will look at the landmarks in the progress of AI. The full version of this information, including video clips, can be found at http://www.bbc.co.uk/timelines/zq376fr .
The promise of intelligence
The idea that one day computers would be able to think like us took root back in the 1940s. Until the last 25 years or so, however, the results of experiments were disappointing. The pioneers' dreams only became a possibility when new approaches were coupled with enormous advances in technology.
1943: World War 2 triggers fresh thinking
The Second World War brought together scientists from many disciplines, including neuroscience and computing. Alan Turing, the brilliant mathematical mind behind the cracking of the Enigma code, and neurologist Grey Walter both tackled the challenges of intelligent machines. Walter went on to build some of the first ever robots, while Turing went on to propose the 'Turing Test', which set the bar for an intelligent machine: a computer that could fool someone into thinking they were talking to another person.
1950: Science fiction steers the conversation
It was Isaac Asimov who paved the way, with the publication of 'I, Robot', a collection of popular science fiction short stories. In them he imagined the future of machine intelligence and inspired a generation of roboticists and scientists. He also imagined developments that seem remarkably prescient – such as a computer capable of storing all human knowledge that anyone can ask questions of (Google is the modern equivalent).
1956: A ‘top-down’ approach
It was John McCarthy, a computer scientist, who coined the term 'artificial intelligence' for a conference at Dartmouth College. At the conference, top scientists debated how to tackle AI. Some, such as Marvin Minsky, favoured a top-down approach: pre-programming a computer with the rules that govern human behaviour. He went on to found the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT).
Others preferred a bottom-up approach, such as neural networks that simulated brain cells and learned new behaviours. Over time Minsky's views dominated, and together with McCarthy he won substantial funding from the US government, which hoped AI might give it the upper hand in the Cold War. The mission statement of the conference was:
Every aspect of learning or other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it
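To give a flavour of the 'bottom-up' idea that lost out at Dartmouth, here is a minimal sketch in Python (purely illustrative; it is not any historical system). A single artificial 'neuron' learns a simple behaviour, the logical AND, from examples rather than being given the rules in advance:

```python
# A minimal 'bottom-up' sketch: a single artificial neuron (perceptron)
# learns the logical AND behaviour from examples, rather than being
# pre-programmed with rules. Illustrative only.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0           # the neuron starts knowing nothing
learning_rate = 0.1

for epoch in range(20):                # repeated exposure to the examples
    for (x1, x2), target in examples:
        output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - output        # how wrong was the guess?
        w1 += learning_rate * error * x1
        w2 += learning_rate * error * x2
        bias += learning_rate * error

# After training, the learned weights reproduce the behaviour.
for (x1, x2), _ in examples:
    print((x1, x2), "->", 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0)
```

No rule for AND is ever written down: the behaviour emerges from adjusting the weights in response to errors, which is the essence of the approach that would eventually prevail.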
1968: 2001: A Space Odyssey – imagining where AI could lead
The science fiction film 2001: A Space Odyssey featured an intelligent computer, HAL 9000, which was deemed fool-proof and incapable of error, as well as being able to experience emotion. Through this film the public were given a vision of a world in which super-intelligent computers were operating, and of the fear that would be unleashed if they malfunctioned. Minsky advised and inspired Kubrick in the making of the film. Two years later, he was quoted in Life magazine (1970) as saying:
In from three to eight years we will have a machine with the general intelligence of an average human being
1969: Tough problems to crack
Shakey the Robot, developed at the Stanford Research Institute in the US, hardly lived up to the predictions of Minsky! Its movements were painfully slow and it was easily confused. Indeed, despite massive funding, AI had very little to show for it.
The state of AI research in the UK was damned in a report on the health of the field by Professor Sir James Lighthill, which resulted in funding being slashed. He said:
In no part of the field have discoveries made so far produced the major impact that was promised
1981: A solution for big business
By the 1980s, however, AI's commercial value was starting to be realised, and new investment followed. The new machines, or 'expert systems', were narrowly focused and programmed for one very particular problem. The first successful commercial expert system, known as R1, began operation at the Digital Equipment Corporation, helping to configure orders for new computer systems. By 1986 it was saving the company an estimated US$40 million a year.
1990: Back to nature for 'bottom-up' inspiration
Expert systems could not imitate biology, but AI scientist Rodney Brooks (who replaced Minsky as director at MIT) was inspired by advances in neuroscience. He argued that the top-down approach of pre-programming a computer with the rules of intelligent behaviour was wrong, and he helped drive a revival of the bottom-up approach to AI, including the long-unfashionable field of neural networks. He set out his case in a paper entitled 'Elephants Don't Play Chess'.
1997: Man versus Machine: Fight of the 20th Century
A super-computer named Deep Blue took on world chess champion Garry Kasparov in 1997 – and won. The machine was far superior to Kasparov in raw calculation, capable of evaluating up to 200 million positions a second. But could it think strategically? Yes, it could – to the extent that Kasparov believed a human had to be behind the controls. Deep Blue was the most developed 'top-down' machine yet, and some hailed the chess win as the moment AI came of age. Others believed it simply showed brute force at work on a highly specialised problem with clear rules.
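To illustrate what 'brute force at work on a problem with clear rules' means, here is a minimal sketch in Python. It is emphatically not Deep Blue's program, and it plays a toy stick-taking game rather than chess, but the principle is the same: examine every possible continuation before choosing a move.

```python
# Brute-force game search in miniature. Two players alternately remove
# 1-3 sticks from a pile; whoever takes the last stick wins. The
# computer simply explores every possible continuation.

def best_move(sticks):
    """Return (move, True if the player to move can force a win)."""
    for take in (1, 2, 3):
        if take == sticks:
            return take, True                 # taking the last stick wins outright
        if take < sticks:
            _, opponent_wins = best_move(sticks - take)
            if not opponent_wins:             # leave the opponent a losing position
                return take, True
    return 1, False                           # every continuation loses

for pile in range(1, 9):
    move, winning = best_move(pile)
    status = f"take {move}" if winning else "no winning move"
    print(f"{pile} sticks: {status}")
```

Chess is vastly larger, of course, which is why Deep Blue needed purpose-built hardware evaluating up to 200 million positions a second; but the method is exhaustive search under fixed rules, not anything resembling human intuition.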
2002: The first robot for the home
Rodney Brooks' spin-off company, iRobot, created the first commercially successful robot for the home: an autonomous vacuum cleaner called Roomba. Over 10 million units have now been sold globally. It was a far cry from Shakey the Robot! Roomba had relatively simple sensors and minimal processing power, yet it had enough intelligence to clean a home reliably and efficiently. After Roomba came other autonomous robots, each focused on specific tasks.
2005: War machines
AI had failed to deliver during the Cold War, but the US military now began to invest in autonomous robots. BigDog, made by Boston Dynamics, was one of the first. It was designed as a robotic pack animal for terrain too rough for conventional vehicles, but has never seen active service. iRobot also became a big player in the field: its bomb-disposal robot, PackBot, marries user control with intelligent capabilities such as explosives 'sniffing'. Over 2,000 PackBots have been deployed in Iraq and Afghanistan.
2008: Starting to crack the big problems
In November 2008, a Google app with speech recognition appeared on Apple's iPhone. Whilst at first it was highly inaccurate, by 2015 its word error rate was just 8%. The new approach was to use thousands of powerful computers running parallel neural networks (a neural network is a system of programs and data structures that approximates the operation of the human brain), learning to spot patterns in the vast volumes of data streaming in from Google's many users. Google's co-founder Larry Page had said this back in 2002:
Artificial intelligence would be the ultimate version of Google. It would understand exactly what you wanted and it would give you the right thing
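The 8% figure quoted above is a 'word error rate', the standard yardstick for speech recognition: roughly, the minimum number of word substitutions, insertions and deletions needed to turn the recogniser's output into what was actually said, divided by the number of words spoken. A small illustrative sketch (not Google's code) follows:

```python
# Word error rate (WER): the measure behind the '8%' figure. It counts
# the minimum number of word substitutions, insertions and deletions
# needed to turn the recogniser's output into the reference transcript,
# divided by the number of words in the reference.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Classic edit-distance table, computed over words instead of letters.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# Two wrong words out of five spoken gives a WER of 0.4 (40%).
print(word_error_rate("switch off the kitchen lights",
                      "switch of the kitchen light"))
```

An 8% error rate means the system gets roughly 92 words in every 100 right, which is why dictation on a phone went from a novelty to something genuinely usable.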
2010: Dance bots
At the same time as massive mainframes were changing the way AI was done, new technology meant smaller computers could also pack a bigger punch. These new computers enabled humanoid robots, like the NAO robot. At Shanghai’s 2010 World Expo, some of the extraordinary capabilities of these robots went on display, as 20 of them danced in perfect harmony for eight minutes.
2011: Man versus Machine: Fight of the 21st Century
In 2011, IBM's Watson took on the human brain on the US quiz show Jeopardy!. This was a far greater challenge than the chess match: Watson had to answer riddles and complex questions. Its makers used a myriad of AI techniques, including neural networks, and trained the machine for three years to recognise patterns in questions and answers. Watson beat its two opponents – the best performers of all time on the show. It was hailed as a triumph for AI.
Today, Watson is used in medicine. It mines vast sets of data to find facts relevant to a patient’s history and makes recommendations to doctors.
2014: Are machines intelligent now?
A chatbot called Eugene Goostman finally passed the Turing test of machine intelligence, although few saw it as a ground-breaking moment. More significant are developments elsewhere: Google has invested billions of dollars in driverless cars, which it is now legal to take on the road in four American states, and Skype has launched real-time voice translation. Intelligent machines are fast becoming an everyday reality that will change all of our lives.
2016: Where next?
The BBC timeline ends at this point, but we have added a couple of up-to-the-moment developments.
Firstly, in January 2016, Science magazine ran this article, by Adrian Cho:
Huge leap forward: Computer that mimics human brain beats professional at game of Go
Go is an ancient Eastern board game that is so computationally demanding that, even a decade ago, some researchers thought a computer would never defeat a human expert. The number of possible moves is astronomical: after just five moves the board can be in any of more than 5 trillion arrangements, and in total the number of different possible arrangements stretches beyond 10 to the power of 100 – far beyond the reach of a computer playing by brute-force computation of all possible outcomes.
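Those figures are easy to check with a few lines of arithmetic. Go is played on a 19 x 19 board of 361 points, so, counting ordered sequences of moves and ignoring captures, the number of different five-move openings is 361 x 360 x 359 x 358 x 357:

```python
# A quick check of the figures quoted above. Go is played on a 19 x 19
# board, so there are 361 points. Counting ordered sequences of moves
# (and ignoring captures), the number of different five-move openings is
# 361 * 360 * 359 * 358 * 357.

points = 19 * 19           # 361 intersections on the board
sequences = 1
for move in range(5):
    sequences *= points - move

print(f"{sequences:,}")    # 5,962,870,725,840 -- indeed more than 5 trillion
```

And that is only five moves into a game that typically runs to a couple of hundred, which is how the total climbs beyond 10 to the power of 100.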
It is only because the computer, AlphaGo, was programmed to rely on 'deep neural networks', which have the capacity to learn, that it succeeded. Its 'machine learning' tools enable it to teach itself and to think more like humans do. The researchers fed it a database of 30 million board configurations and the subsequent plays made by expert players, then had the computer play against itself over and over again, learning from experience to tell a better move from a poorer one. One researcher said:
The way we've developed the system, it plays more like a human does
At the time of writing this article (beginning of March 2016), AlphaGo had beaten the world champion, who was simply incredulous that such a thing was possible.
Secondly, the most recent generations of iPhones have fingerprint recognition, doing away with the need for a four-digit PIN. In February 2016, HSBC announced that it was launching voice recognition and touch security services in the UK, in a big leap towards the introduction of biometric banking. Customers will no longer have to remember a password or memorable places and dates to access their accounts. This will be implemented within weeks.
Francesca McDonagh, HSBC UK's head of retail banking and wealth management, said:
The launch of voice and touch ID makes it even quicker and easier for customers to access their bank account, using the most secure form of password technology – the body
Customers who want to use the service will have to enrol their ‘voice print’ and the bank says the system will still work when someone is ill.
Man’s knowledge of AI is increasing at an incredible rate, and his ability to create super-human machines is astonishing. Keep alert to other current developments as news breaks.