Origin and Significance of Artificial Intelligence

Artificial Intelligence is the pursuit of metaphysics by other means.

Christopher Longuet-Higgins



Origin

The phrase “Artificial Intelligence” was first introduced by the American scientist John McCarthy, a pioneer of artificial intelligence and computer science, at a workshop at Dartmouth College in 1956. But doubts about the phrase have grown since then, and the earlier term it ousted, the English code-breaker and computer scientist Donald Michie's 'Machine Intelligence', is making a comeback: it will be revived when the important journal Nature Machine Intelligence begins publication in 2019. That will be a badge of scientific respectability for a sometimes dubious field, where the word 'artificial' has come to carry overtones of trickery.

McCarthy said firmly that AI should be chiefly about getting computers to do things humans do easily and without thinking, such as seeing and talking, driving and manipulating objects, as well as planning our everyday lives. It should not, he said, be primarily about things that only a few people do well, such as playing chess or Go, or doing long division in their heads very fast, a feat calculators already perform. But Michie thought chess was a key capacity of the human mind and that it should be at the core of AI. And the public triumphs of AI, such as beating Kasparov, the then world champion, at chess, and more recently winning at world-championship-level Go, have been taken as huge advances by those keen to show the inexorable advance of AI. But I shall take McCarthy's version as the working definition of AI.

There can, then, be disputes about exactly what AI covers, as we shall see. The history is important, because although AI now seems to be everywhere, at least according to the newspapers and media, and is pressing upon every human skill, it has actually been around for a long time and has lapped around us very slowly. Here is a dramatic example: the road sign below was at the end of the driveway of the Stanford AI laboratory when I was there in the early 1970s.

It is important to see how long AI has been gestating, slowly but surely, even though it has been a bumpy ride with major setbacks. In 1972 and 1973, for example, AI suffered two major blows. The first was a book called What Computers Can’t Do, by the philosopher Hubert Dreyfus. He called AI a kind of alchemy (forgetting for a moment that alchemy, an early form of chemistry which posited that metals could be transformed into one another, has actually turned out to be true in modern times with the discovery of nuclear transmutation!). Dreyfus’s central point was that humans grow up, learning as they do so, and that only creatures that did that could really understand as we do; that is to say, be true AI. Dreyfus’s criticisms were rejected at the time by AI researchers, but they actually had an effect on researchers' work and their understanding of what they were doing; he helped rejuvenate interest in machine learning as central to the AI project.


The following year, Sir James Lighthill, a distinguished control engineer, was asked by the British government to examine the prospects for AI. He produced a damning report, the effect of which was to shut down research support for AI in the UK for many years, though some work continued under other names such as ‘Intelligent Knowledge Based Systems’. Lighthill’s arguments about what counted as AI were almost all misconceived, as became clear years later. He himself had worked on automated landing systems for aircraft, a great technical success, and work we could now easily consider to be AI, since it simulates a uniquely human activity and skill.

Lighthill considered that trying to model human psychology with computers was possible, but not AI’s self-imposed task of simply simulating human performances that required skill and knowledge. He was plainly wrong, of course: the existence of car-building robots, automated cars and effective machine translation on the web, as well as many AI achievements we now take for granted, all show that. Although a philosopher and an engineer respectively, Dreyfus and Lighthill had something in common: both saw that the AI project meant that computers had to have knowledge of the world in order to function. But for them, knowledge could not simply be poured into a machine as if from a hopper. AI researchers also recognized this need, yet believed such knowledge could be coded for a machine, though they disagreed about how. Dreyfus thought you had to grow up and learn as we do to get such knowledge, while Lighthill intuited a form of what AI researchers would come to call the ‘frame problem’, and he thought it insoluble.

The frame problem, put most simply, is that parts of the world around us ‘update’ themselves all the time, depending on what kind of entity they are: if you turn a switch on, it stays on until you turn it off, but if it rains now, it very likely won’t be raining in an hour’s time; at some point it will stop. We all know this, but how is a computer to know the difference: that one kind of fact true now will stay true, while another will not be true some hours from now? We all learn as we grow up how the various bits of the world behave, but can a computer know all that we know, so as to function as we do? At a key point in the film Blade Runner, a synthetic person, otherwise perfect, is exposed as such because it doesn’t know that when a tortoise is turned over, it can’t right itself. The frame problem is serious and cannot be definitively solved, only dealt with by degrees.

There have been many attempts, in AI and in computing generally, to prove that certain things cannot be done. Yet in almost all cases these proofs turn out to be not false, but useless, because solutions can be engineered to get round them and allow AI to proceed on its way. According to legend, Galileo, when before the Inquisition, where he was told firmly that the Earth could not possibly move, muttered under his breath the words ‘Eppur si muove’: ‘and yet it moves’! Marvin Minsky at MIT, one of the great AI pioneers, once said that, yes, people ask about AI's progress and it is sometimes hard to spot, but when you come back ten years later you are always astonished at how far it has moved.

The ghosts haunting AI over the years, telling its researchers what they cannot do, recall the ‘proofs’ given that machine translation (MT) was impossible. MT is another computer simulation of a very human skill that we could now consider a form of AI. In 1960, the Israeli philosopher Yehoshua Bar-Hillel argued that MT was impossible, because to translate language a system would have to have an enormous amount of world knowledge. His famous example was ‘the box was in the pen’, where he argued that a computer would have to know that a box could fit into a playpen but not into a writing pen if it was to get the right sense of the word ‘pen’, and so translate the sentence out of English into some language where those were quite different words. This corresponds almost exactly to the frame argument mounted against AI. Again, the everyday availability now of free MT of reasonable quality, from sources like Google Translate, shows how wrong Bar-Hillel was, though he was very influential at the time and widely believed.
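To make Bar-Hillel's point concrete, here is a minimal sketch, in Python, of the kind of size knowledge a translator would need in order to pick the right sense of 'pen'. Everything in it (the two senses, the rough size figures, the function choose_pen_sense) is an illustrative assumption of mine, not a reconstruction of any real MT system.

    # Toy illustration of Bar-Hillel's 'pen' problem: choosing a word
    # sense requires world knowledge. The senses and sizes below are
    # made-up assumptions for illustration only.

    # Rough typical sizes in metres.
    TYPICAL_SIZE = {
        "box": 0.4,              # a child's toy box
        "pen (writing)": 0.15,   # a writing pen
        "pen (playpen)": 1.5,    # a playpen
    }

    def choose_pen_sense(contained: str) -> str:
        """For a sentence 'the X was in the pen', keep only the senses
        of 'pen' large enough to physically contain X."""
        senses = ["pen (writing)", "pen (playpen)"]
        feasible = [s for s in senses
                    if TYPICAL_SIZE[s] > TYPICAL_SIZE[contained]]
        # With no size facts at all, there is no basis for choosing.
        return feasible[0] if feasible else "unresolved"

    print(choose_pen_sense("box"))  # -> pen (playpen)

The point of the sketch is how arbitrary the required facts are: nothing in the sentence itself says how big boxes, playpens and writing pens are, which is exactly the world-knowledge burden Bar-Hillel thought fatal to MT.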
