In 1997, when the supercomputer Deep Blue finally achieved its revenge victory over Garry Kasparov, a long-standing dream of many A.I. enthusiasts was fulfilled. For years it had been said that the day computers mastered chess would be the day the artificial intelligence dream was finally realized. The event was widely publicized as a triumph of artificial intelligence, an industry fuelled by optimism as much as by a mechanistic philosophy of human intelligence itself.
The makers of Deep Blue, IBM, however, declared on their website (1) that no formula exists for human intuition and that Deep Blue in no way employed artificial intelligence. No psychology was at work, and no element of learning was involved. Deep Blue was simply thought to excel at storing a huge number of chess moves and could very quickly work out the correct move in any given situation. Deep Blue may therefore have displayed some intelligent behavior, but can this be considered a simulation of human intelligence?
Without going into any deep philosophy, what was programmed into Deep Blue's circuits was a huge repertoire of legal moves, the mathematical possibilities faced by any chess player, human or computer, at any given point in a game. This was the encoding of human experience itself: the logical choices presented to a chess player trying to pick the best possible move at a given point in the game. In real terms, then, it was a game contested between Garry Kasparov and a large group of computer programmers who had assembled a huge database of possible moves in Deep Blue and laid down strict rules for picking the best one under any given circumstance. No matter which way you look at it, it was human intelligence versus human intelligence, reinforcing the common-sense notion that machines cannot think.
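The kind of rule-based move selection described above can be illustrated with a minimal sketch. This is not IBM's actual code; the game tree and its scores are invented for illustration. The idea is simply to search ahead exhaustively and pick the move whose worst-case outcome is best (the classic minimax rule), which is "strict rules" rather than intuition:

```python
def minimax(node, maximizing):
    """Return the best achievable score from this position.

    A leaf (an int) is a static evaluation; an inner list is a
    position whose children are the moves available from it.
    """
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hypothetical game tree: each inner list is a position,
# each integer a terminal evaluation from the first player's view.
tree = [
    [3, [5, 1]],   # outcomes reachable after move A
    [[6, 2], 9],   # outcomes reachable after move B
]

# After our move the opponent moves, so the opponent minimizes next.
best = max(range(len(tree)), key=lambda i: minimax(tree[i], False))
print("best move index:", best)  # move B guarantees at least 6; move A only 3
```

Deep Blue's real search was vastly more elaborate (hand-tuned evaluation, opening books, special-purpose hardware), but the point stands: every choice reduces to rules and lookups that its programmers put there.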
The Chinese Room problem was devised by the philosopher John Searle to probe the cognitive element in any form of so-called artificial intelligence. Does simulating a mind actually imply having a mind? In this thought experiment, an observer poses questions in Chinese to an entity behind a closed door, on papers pushed under the door, and the supposedly intelligent machine replies by returning appropriate answers in Chinese the same way. Searle uses the squiggles of Chinese as a stand-in for a computer programming language, raising the vital point that being able to simulate an understanding of Chinese does not imply real understanding. Intentionality, the genuine intention to carry out a task, lies at the heart of Searle's argument that true artificial intelligence cannot be said to exist.
Perhaps, therefore, at the heart of artificial intelligence lies the conscious human desire to replicate a mechanistic model of our own mode of thinking, or, in a more futuristic sense, to extend our capacity for thought far beyond our natural abilities. At our current level of technology we have robots that can play table tennis, defeat grandmasters at chess, and even fill customer service roles in reasonably convincing fashion. But perhaps nothing about what we call artificial intelligence is truly 'artificial' in these actions. The robots are designed to mimic a particular set of functions, without any real intent behind them, save our own human intention to let a machine imitate a particular aspect of our behavior.
Even with a tremendous surge in computing power, we have not yet been able to mimic the basic processes of human thought, and we are far from instilling any element of intent in our programmed robots.
Perhaps we should drop the term 'artificial intelligence' for the foreseeable future and substitute 'simulation of intelligent behavior'.