Ezhkova I.V.

Golden Ages of AI[1]

Now, when e-Housing, e-Commerce, e-Democracy and e-Homo occupy our minds, and when recent results in the biological, cognitive and neuro-sciences are reshaping our vision of the origin, evolution and role of intelligence, it is time to look back at the most important moments in the field of AI and to applaud AI, as the grandmother of many of those fields, for its everlasting inspirational beauty, generosity, openness and pragmatic rationality.

Ten Years Ago…

"To explain a phenomenon is not to demean it. An astronomical theory ... or a chemical model ... does not lessen the fascination of the heavens at night or the beauty of the unfolding of a flower. Knowing how we think will not make us less admiring of good thinking."

- Herbert A. Simon

While most sub-disciplines of computer science seem to be defined by their methods, AI seems rather to have been defined by the source of its inspiration. In 1950, Alan Turing published his famous paper "Computing Machinery and Intelligence", which can be metaphorically described as posing the problem of how a computer could distinguish a male from a female merely by asking questions. This problem has become known as the "Turing Test" and has served as an inspiring vision and philosophical charter ever since the beginning of AI.

At the same time, the mainstream of researchers in AI simply understood the task of AI as the engineering of useful methods (artifacts), without much reflection on whether these take account of human attributes or cognition. Even now, three historically important problems of AI are not fully linked within the field: the definition of the field, adequate tools, and real applications. This situation was discussed in depth ten years ago at the IJCAI’95 conference, which, as we can now see, was an important milestone in the development of AI. That conference finally opened the door of AI to many methods and techniques that had earlier not been so obviously acceptable to the field, such as classical statistics, pattern recognition, stochastic approaches, and so on. In view of this consolidating role of IJCAI’95, this paper offers a review of the conference, attempting to restore the atmosphere of that important event for AI. The review was originally written for the European Commission and was intended to underline the most important trends in AI as they appeared at IJCAI’95.

It was a time when, after the euphoria of the early 1980s over the first AI applications to expert systems, researchers began to understand that AI systems were very brittle in their narrow orientation to a priori well-defined domains, and that the problems which arose in AI were very often problems of the traditionally used methods themselves, not of AI. The IJCAI’95[2] conference demonstrated that, despite the separation of applications, Artificial Intelligence was trying to reinforce a foundation based on diverse schools of thought.

By this time the AI environment had ripened to produce favorable new methods and create valuable applications. This environment was prepared, first, by deep thinking about the human aspects of AI, and by rethinking how important these matters may be to the definition and purpose of the field and to AI's pragmatic construction and realization. Second, the use of supposedly "unacceptable" methods no longer needed to be defended; instead there was healthy research into what methods may be used for the development of AI (stochastic, random, local, incomplete and even unsatisfiable ones), outside the traditional systematic paradigm.

There was a blossoming of research using diverse techniques, illustrated at that moment by the rise of very promising researchers such as Stuart Russell (on the purposes of AI), Bart Selman (on the methods of AI) and Stephen Muggleton of the Oxford University Computing Laboratory (on concrete techniques), as well as many others. These were all developments that were expected to lead to better and more useful applications.

At the IJCAI’95 conference, the sessions on the applications of artificial intelligence were split off into a separate conference. At the same time, the papers in the main conference demonstrated that AI techniques were being incorporated more strongly and directly into systems development itself, such as software engineering, multimedia creation, and World Wide Web/Internet support.

However, looking back over the last decade, we must acknowledge that these developments did not remove the importance of cognition in AI. In this paper, by observing the situation in AI a decade ago, we inspect again the inspirational roots and pragmatic prospects of AI as they were seen at that moment, in order to pose once more the questions, still important for AI, about the equilibrium between intelligence, mind and pragmatic rationality.

How to Define Artificial Intelligence

I previously mentioned the disconnection among the definition of the field, the adequacy of tools, and applications that existed in AI at that time. This review now turns to how these were finally linked at IJCAI’95, as demonstrated by the papers and exhibits there. In the IJCAI’95 sessions, the subject of the definition of the field, including cognition and philosophical problems, showed up in many contexts. Most invited lectures and one of the three panels at the conference dealt with definition in some form.

One invited lecture of IJCAI’95 -- "Turing Test Considered Harmful" by Patrick Hayes of the Beckman Institute and Kenneth Ford of the University of West Florida -- argued that the Turing Test now leads the field to disown and reject its own success. Hayes contended that AI should not be defined as an imitation of human abilities. If we abandon the Turing Test vision, then "... the goal naturally shifts from making artificial super-humans which can replace us to making super-humanly intelligent artifacts which we can use to amplify and support our own cognitive abilities..." Hayes characterized the vision of "making artificial super-humans" as the initial goal of the pioneers of the field, Feigenbaum, McCarthy and Minsky. He then defined AI as "the engineering of cognition based on computational vision which runs through and informs all of cognitive science".

The pioneers of the field, such as Simon, McCarthy, Minsky and Feigenbaum, were in fact guided by the study of human intelligence, and sought a deeper understanding of human cognition in the hope that this effort would lead to better machines. Despite the mainstream engineering outlook mentioned above, a two-part panel at IJCAI’95 presented by John McCarthy of Stanford and Aaron Sloman of The University of Birmingham ("Philosophical Encounter, An Interactive Presentation of Some of the Key Philosophical Problems in AI and AI Problems in Philosophy") made clear that the cognitive orientation remained a force in the field.

According to McCarthy, "... human level artificial intelligence requires equipping a computer program with a philosophy. The program must have built into it a concept of what knowledge is and how it is obtained." He pointed out that he considered his main purpose at that time to be the development of a theory of "contexts", a term that arises as a necessary consequence when one wants to deal with knowledge properly. McCarthy also believes that "Mind has to be understood one feature at a time".

As a practical matter, Sloman offered similar views, for example, that "...'mind' is a cluster concept referring to an ill defined collection of features, rather than a single property that is either present or absent." Both Sloman and McCarthy also talked of "stances" which are levels of analysis one may make of a system, such as physical, intentional, design and function. Both assert that philosophy and AI still have much to offer each other.

Rationality and Bounded Optimality

Stuart Russell of the University of California at Berkeley, one of the rising and most promising workers in AI at that moment, surveyed different definitions of AI in his Computers and Thought Award lecture at IJCAI’95. He argued that the most relevant position for AI is "to create and generate intelligence as a general property of systems, rather than as a specific property of humans". Following the agent-based view of AI, he began with the idea that intelligence is strongly related to the capacity for successful behavior. He then argued that the "bounded optimality" predicate should be considered a useful formal definition of intelligence, defined as "the capacity to generate maximally successful behavior given the available information and computational resources." Loosely speaking, he considers an intelligent entity to be a rational agent whose actions make sense from the point of view of the information the agent possesses about its goal.
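Loosely following the formalization in Russell's work (the notation below is reconstructed for illustration, not quoted from his paper), bounded optimality selects the best program for a given machine, rather than the best action for a given situation:

\[
l_{\mathrm{opt}} \;=\; \operatorname*{arg\,max}_{l \,\in\, \mathcal{L}_M} \; V\bigl(\mathrm{Agent}(l, M),\, \mathbf{E},\, U\bigr)
\]

Here $\mathcal{L}_M$ is the set of programs that can run on machine $M$, $\mathrm{Agent}(l, M)$ is the agent function implemented by program $l$ on $M$, and $V(f, \mathbf{E}, U)$ is the expected value, measured by the utility function $U$, of running agent function $f$ in the class of environments $\mathbf{E}$. The machine's finite speed and memory are thus built into the definition: the optimization ranges only over programs that $M$ can actually execute.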

This position was well received by the majority of AI researchers. However, it is useful to remember that rationality is a property only of the agent's actions, not of the process by which the agent was designed, which leads to the observation that the only important thing is what the agent does, not what the agent thinks, or even whether it thinks at all. Nevertheless, Russell represents a very pragmatic view of AI, one which serves to reduce the gap between theory and practice. From this viewpoint, Russell, together with co-authors, presented a further four talks in the technical sessions of the same IJCAI conference. Two of these ("Local Learning in Probabilistic Networks with Hidden Variables" and "The BATmobile: Towards a Bayesian Automated Taxi") were among the more interesting papers of the technical sessions of the conference.

Intuition, Insight and Inspiration

Let us turn now to the Research Excellence Award paper given by Herbert A. Simon, one of the most established and respected workers in the field. Simon dealt with how to decide whether AI can perform three admirable human thought functions: intuition, insight and inspiration. He argued that theories written in AI list-processing languages should be tested in exactly the same way as theories in the analytical mathematical languages of the physical sciences: by judging the accuracy of the theory through comparing the behaviors it predicts with empirical experience. He then gave operational definitions of intuition, insight and inspiration, and showed that existing AI programs already exhibit them.

The program EPAM demonstrates intuition (the rapid discovery of a new idea without being able to trace a path of logic to reach it); a program combining EPAM with the General Problem Solver would demonstrate insight (finding a new correct solution to a problem after a period of prior failures); and a program such as BACON demonstrates inspiration (the discovery of new knowledge). Evaluating such seemingly "human" forms of AI is thus not an issue of philosophy. Simon then concludes, and I concur, "To explain a phenomenon is not to demean it. An astronomical theory ... or a chemical model ... does not lessen the fascination of the heavens at night or the beauty of the unfolding of a flower. Knowing how we think will not make us less admiring of good thinking."
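To make the BACON example concrete, here is a toy sketch in Python of the kind of invariant-hunting heuristic that BACON embodied; it is illustrative only, not Simon and Langley's actual program. It brute-forces small integer exponents p, q and tests whether x^p * y^q stays constant over the data, which is already enough to rediscover Kepler's third law from planetary observations.

```python
# Toy sketch of a BACON-style discovery step (illustrative only, not
# Simon and Langley's actual program). BACON hunted for invariants by
# combining covarying quantities; here we brute-force small integer
# exponents p, q and test whether x^p * y^q is constant over the data.

def is_constant(values, tol=0.01):
    """True if all values agree to within a relative tolerance."""
    mean = sum(values) / len(values)
    return all(abs(v - mean) <= tol * abs(mean) for v in values)

def find_invariant(x, y, max_exp=3):
    """Search small integer exponents for a constant product x^p * y^q."""
    for p in range(-max_exp, max_exp + 1):
        for q in range(-max_exp, max_exp + 1):
            if p == 0 and q == 0:
                continue
            if is_constant([a**p * b**q for a, b in zip(x, y)]):
                return p, q
    return None

# Kepler's third law from (distance, period) data for the inner planets:
D = [0.387, 0.723, 1.000, 1.524]   # semi-major axis (AU)
P = [0.241, 0.615, 1.000, 1.881]   # orbital period (years)
print(find_invariant(D, P))        # -> (-3, 2): P^2 / D^3 is constant
```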

Tools: Towards Integration

Traditional systematic approaches in AI include search and inference techniques, for example as used for planning and reasoning. It is important to remember that the main limit of systematic methods is that a complete search of a solution space can be, as a practical matter, impossible within a limited (useful) time, owing to the vast number of combinations that must be examined.
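A minimal sketch of such a complete, systematic procedure makes the combinatorial limit concrete; the encoding below (clauses as lists of signed variable indices) is my own illustration, not taken from any conference paper. The search is guaranteed to find an answer, but in the worst case it visits all 2^n assignments.

```python
# A minimal systematic (complete) backtracking search for Boolean
# satisfiability. Clauses are lists of signed variable indices,
# e.g. (x1 OR NOT x2) is encoded as [1, -2].

def satisfiable(clauses, assignment, n):
    """Try assignments of n variables exhaustively; complete but exponential."""
    if len(assignment) == n:
        # Leaf: every clause must contain at least one satisfied literal.
        return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
                   for clause in clauses)
    # Branch on the next unassigned variable: up to 2^n leaves overall.
    return (satisfiable(clauses, assignment + [True], n) or
            satisfiable(clauses, assignment + [False], n))

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
print(satisfiable([[1, 2], [-1, 2], [-2, 3]], [], 3))  # -> True
```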

Alternative methods that have been explored include hill climbing, genetic algorithms and connectionist methods. To evaluate the direction of research and the methods now acceptable in AI, let us return once more to IJCAI’95, which discussed how recent insights into computationally hard problems had led to the development of new stochastic search methods.

The panel "Systematic Versus Stochastic Constraint Satisfaction" examined the direction of research and the methods traditionally used in AI. Edward Tsang of the Department of Computer Science of the University of Essex argued that stochastic methods have the advantage that they can be terminated in a finite, predetermined time, are enhanced by newer hardware, and have the potential to tackle constraint optimization problems.

Bart Selman of AT&T Bell Laboratories, in his interesting invited talk ("Stochastic Search and Phase Transitions: AI Meets Physics"), reported that in the study of Boolean satisfiability he used an analogy with phase transitions in physics and observed that, as the ratio of clauses to variables grows, there is a phase transition from mostly satisfiable to mostly unsatisfiable instances. He therefore concluded that there is an advantage in using a combination of stochastic (incomplete) and systematic (complete) methods: a mixture of one stochastic method (for model finding) and one systematic method (for theorem proving).
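As a companion to the systematic sketch above, here is a minimal GSAT-style stochastic local search for satisfiability, in the spirit of Selman's work on such methods (an illustrative reconstruction, not his original code). It is incomplete: it can miss a solution and can never prove unsatisfiability, but it always stops within a predetermined number of flips, which is exactly the trade-off under discussion. Note that it is simply greedy hill climbing on the number of satisfied clauses, with random restarts.

```python
import random

# GSAT-style stochastic local search (illustrative sketch). Clauses use
# the same signed-index encoding as the systematic sketch above.

def num_satisfied(clauses, a):
    return sum(any((lit > 0) == a[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def gsat(clauses, n, max_tries=10, max_flips=100):
    for _ in range(max_tries):
        a = [random.choice([True, False]) for _ in range(n)]  # random restart
        for _ in range(max_flips):
            if num_satisfied(clauses, a) == len(clauses):
                return a                        # model found
            def score(v):                       # clauses satisfied if v flips
                a[v] = not a[v]
                s = num_satisfied(clauses, a)
                a[v] = not a[v]
                return s
            best = max(range(n), key=score)     # greedy hill-climbing step
            a[best] = not a[best]
    return None  # gave up: incomplete search, so this is NOT a proof of UNSAT

print(gsat([[1, 2], [-1, 2], [-2, 3]], 3))
```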

Some of the considerations that must be taken into account in choosing stochastic methods include reliability, so-called "near optimality", and the risk of missing a solution. The main message of Selman's discussion was that stochastic methods now offer an available and useful alternative to traditional systematic methods in AI.

Collecting Data and Knowledge Bases

The question of how to collect data and knowledge, and which technique should be used in each case, is still one of the most practical questions in AI. The panel "Very Large Knowledge Bases - Architecture vs. Engineering" focused on comparing the imperatives for collecting knowledge using different techniques. It considered five different approaches, the most favorable at that time.

First, regarding large lexicons in machine translation and natural-language (NL) projects, J. Carbonell of Carnegie Mellon University said that the focus should be on building learning architectures rather than on building truly massive knowledge bases. Second, regarding the use of parallel supercomputers in support of massive knowledge/data bases, J. Hendler of the University of Maryland focused on computational architectures that would support massive knowledge bases and hybrid knowledge/data bases and allow rapid access to them.