By J. Fernando Vega Riveros
I’ll jump-start our written dialogs by trying to find a definition of intelligence from the artificial intelligence (AI) point of view, so I will present a few definitions found in several textbooks on AI or related subjects.
Winston [1] jumps right in with a definition of artificial intelligence, without much regard for natural intelligence, by defining AI as “the study of the computations that make it possible to perceive, reason, and act.” He goes on to distinguish AI from psychology because of AI’s greater emphasis on computation, and then contrasts AI with computer science because of the emphasis on perception, reasoning, and action. This differentiation seems to establish AI as a middle ground between psychology and computer science, or as a third territory which is neither, and apparently sidesteps disciplinary disputes. Not much further into the text, Winston titles a section “Artificial Intelligence Helps Us to Become More Intelligent”, which suggests his position of AI as a complement to human intelligence: AI as human intelligence-inspired science and technology. The diversity of topics and approaches in the book seems to reflect that vision. Topics range from traditional problem-solving approaches based on graph search and on logic-based knowledge representation and processing, which take up the largest part of the book, to the connectionist and evolutionary models added toward the end.
Jackson [2] defines artificial intelligence as “the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior – understanding language, learning, reasoning, solving problems, and so on.” Jackson’s definition states a position committed to computer science, which at the outset contrasts with Winston’s middle-ground position. The question that comes to mind is whether they hold different philosophical positions, or whether this is just a way to avoid clashes with subjects like psychology in which they may not be fully versed. However, since both resort to terms with strong ties to psychology and cognitive science, such as perception, language understanding, learning, and even reasoning, the question arises: what do they mean by them?
It has been apparent to me over the years that the definition of artificial intelligence is taken lightly in the introductory chapters of some textbooks on the subject, disregarding the philosophical implications of such definitions. Giarratano and Riley [3] present what they call a popular definition of AI as “making computers think like humans”. The quotation marks are not mine but appear in their text, leading me to ask: why the quotation marks? Is it because that is not their definition? Since they give no bibliographic reference, I wonder whose definition it might be. Or is it that they do not want to commit to that definition? They claim this definition has its roots in the Turing test, in which a human interrogator poses questions to a human and a machine, trying to distinguish between them based on the answers both provide. If the interrogator cannot tell the human apart from the machine, the machine passes the test (Turing proposed this as the “imitation game” [5]). One could argue that the Turing test only tests that the machine produces answers indistinguishable from a human’s, but that would not imply that the machine thinks like a human; it acts like a human. After all, do we really know how humans think? This is why I say that the definitions of AI are taken lightly.

And the gross interpretations and misunderstandings in this chapter continue to pile up. Giarratano and Riley claim that Steven [sic] Weizenbaum’s program Eliza passed the Turing test in 1967, something hardly defensible taking into account what this software did, the techniques it used, and even the statements of its author [6]. Besides, the Turing test setup has been greatly simplified in the Loebner contest, and many people claim that several programs have passed the Turing test simply because they have won this contest. This is not so; the Loebner contest’s bronze medal is awarded each year to the “most human-like computer” [7], which is not the same as passing the Turing test. Only the gold medal would be given to a machine passing the test. Has any program received the gold medal yet? I could not find any specific information about that, and even then, the contest is strongly criticized for lack of scientific rigor.
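To see why Eliza’s supposed pass is hardly defensible, it helps to recall the kind of technique Weizenbaum described: keyword matching against a script of patterns, with the user’s own words reflected back in canned templates. The sketch below is my own minimal illustration of that style of processing; the patterns and names are hypothetical, not Weizenbaum’s actual DOCTOR script.

```python
import re

# A few illustrative keyword-and-template rules in the spirit of Eliza
# (hypothetical examples, not Weizenbaum's actual keyword list).
RULES = [
    (r".*\bI am (.+)", "How long have you been {0}?"),
    (r".*\bI feel (.+)", "Why do you feel {0}?"),
    (r".*\bmy (.+)", "Tell me more about your {0}."),
]

def respond(utterance):
    """Return a canned reflection for the first matching rule."""
    for pattern, template in RULES:
        m = re.match(pattern, utterance, re.IGNORECASE)
        if m:
            # Echo the user's own words back inside a template.
            return template.format(m.group(1).rstrip(".!?"))
    return "Please go on."  # content-free default when no keyword matches

print(respond("I am worried about my exams"))
# → How long have you been worried about my exams?
```

A handful of such rules can sustain a surprisingly human-sounding exchange without anything one would want to call understanding, which is precisely why winning a restricted imitation contest says little about thinking.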
The prologue in [8] states a more cautious position by defining Distributed Artificial Intelligence (DAI) as “the study, construction, and application of multiagent systems, that is, systems in which several interacting intelligent agents pursue some set of goals or perform some set of tasks.” Digging further into the prologue, we find a definition of an agent as a computational entity that perceives and acts upon the environment, and is autonomous in the sense that its behavior depends partially on its own experience. Worth noting is the meaning of “intelligent”, which they take as the ability of the agents to pursue their goals and execute their tasks in such a way as to optimize some performance measures. What captures the attention in the subject of DAI are the topics related to the social aspects that may arise from the interaction among agents, where there is no centralized control, synchronization, or unique designer. Communication becomes a necessity, encompassing protocols (social communication rules, agent etiquette?), shared knowledge representations (meanings?), and organization. One expects a minimal set of agreements for DAI to function, but emergent behaviors beyond the direct control of the designers are expected. This possibility opens the door for some interesting discussions…
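The agent definition above – a computational entity that perceives and acts on its environment, whose behavior depends partially on its own experience – can be sketched in a few lines. This is my own toy illustration using a simple running reward estimate, not anything from Weiss’s book; all names and the environment are hypothetical.

```python
class Agent:
    """A minimal agent: perceives, acts, and adapts from its own experience."""

    def __init__(self, actions):
        self.actions = actions
        self.value = {a: 0.0 for a in actions}  # learned payoff estimate per action

    def act(self, percept):
        # Behavior depends partly on experience: pick the action with the
        # best observed payoff so far (percept is ignored in this toy example).
        return max(self.actions, key=lambda a: self.value[a])

    def learn(self, action, reward):
        # Move the estimate halfway toward the observed reward.
        self.value[action] += 0.5 * (reward - self.value[action])

agent = Agent(["left", "right"])
for _ in range(10):
    a = agent.act(percept=None)
    # Hypothetical environment: it rewards "right" and mildly punishes "left".
    reward = 1.0 if a == "right" else -0.1
    agent.learn(a, reward)

print(agent.act(percept=None))  # → right
```

Even this trivial loop shows autonomy in the book’s sense: after a few interactions the agent’s choices are shaped by its own history rather than by its designer alone. The interesting DAI questions begin when many such agents must coordinate without central control.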
Russell and Norvig [4], instead of presenting a single definition of AI, present multiple definitions along two dimensions. One dimension is concerned with reasoning vs. behavior. The other dimension contrasts rationality with fidelity to human performance (the italics are Russell and Norvig’s). This results in four visions or approaches to AI. If we concern ourselves with reasoning only, Russell and Norvig describe two approaches: thinking rationally, and thinking humanly. Thinking rationally refers to what the authors call the “laws of thought” approach, mostly based on logic. The thinking humanly approach uses cognitive models, which according to the authors “is necessarily based on experimental investigation of actual humans or animals.” Analyzing AI approaches from the behavior perspective leaves two more combinations: acting rationally, and acting humanly. Acting rationally is based on the notion of rational agents, which operate autonomously, perceive their environment, persist over a prolonged time period, adapt to change, and create and pursue goals. The description of this approach clearly shows strong similarities to the description of DAI [8]. When the authors discuss the acting humanly approach, they resort to the Turing test instead of providing a definition, in contrast to the discussions of the other three approaches. One interesting feature of the discussion of acting humanly is that in this case, as the authors say, the computer needs natural language processing, knowledge representation to store what it knows or hears, automated reasoning to use the stored information to answer questions and draw new conclusions, and machine learning to adapt to new circumstances.
In the discussion about acting humanly, two statements in this book are particularly thought-provoking: “the quest for artificial flight succeeded when the Wright brothers and others stopped imitating birds”, and “aeronautical engineering texts do not define the goal of their field as making machines that fly so exactly like pigeons that they can fool even other pigeons”. These statements insinuate that acting humanly does not require imitating human processes, but rather, as they say, devoting attention to studying the underlying principles of intelligence, which brings us back to the central problem of this post: defining intelligence.
Intelligence and its underlying principles are elusive matters of study, whether we want to understand natural intelligence or design artificially intelligent artifacts. Terms like perception, understanding, reasoning, knowledge, and learning pervade the definitions found in this sample of textbooks, and much of the literature on AI. I find that, in general, these terms are listed without regard for their full meaning outside computing.
The definition of intelligence has evolved over the years, and its underlying principles are not only a moving but a changing target of study. Probably the only sure thing we can assume is that humans are intelligent, since we invented the term to describe ourselves. Nowadays we accept that other species may be intelligent. The question that the discussion about AI brings forward is whether human-designed artifacts may be, or become, intelligent. But to answer that question we need agreements among the many disciplines concerned with this problem.
[1] Winston, P. H. Artificial Intelligence. Addison-Wesley Pub. Comp., 3rd ed., 1993.
[2] Jackson, P. Introduction to Expert Systems. Addison-Wesley Pub. Comp., 3rd ed., 1998.
[3] Giarratano, J. C. and G. D. Riley. Expert Systems – Principles and Programming. Thomson Course Technology, 4th ed., 2005.
[4] Russell, S. and P. Norvig. Artificial Intelligence – A Modern Approach. Prentice Hall, 3rd ed., 2010.
[5] Turing, A. Computing machinery and intelligence. Mind, Vol. 59, No. 236, October 1950, pp. 433-460. (Available at http://mind.oxfordjournals.org/content/LIX/236/433. Visited October 9, 2012.)
[6] Weizenbaum, J. ELIZA – A computer program for the study of natural language communication between man and machine. Communications of the ACM, Vol. 9, No. 1, January 1966, pp. 36-45. (Available at http://www.cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1966.pdf. Visited October 10, 2012.)
[7] Home page of the Loebner Competition in Artificial Intelligence – “The first Turing test”. http://www.loebner.net/Prizef/loebner-prize.html. (Visited October 9, 2012.)
[8] Weiss, G. (editor). Multiagent Systems – A Modern Approach to Distributed Artificial Intelligence. The MIT Press, 1999.