
Notes on books


Nilsson worked in AI for some fifty years and is thus, as he himself rightly says, an insider. He studied at Stanford and is an emeritus professor there; he worked at SRI International (formerly the Stanford Research Institute). In 2010 he felt it was once again time for a new history of AI to be written. Hence this book.

It is impossible to do justice to a historical book as detailed as this one. Nilsson gives a very precise and, on the whole, remarkably objective picture of developments in artificial intelligence. All kinds of technical details are discussed as well. It is a pleasure that Nilsson is also critical and does not cheer at every single AI invention, as McCorduck tends to do: he describes possibilities as well as limitations.

I pay attention mainly to the philosophical aspects of what Nilsson describes.

Nils J. NILSSON
The quest for artificial intelligence - A history of ideas and achievements
Cambridge: Cambridge University Press, 2010; 562 pp.
ISBN-13: 978-0-521-12293-1

(xiii) Preface

"For me, artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment. According to that definition, lots of things – humans, animals, and some machines – are intelligent. Machines, such as “smart cameras,” and many animals are at the primitive end of the extended continuum along which entities with various degrees of intelligence are arrayed. At the other end are humans, who are able to reason, achieve goals, understand and generate language, perceive and respond to sensory inputs, prove mathematical theorems, play challenging games, synthesize and summarize information, create art and music, and even write histories." [mijn nadruk] (xiii)

"The book follows a roughly chronological approach, with some backing and filling. My story may have left out some actors and events, but I hope it is reasonably representative of AI’s main ideas, controversies, successes, and limitations. I focus more on the ideas and their realizations than on the personalities involved. I believe that to appreciate AI’s history, one has to understand, at least in lay terms, something about how AI programs actually work." [mijn nadruk] (xiv)

(1) Part I - Beginnings

(3) 1 - Dreams and Dreamers

An overview of (dreams about) machines that were supposed to be able to do human things, including the automata.

"The quest for artificial intelligence, quixotic or not, begins with dreams like these. But to turn dreams into reality requires usable clues about how to proceed. Fortunately, there were many such clues, as we shall see."(8)

(10) 2 - Clues

2.1 From Philosophy and Logic

"Clues about what might be needed to make machines intelligent are scattered abundantly throughout philosophy, logic, biology, psychology, statistics, and engineering. With gradually increasing intensity, people set about to exploit clues from these areas in their separate quests to automate some aspects of intelligence. I begin my story by describing some of these clues and how they inspired some of the first achievements in artificial intelligence."(10)

"Aristotle’s logic provides two clues to how one might automate reasoning. First, patterns of reasoning, such as syllogisms, can be economically represented as forms or templates. These use generic symbols, which can stand for many different con- crete instances. Because they can stand for anything, the symbols themselves are unimportant." [mijn nadruk] (10)

"However, Leibniz’s work provided important additional clues to how reasoning might be mechanized: Invent an alphabet of simple symbols and the means for combining them into more complex expressions." [mijn nadruk] (11)

"Boole considered various logical principles of human reasoning and represented them in mathematical form." [mijn nadruk] (13)

"Boole’s work showed that some kinds of logical reasoning could be performed by manipulating equations representing logical propositions – a very important clue about the mechanization of reasoning. An essentially equivalent, but not algebraic, system for manipulating and evaluating propositions is called the “propositional calculus” (often called “propositional logic”), which, as we shall see, plays a very important role in artificial intelligence."(14)

"Frege’s system was the forerunner of what we now call the “predicate calculus,” another important system in artificial intelligence. It also foreshadows another representational form used in present-day artificial intelligence: semantic networks. Frege’s work provided yet more clues about how to mechanize reasoning processes. At last, sentences expressing information to be reasoned about could be written in unambiguous, symbolic form." [mijn nadruk] (14-15)

2.2 From Life Itself

"Several aspects of “life” have, in fact, provided important clues about intelligence. Because it is the brain of an animal that is responsible for converting sensory information into action, it is to be expected that several good ideas can be found in the work of neurophysiologists and neuroanatomists who study brains and their fundamental components, neurons. Other ideas are provided by the work of psychologists who study (in various ways) intelligent behavior as it is actually happening. And because, after all, it is evolutionary processes that have produced intelligent life, those processes too provide important hints about how to proceed."(15)

"In 1943, the American neurophysiologist Warren McCulloch (1899–1969; Fig. 2.7) and logician Walter Pitts (1923–1969) claimed that the neuron was, in essence, a “logic unit.” In a famous and important paper they proposed simple models of neurons and showed that networks of these models could perform all possible computational operations." [mijn nadruk] (17)

"The Canadian neuropsychologist Donald O. Hebb (1904–1985) also believed that neurons in the brain were the basic units of thought."(17)

"Cognitive science attempted to explicate internal mental processes using ideas such as goals, memory, task queues, and strategies without (at least during its beginning years) necessarily trying to ground these processes in neurophysiology. Cognitive science and artificial intelligence have been closely related ever since their beginnings. Cognitive science has provided clues for AI researchers, and AI has helped cognitive science with newly invented concepts useful for understanding the workings of the mind." [mijn nadruk] (22)

"Motivated by the success of biological evolution in producing complex organisms, some researchers began thinking about how programs could be evolved rather than written." [mijn nadruk] (22)

"Researchers would ultimately come to recognize that all of these evolutionary methods were elaborations of a very useful mathematical search strategy called “gradient ascent” or “hill climbing.”"(23)

"At a symposium in 1960, Major Jack E. Steele, of the Aerospace Division of the United States Air Force, used the term “bionics” to describe the field that learns lessons from nature to apply to technology.(...) Today, the word “bionics” is concerned mainly with orthotic and prosthetic devices, such as artificial cochleas, retinas, and limbs. Nevertheless, as AI researchers continue their quest, the study of living things, their evolution, and their development may continue to provide useful clues for building intelligent artifacts." [mijn nadruk] (25)

2.3 From Engineering

"Sensing the environment and then letting what is sensed influence what a machine does is critical to intelligent behavior. Grey Walters’s “tortoises,” for example, had photocells that could detect the presence or absence of light in their environments and act accordingly. Thus, they seem more intelligent than a Jacquard loom or clockwork automata. One of the simplest ways to allow what is sensed to influence behavior involves what is called “feedback control.”" [mijn nadruk] (27)

"Another source of ideas, loosely associated with cybernetics and bionics, came from studies of “self-organizing systems.”" [mijn nadruk] (28)

"Because nearly all reasoning and decision making take place in the presence of uncertainty, dealing with uncertainty plays an important role in the automation of intelligence. Attempts to quantify uncertainty and “the laws of chance” gave rise to statistics and probability theory. What would turn out to be one of the most important results in probability theory, at least for artificial intelligence, is Bayes’s rule, which I’ll define presently in the context of an example." [mijn nadruk] (29)

"In the twentieth century, scientists and statisticians such as Karl Pearson (1857– 1936), Sir Ronald A. Fisher (1890–1962), Abraham Wald (1902–1950), and Jerzey Neyman (1894–1981) were among those who made important contributions to the use of statistical and probabilistic methods in estimating parameters and in making decisions. Their work set the foundation for some of the first engineering applications of Bayes’s rule, such as the one I just illustrated, namely, deciding which, if any, of two or more electrical signals is present in situations where noise acts to obscure the signals." [mijn nadruk] (30-31)

[After this, a lot of history of computing technology. See the summaries under that theme.]

"However, to explore the ideas inherent in most of the clues from logic, from neurophysiology, and from cognitive science, more powerful engines would be required. While McCulloch, Wiener, Walter, Ashby, and others were speculating about the machinery of intelligence, a very powerful and essential machine bloomed into existence – the general-purpose digital computer. This single machine provided the engine for all of these ideas and more. It is by far the dominant hardware engine for automating intelligence." [mijn nadruk] (31)

"Even before people actually started building computers, several logicians and mathematicians in the 1930s pondered the problem of just what could be computed."(33)

"The commanding importance of the stored-program digital computer derives from the fact that it can be used for any purpose whatsoever – that is, of course, any computational purpose. The modern digital computer is, for all practical purposes, such a universal machine. The “all-practical-purposes” qualifier is needed because not even modern computers have the infinite storage capacity implied by Turing’s infinite tape. However, they do have prodigious amounts of storage, and that makes them practically universal." [mijn nadruk] (36)

"Allen Newell and Herb Simon (see Fig. 2.22) were among those who had no trouble believing that the digital computer’s universality meant that it could be used to mechanize intelligence in all its manifestations – provided it had the right software." [mijn nadruk] (40)

"Inspired by the clues we have mentioned and armed with the general-purpose digital computer, researchers began, during the 1950s, to explore various paths toward mechanizing intelligence. With a firm belief in the symbol system hypothesis, some people began programming computers to attempt to get them to perform some of the intellectual tasks that humans could perform. Around the same time, other researchers began exploring approaches that did not depend explicitly on symbol processing. They took their inspiration mainly from the work of McCulloch and Pitts on networks of neuron-like units and from statistical approaches to decision making. A split between symbol-processing methods and what has come to be called “brain-style” and “nonsymbolic” methods still survives today." [mijn nadruk] (41)

(47) Part II - Early Explorations: 1950s and 1960s

"If machines are to become intelligent, they must, at the very least, be able to do the thinking-related things that humans can do. The first steps then in the quest for artificial intelligence involved identifying some specific tasks thought to require intelligence and figuring out how to get machines to do them. Solving puzzles, playing games such as chess and checkers, proving theorems, answering simple questions, and classifying visual images were among some of the problems tackled by the early pioneers during the 1950s and early 1960s. Although most of these were laboratory-style, sometimes called “toy,” problems, some real-world problems of commercial importance, such as automatic reading of highly stylized magnetic characters on bank checks and language translation, were also being attacked." [mijn nadruk] (47)

(49) 3 - Gatherings

"In September 1948, an interdisciplinary conference waas held at the California Institute of Technology (Caltech) in Pasadena, California, on the topics of how the nervous system controls behavior and how the brain might be compared to a computer. It was called the Hixon Symposium on Cerebral Mechanisms in Behavior. Several luminaries attended and gave papers, among them Warren McCulloch, John von Neumann, and Karl Lashley (1890–1958), a prominent psychologist. Lashley gave what some thought was the most important talk at the symposium. He faulted behaviorism for its static view of brain function and claimed that to explain human abilities for planning and language, psychologists would have to begin considering dynamic, hierarchical structures. Lashley’s talk laid out the foundations for what would become cognitive science." [mijn nadruk] (49)

Three other gatherings became the basis for the emergence of AI.

3.1 Session on Learning Machines

On neural networks, pattern recognition, and chess.

3.2 The Dartmouth Summer Project

[Already well described in McCorduck. The term 'artificial intelligence' turns out to have come about rather accidentally, more as a kind of resistance to Wiener's cybernetics and other theories of the time. The idea: getting a machine to do things that we would call 'intelligent' in humans, in other words simulating human intelligence in a machine.]

3.3 Mechanization of Thought Processes

"The proceedings of this conference contains some papers that became quite influential in the history of artificial intelligence. Among these, I’ll mention ones by Minsky, McCarthy, and Selfridge."(56)

"I have already mentioned the 1955 pattern-recognition work of Oliver Selfridge. At the 1958 Teddington Symposium, Selfridge presented a paper on a new model for pattern recognition (and possibly for other cognitive tasks also). He called it “Pandemonium,” meaning the place of all the demons." [mijn nadruk] (57)

(62) 4 - Pattern Recognition

"Most of the attendees of the Dartmouth Summer Project were interested in mimicking the higher levels of human thought. Their work benefitted from a certain amount of introspection about how humans solve problems. Yet, many of our mental abilities are beyond our power of introspection. We don’t know how we recognize speech sounds, read cursive script, distinguish a cup from a plate, or identify faces. We just do these things automatically without thinking about them. Lacking clues from introspection, early researchers interested in automating some of our perceptual abilities based their work instead on intuitive ideas about how to proceed, on networks of simple models of neurons, and on statistical techniques. Later, workers gained additional insights from neurophysiological studies of animal vision." [mijn nadruk] (62)

"In this chapter, I’ll describe work during the 1950s and 1960s on what is called “pattern recognition.” This phrase refers to the process of analyzing an input image, a segment of speech, an electronic signal, or any other sample of data and classifying it into one of several categories. For character recognition, for example, the categories would correspond to the several dozen or so alphanumeric characters."(62)

4.1 Character Recognition

"This field came to be known as “optical character recognition.”"(62)

4.2 Neural Networks

"Rosenblatt defined several types of perceptrons."(67)

"Independently of Rosenblatt, a group headed by Stanford Electrical Engineering Professor Bernard Widrow (1929– ) was also working on neural-network systems during the late 1950s and early 1960s."(69)

"Work on the MINOS systems was supported primarily by the U.S. Army Signal Corps during the period 1958 to 1967. The objective of the MINOS work was “to conduct a research study and experimental investigation of techniques and equip- ment characteristics suitable for practical application to graphical data processing for military requirements.” The main focus of the project was the automatic recognition of symbols on military maps. Other applications – such as the recognition of military vehicles, such as tanks, on aerial photographs and the recognition of hand-printed characters – were also attempted."(70)

[Nilsson too writes these things down in a way that apparently takes it for granted that all kinds of research is paid for by defense and aimed at military goals. Not a word of doubt or criticism about that sort of thing.]

"Expanding its interests beyond neural networks, the Learning Machines Group ultimately became the SRI Artificial Intelligence Center, which continues today as a leading AI research enterprise."(73)

4.3 Statistical Methods

4.4 Applications of Pattern Recognition to Aerial Reconnaissance

"Most of the pattern recognition work done at Philco in the 1960s was sponsored by the DoD [Department of Defense], and the reports were not available for public distribution."(77)

[Well, that is what you get, isn't it: everything immediately 'top secret' and so on. Not exactly stimulating for other researchers, of course, and completely at odds with what scientific research should be.]

"Approaches to AI problems involving neural networks and statistical techniques came to be called “nonsymbolic” to contrast them with the “symbol-processing” work being pursued by those interested in proving theorems, playing games, and problem solving. These nonsymbolic approaches found application mainly in pattern recognition, speech processing, and computer vision."(77)

(81) 5 - Early Heuristic Programs

5.1 The Logic Theorist and Heuristic Search

"Transforming structures of symbols and searching for an appropriate problemsolving sequence of transformations lies at the heart of Newell and Simon’s ideas about mechanizing intelligence."(81)

"Using heuristics keyed to the problem being solved became a major theme in artificial intelligence, giving rise to what is called “heuristic programming.” Perhaps the idea of heuristic search was already “in the air” around the time of the Dartmouth workshop. It was implicit in earlier work by Claude Shannon."(83-84)

5.2 Proving Theorems in Geometry

5.3 The General Problem Solver

"At the same 1959 Paris conference where Gelernter presented his program, Allen Newell, J. C. Shaw, and Herb Simon gave a paper describing their recent work on mechanizing problem solving. Their program, which they called the “General Problem Solver (GPS),” was an embodiment of their ideas about how humans solve problems. Indeed, they claimed that the program itself was a theory of human problem-solving behavior. Newell and Simon were among those who were just as interested (perhaps even more interested) in explaining the intelligent behavior of humans as they were in building intelligent machines. They wrote “It is often argued that a careful line must be drawn between the attempt to accomplish with machines the same tasks that humans perform, and the attempt to simulate the processes humans actually use to accomplish these tasks. . . . GPS maximally confuses the two approaches – with mutual benefit.”" [mijn nadruk] (87-88)

"I have already mentioned some of the early work of Shannon and of Newell, Shaw, and Simon on programs for playing chess. Playing excellent chess requires intelligence. In fact, Newell, Shaw, and Simon wrote that if “one could devise a successful chess machine, one would seem to have penetrated to the core of human intellectual endeavor.”" [mijn nadruk] (89)

[Hmm, so this is where the muddle begins. The opposition in the first quotation is already problematic. But thinking of chess as "the core of human intellectual endeavor" says everything about a terribly narrow view that reduces thinking to formal and logical processes.]

(96) 6 - Semantic Representations

"In the early 1960s, several Ph.D. research projects, some performed under Minsky’s direction at MIT, began to employ more complex symbol structures in programs for performing various intellectual tasks. Because of their rich, articulated content of information about their problem topic, these structures were called semantic representations."(96)

6.1 Solving Geometric Analogy Problems

6.2 Storing Information and Answering Questions

"Because [Bertram - GdG] Raphael wanted his system [SIR - GdG] to communicate with people, he wanted its input and output languages to be “reasonably close to natural English.” He recognized that “the linguistic problem of transforming natural language input into a usable form will have to be solved before we obtain a general semantic information retrieval system.” This “linguistic problem” is quite difficult and still not “solved” even though much progress has been made since the 1960s."(98)

"The exception principle was studied by AI researchers in much more detail later and led to what is called default reasoning and nonmonotonic logics, as we shall see."(100)

6.3 Semantic Networks

"SIR was an early version of what would become an important representational idea in artificial intelligence, namely, semantic networks."(100)

(103) 7 - Natural Language Processing

"Natural languages are spoken as well as written. And, because speech sounds are not as well segmented as are the characters printed on a page, speech understanding presents additional difficulties, which I’ll describe in a later chapter.
The inverse of understanding natural language input is generating natural language output – both written and spoken. Translating from one language to another involves both understanding and generation. So does carrying on a conversation. All of these problems – understanding, generation, translation, and conversing – fall under the general heading of “natural language processing” (sometimes abbreviated as NLP)."(103)

7.1 Linguistic Levels

"The semantics level helps to determine the meaning (or the meaninglessness) of a sentence by employing logical analyses. For example, through semantic analysis, an “idea” can’t be both “colorless” and “green.”" [mijn nadruk] (104)

[That seems to me a very narrow view of semantics.]

"Nevertheless, even the most complex grammars can’t cleanly distinguish between sentences we would accept as grammatically correct and those we would not. I will return to this difficulty and one way to deal with it in a later chapter." [mijn nadruk]

"During the late 1950s and throughout most of the 1960s and beyond, syntactic analysis was more highly developed than was semantics."(106)

7.2 Machine Translation

"In June 1952 at MIT, Yehoshua Bar-Hillel (1915–1975), an Israeli logician who was then at MIT’s Research Laboratory for Electronics, organized the first conference devoted to machine translation. Originally optimistic about the possibilities, Bar-Hillel was later to conclude that full automatic translation was impossible."(125)

"As later researchers would finally concede, Bar-Hillel was right about his claim that highly competent natural language processing systems (indeed, broadly competent AI systems in general) would need to have encyclopedic knowledge. However, most AI researchers would disagree with him about the futility of attempting to give computers the required encyclopedic knowledge. Bar-Hillel was well known for being a bit of a nay-sayer regarding artificial intelligence."(109)

7.3 Question Answering

(114) 8 - 1960s’ Infrastructure

"New computer languages made it much easier to build AI systems. Researchers from mathematics, from cognitive science, from linguistics, and from what soon would be called “computer science” came together in meetings and in newly formed laboratories to attack the problem of mechanizing intelligent behavior. In addition, government agencies and companies, concluding that they had an important stake in this new enterprise, provided needed research support."(114)

8.1 Programming Languages

"Ultimately, however, McCarthy concluded a new language was needed that was easier to use than IPL and more powerful than FLPL. Starting in the fall of 1958 at MIT, McCarthy began the implementation of a programming language he called LISP (for list processing). He based it (loosely) on a branch of mathematics of special interest in computation called recursive function theory. (...) Because it was easier to use, LISP soon replaced IPL as the primary language of artificial intelligence research and applications."(115)

8.2 Early AI Laboratories

"Since their early days, the groups at CMU, MIT, and Stanford have been among the leaders of research in AI. Often graduates of one of these institutions became faculty members of one of the other ones. Around 1965, another world-class AI center emerged at the University of Edinburgh in Scotland. Its founder was Donald Michie (1923–2007; Fig. 8.2), who had worked with Alan Turing and I. J. (Jack) Good at Bletchley Park during the Second World War."(117)

8.3 Research Support

"As the computing systems needed for AI research became larger and more expensive, and as AI laboratories formed, it became necessary to secure more financial support than was needed in the days when individual investigators began work in the field. Two of the major sources of funding during the late 1950s and early 1960s were the Office of Naval Research (ONR) and the Advanced Research Projects Agency (ARPA), each a part of the U.S. defense establishment."(118)

"ARPA funds helped to establish “centers of excellence” in computer science. Besides MIT, these centers included Stanford, Carnegie Mellon, and SRI. ARPA also supported computer science work at the RAND Corporation, the Systems Development Corporation, and BBN, among others. AI was just one of ARPA’s interests. IPTO also supported research that led to graphical user interfaces (and the mouse), supercomputing, computer hardware and very-large-scale integrated circuits (VLSI), and, perhaps most famously, research that led to the Internet. According to Licklider, “ARPA budgets did not even include AI as a separate line item until 1968.”"(120)

"Later, ARPA was renamed DARPA (for Defense Advanced Research Projects Agency) to emphasize its role in defense-related research. DARPA projects and grants were typically much larger than those of ONR and allowed the purchase of computers and other equipment as well as support for personnel. It’s hardly an exaggeration to say that a good part of today’s computer-based infrastructure is the result of DARPA research support."(120)

8.4 All Dressed Up and Places to Go

(123) Part III - Efflorescence: Mid-1960s to Mid-1970s

"Many of us were just as optimistic about success as Herb Simon and Marvin Minsky were when they made their predictions about rapid progress. AI entered a period of flowering that led to many new and important inventions. Several ideas originated in the context of Ph.D. dissertation research projects. Others emerged from research laboratories and from individual investigators wrestling with theoretical problems. In this part, I’ll highlight some of the important projects and research results." [mijn nadruk] (123)

(125) 9 - Computer Vision

"Because the images are two-dimensional projections of a three-dimensional scene, the imaging process loses information. That is, different three-dimensional scenes might produce the same two-dimensional image. Thus, the problem of reconstructing the scene faithfully from an image is impossible in principle."(125)

9.1 Hints from Biology

9.2 Recognizing Faces

"Face recognition programs of the 1960s and 1970s had several limitations. They usually required that images be of faces of standard scale, pose, expression, and illumination. Toward the end of the book, I’ll describe research leading to much more robust automatic face recognition."(128)

9.3 Computer Vision of Three-Dimensional Solid Objects

(141) 10 - “Hand–Eye” Research

"The motivation for much of the computer vision research that I have described during this period was to provide information to guide a robot arm. Because the images that could be analyzed best were of simple objects such as toy blocks, work was concentrated on getting a robot arm to stack and unstack blocks. I’ll describe some typical examples of this “hand–eye” research, beginning with a project that did not actually involve an “eye.”"(141)

10.1 At MIT

"The system depended on precise illumination and carefully constructed blocks. Attempts to extend the range of computer vision to less constrained scenes led to further concentration at MIT and elsewhere on the early stages of vision processing. I’ll describe some of the ensuing work on these problems in more detail later."(142)

10.2 At Stanford

10.3 In Japan

10.4 Edinburgh’s “FREDDY”

"Hand–eye research at Edinburgh was suspended during the mid-1970s in part owing to an unfavorable assessment of its prospects in a study commissioned by the British Science Research Council. (I’ll have more to say about that assessment later.)"(147)

(149) 11 - Knowledge Representation and Reasoning

"In AI research (and in computer science generally), procedural knowledge is represented directly in the programs that use that knowledge, whereas declarative knowledge is represented in symbolic structures that are more-or-less separate from the many different programs that might use the information in those structures. Examples of declarative-knowledge symbol structures are those that encode logical statements (such as those McCarthy advocated for representing world knowledge) and those that encode semantic networks (such as those of Raphael or Quillian). Typically, procedural representations, specialized as they are to particular tasks, are more efficient (when performing those tasks), whereas declarative ones, which can be used by a variety of different programs, are more generally useful. In this chapter, I’ll describe some of the ideas put forward during this period for reasoning with and for representing declarative knowledge." [mijn nadruk] (149)

11.1 Deductions in Symbolic Logic

11.2 The Situation Calculus

11.3 Logic Programming

"Colmerauer and his Ph.D. student, Philippe Roussel, were the ones who developed, in 1972, the new programming language, PROLOG. (Roussel chose the name as an abbreviation for “PROgrammation en LOGique.”) In PROLOG, programs consist of an ordered sequence of logical statements. The exact order in which these statements are written, along with some other constructs, is the key to efficient program execution."(153)

11.4 Semantic Networks

11.5 Scripts and Frames

(162) 12 - Mobile Robots

"Beginning in the mid-1960s, several groups began working on mobile robots. These included the AI Labs at SRI and at Stanford. I’ll begin with an extended description of the SRI robot project for it provided the stimulus for the invention and integration of several important AI technologies."(162)

12.1 Shakey, the SRI Robot

"Rosen and I and others in his group immediately began thinking about mobile robots. We also enlisted Marvin Minsky as a consultant to help us. Minsky spent two weeks at SRI during August 1964. We made the first of many trips to the ARPA office (in the Pentagon at that time) to generate interest in supporting mobile robot research at SRI. We also talked with Ruth Davis, the director of the Department of Defense Research and Engineering (DDR&E) – the office in charge of all Defense Department research." [mijn nadruk] (162)

[So self-evident, apparently, in the US: having your research paid for by defense...]

"The Shakey project involved the integration of several new inventions in search techniques, in robust control of actions, in planning and learning, and in vision. Many of these ideas are widely used today. The next few subsections describe them." [mijn nadruk] (165)

"Shakey was the first robot system having the abilities to plan, reason, and learn; to perceive its environment using vision, range-finding, and touch sensors; and to monitor the execution of its plans. It was, perhaps, a bit ahead of its time. Much more research (and progress in computer technology generally) would be needed before practical applications of robots with abilities such as these would be feasible. We mentioned some of the limiting assumptions that were being made by robot research projects at that time in one of our reports about Shakey:
Typically, the problem environment [for the robot] is a dull sort of place in which a single robot is the only agent of change – even time stands still until the robot moves. The robot itself is easily confused; it cannot be given a second problem until it finishes the first, even though the two problems may be related in some intimate way. Finally, most robot systems cannot yet generate plans containing explicit conditional statements or loops.
Even though the SRI researchers had grand plans for continuing work on Shakey, DARPA demurred, and the project ended in 1972. This termination was unfortunate, because work on planning, vision, learning, and their integration in robot systems had achieved a great deal of momentum and enthusiasm among SRI researchers. Furthermore, several new ideas for planning and visual perception were being investigated. Many of these were described in detail in a final report for the Shakey project." [my emphasis] (175)

[I do wonder, though, why such a successful project received no further funding from DARPA. Because no military applications were in sight?]

12.2 The Stanford Cart

"Along with Shakey, the Stanford Cart resides in the Computer History Museum in Mountain View, California. They were the progenitors of a long line of robot vehicles, which will be described in subsequent chapters."(177)

(181) 13 - Progress in Natural Language Processing

13.1 Machine Translation

"Systran has evolved to be one of the main automatic translation systems. It is marketed by the Imageforce Corporation in Tampa, Florida. How well does Systran translate? It all depends on how one wants to measure performance. Margaret Boden mentions two measures, namely, “intelligibility” and “correctness.” Both of these measures depend on human judgement."(181)

13.2 Understanding

"Although the late 1960s and early 1970s might have been a “quiet decade” for machine translation, it was a very active period for other NLP work. Researchers during these years applied much more powerful syntactic, semantic, and inference abilities to the problem of understanding natural language."(182)

"Perhaps the NLP achievement that caused the greatest excitement was the SHRDLU natural language dialog system programmed by Terry Winograd (1946– ; Fig. 13.1) for his Ph.D. dissertation (under Seymour Papert) at MIT."(182)

"The success of SHRDLU fueled a debate among AI researchers about the pros and cons of these two knowledge representation strategies – procedural versus declarative. Actually, the use of LISP to represent procedures blurs this distinction to some extent because, as Winograd pointed out, “LISP allows us to treat programs as data and data as programs.” So, even though SHRDLU’s knowledge was represented procedurally, it was able to incorporate some declarative new knowledge (presented to it as English sentences) into its procedures." [mijn nadruk] (185)

"SHRDLU’s performance was indeed quite impressive and made some natural language researchers optimistic about future success. However, Winograd soon abandoned this line of research in favor of pursuing work devoted to the interaction of computers and people. Perhaps because he had first-hand experience of how much knowledge was required for successful language understanding in something so simple as the blocks world, he despaired of ever giving computers enough knowledge to duplicate the full range of human verbal competence. In a 2004 e-mail, Winograd put SHRDLU’s abilities in context with those of humans:
There are fundamental gulfs between the way that SHRDLU and its kin operate, and whatever it is that goes on in our brains. I don’t think that current research has made much progress in crossing that gulf, and the relevant science may take decades or more to get to the point where the initial ambitions become realistic. In the meantime AI took on much more doable goals of working in less ambitious niches, or accepting less-than-human results (as in translation).
" [mijn nadruk] (185)

["Perhaps"? Is dat dan nooit aan hem gevraagd? ]

"The systems developed by researchers such as Winograd, Woods, Bobrow, and their colleagues were very impressive steps toward conversing with computers in English. Yet, there was still a long way to go before natural language understanding systems could perform in a way envisioned by Winograd in the preface to his Ph.D. dissertation:
Let us envision a new way of using computers so they can take instructions in a way suited to their jobs. We will talk to them just as we talk to a research assistant, librarian, or secretary, and they will carry out our commands and provide us with the information we ask for. If our instructions aren’t clear enough, they will ask for more information before they do what we want, and this dialog will all be in English."(190)

(193) 14 - Game Playing

The development of checkers and chess programs.

"These years, the late 1960s through the mid-1970s, saw computer chess programs gradually improving from beginner-level play to middle-level play. Work on computer chess during the next two decades would ultimately achieve expert-level play, as we shall see in a subsequent chapter. Despite this rapid progress, it was already becoming apparent that there was a great difference between how computers played chess and how humans played chess."(194)

(197) 15 - The Dendral Project

"Embedding the knowledge of experts in AI programs led to the development of many “expert systems,” as we shall see later. It also led to increased concentration on specific and highly constrained problems and away from focusing on the general mechanisms of intelligence, whatever they might be."(200)

(201) 16 - Conferences, Books, and Funding

"The first large conference devoted exclusively to artificial intelligence was held in Washington, DC, in May 1969. Organized by Donald E. Walker (1928–1993) of the MITRE Corporation and Alistair Holden (1930–1999) of the University of Washington, it was called the International Joint Conference on Artificial Intelligence (IJCAI). It was sponsored by sixteen different technical societies (along with some of their subgroups) from the United States, Europe, and Japan. About 600 people attended the conference, and sixty-three papers were presented by authors from nine different countries. The papers were collected in a proceedings volume, which was made available at the conference to all of the attendees."(202)

[Which shows that AI had already become very international by the end of the 1960s.]

Besides those conferences and their proceedings, a great many papers, journals, and handbooks appeared, and there were workshops and so on.

"These years saw the United States engaged in war in Vietnam, and Congress wanted to make sure that research supported by the U.S. Defense Department was relevant to military needs. Responding to these pressures, on November 19, 1969, Congress passed the “Mansfield Amendment” to the Defense Procurement Authorization Act of 1970 (Public Law 91-121), which required that the Defense Department restrict its support of basic research to projects “with a direct and apparent relationship to a specific military function or operation.” On March 23, 1972, the Advanced Research Projects Agency was renamed the Defense Research Advanced Projects Agency (DARPA) to reflect its emphasis on projects that contributed to enhanced military capabilities." [mijn nadruk] (203)

[Typical ...]

"DARPA’s shift to shorter term applied research, together with the Lighthill report and criticisms from various onlookers, posed difficulties for basic AI research during the next few years. Nevertheless, counter to Lighthill’s assessment, many AI techniques did begin to find application to real problems, launching a period of expansion in AI applications work, as we’ll see in the next few chapters."(204)

(207) Part IV - Applications and Specializations: 1970s to Early 1980s

(209) 17 - Speech Recognition and Understanding Systems

17.1 Speech Processing

"By speech recognition is meant the process of converting an acoustic stream of speech input, as gathered by a microphone and associated electronic equipment, into a text representation of its component words. This process is difficult because many acoustic streams sound similar but are composed of quite different words. (Consider, for example, the spoken versions of “There are many ways to recognize speech,” and “There are many ways to wreck a nice beach.”) Speech understanding, in contrast, requires that what is spoken be understood. An utterance can be said to be understood if it elicits an appropriate action or response, and this might even be possible without recognizing all of its words." [mijn nadruk] (209)

17.2 The Speech Understanding Study Group

17.3 The DARPA Speech Understanding Research Program

17.4 Subsequent Work in Speech Recognition

[Mainly technical expositions.]

(224) 18 - Consulting Systems

18.1 The SRI Computer-Based Consultant

"As my colleagues and I at SRI cast about for ways to continue our planning and vision research we had been doing under the “Shakey the Robot” project, while satisfying DARPA’s interest in militarily relevant applications, we hit upon the problem of equipment maintenance, repair, and training. We pointed out that any technology that could reduce expenditures for these items and lessen the need for utilizing scarce human experts would be extremely important to the military. Furthermore, we said, this need “cannot be satisfied merely by writing more and better manuals. A sophisticated computer system seems to us essential.”" [mijn nadruk] (224)

[Which shows how you end up adapting yourself to military goals and needs, aimed at short-term utility, instead of following your own interests or those of the problem you are wrestling with. That is why one should pursue independent research and not obtain funding in a way that undermines one's independence.]

18.2 Expert Systems

On MYCIN and PROSPECTOR.

"More expert systems are described in the book The Rise of the Expert Company. In an appendix to that book, Paul Harmon lists over 130 expert systems in use during the mid- to late 1980s"(239)

(244) 19 - Understanding Queries and Signals

19.1 The Setting

That was the shift at DARPA described earlier.

"DARPA program officers Floyd Hollister and Col. David Russell were able to persuade DARPA management that text-based, natural language access to large, distributed databases would be an important component of command and control systems.(...)
A second area of great importance in command and control was automating the analysis of aerial photos. Spotting targets of military interest in these photos, such as roads, bridges, and military equipment, typically required hours of effort by intelligence analysts. Because techniques being developed by researchers in computer vision might provide tools to help human analysts, DARPA had good reasons to continue funding computer vision research."(246-247)

19.2 Natural Language Access to Computer Systems

19.3 HASP/SIAP

(258) 20 - Progress in Computer Vision

20.1 Beyond Line-Finding

20.2 Finding Objects in Scenes

20.3 DARPA’s Image Understanding Program

"In 1976, DARPA launched its Image Understanding (IU) program. It grew to be a major effort composed of the leading research laboratories doing work in this area as well as “teams” pairing a university with a company. The individual labs participating were those at MIT, Stanford, University of Rochester, SRI, and Honeywell. The university/industry teams were USC–Hughes Research Laboratories, University of Maryland–Westinghouse, Inc., Purdue University–Honeywell, Inc., and CMU– Control Data Corporation."(267)

"As a growing subspecialty of artificial intelligence, papers on computer vision began to appear in new journals devoted to the subject, including Computer Vision and Image Understanding and IEEE Transactions on Pattern Analysis and Machine Intelligence. The field’s textbooks around this time included Pattern Classification and Scene Analysis and two books titled Computer Vision."(269)

(271) 21 - Boomtimes

"I think of the decade of roughly 1975–1985 as “boomtimes” for AI. Even though the boom was followed by a period of retrenchment, its accomplishments were many and important. It saw the founding in 1980 of the American Association for Artificial Intelligence (AAAI – now called the Association for the Advancement of Artificial Intelligence), with annual conferences, workshops, and symposia. Several other national and regional AI organizations were also formed. The Arpanet, which had its beginnings at a few research sites in the late 1960s, gradually evolved into the Internet, linking computers worldwide."(271)

"Reporting on this increasing interest in 1984, the science writer George Johnson wrote
“We’ve built a better brain,” exclaimed a brochure for [an expert system called] TIMM, The Intelligent Machine Model: “Expert systems reduce waiting time, staffing requirements and bottlenecks caused by the limited availability of experts. Also, expert systems don’t get sick, resign, or take early retirement.”
Other companies, such as IBM, Xerox, Texas Instruments, and Digital Equipment Corporation, were more conservative in their pronouncements. But the amplified voices of their salesmen, demonstrating various wares [in the 1984 AAAI exhibit hall], sounded at times like carnival barkers, or prophets proclaiming a new age.
The boom continued with Japan’s “Fifth Generation Computer Systems” project. That project in turn helped DARPA justify its “Strategic Computing Initiative.” It also helped to provoke the formation of similar research efforts in Europe (such as the ALVEY Project in the United Kingdom and the European ESPRIT programme) as well as the formation of American industrial consortia for furthering advances in computer hardware. Assessments of some of AI’s difficulties and achievements, compared to some of its promises, led to the end of the boom in the late 1980s – causing what some called an “AI winter.” I’ll be describing all of these topics in subsequent chapters."(272)

(275) Part V - “New-Generation” Projects

(277) 22 - The Japanese Create a Stir

22.1 The Fifth-Generation Computer Systems Project

"In 1982, Japan’s Ministry of International Trade and Industry (MITI) launched a joint government and industry project to develop what they called “Fifth Generation Computer Systems” (FGCS). Its goal was to produce computers that could perform AI-style inferences from large data and knowledge bases and communicate with humans using natural language. As one of the reports about the project put it, “These systems are expected to have advanced capabilities of judgement based on inference and knowledge-base functions, and capabilities of flexible interaction through an intelligent interface function.”"(277)

22.2 Some Impacts of the Japanese Project