
Notes on books


Notes

Thomas M. Georges is a physicist and electrical engineer. He worked as a researcher at the National Bureau of Standards and elsewhere in the United States.

This book raises a great many questions but only rarely gives clear answers to them. It could also have been better organized, since some issues come up in as many as three different places. Still, it is a reasonably stimulating book for anyone who wonders about all sorts of things surrounding technology, and computer technology in particular.

Georges contradicts himself fairly often. For example, he observes that the language used about machines is very confusing - which I think is true - and then keeps using that confusing language himself. His positions on Skinner's behaviorism and on evolutionary psychology - two disciplines he brings up in connection with his desire for a science of values - are likewise erratic and superficial.

Despite its reasonably critical tone, the book is also full of wishful thinking about the possibilities of 'intelligent machines': in all sorts of areas they could play a wonderful role. It is really quite a tame story. The amusing thing is that he never mentions sexuality in this context. Who knows how wonderful sexuality could be with highly intelligent sex robots?

Front cover of Georges, 'Digital soul - Intelligent machines and human values'
Thomas M. GEORGES
Digital soul - Intelligent machines and human values
Cambridge, MA: Westview Press, 2003; 286 pp.
ISBN: 0-8133-4266-X

(vii) Preface

"Most people have a short list of books that they discovered at pivotal points in their life and that changed the way they think in crucial ways. In a sense, this book began in the 1960s, when I picked up a copy of Dean Wooldridge’s book The Machinery of the Brain. In this and his two succeeding books, The Machinery of Life and Mechanical Man: The Physical Basis of Intelligent Life, Wooldridge laid out an astounding idea: that human beings function entirely according to the laws of physics." [my emphasis] (vii)

"I learned that science is discovering that the most complex human behaviors, including ethical and moral reasoning, are rooted in basic biological imperatives and that brain research is revealing more and more mental functions, like emotions and consciousness, as the workings of a wonderfully complex information processor." [my emphasis] (vii-viii)

"We will explore an unfamiliar land along the boundary between science and human values. There, we will seek out the logical structure of human feelings, consciousness, and morality that would make it possible for machines to possess them as well. Then we will delve into the social, moral, ethical, and religious consequences of creating thinking and feeling artifacts." [my emphasis] (ix)

[Remarkable how easily and matter-of-factly the author puts religion on a par with the other subjects, as if it were something real. The language in which machines are talked about is also telling, as if we were talking about persons.]

(1) 1 - Artificial Intelligence — That’s the Fake Kind, Right?

"The mere mention of machines that are conscious, have feelings, or could have rights usually generates such heat and emotion as to preclude rational debate. "(1)

"Such emotions are hard to set aside, especially since they form the basis for many of our moral and ethical beliefs. Yet they also form a barrier to rational scientific inquiry."(2)

"We will proceed in the belief that ignorance doesn’t solve any problem, that mysticism explains nothing, and that there is nothing that can’t be studied. We won’t find all the answers, but this will not stop us from raising the questions. "(3)

"Technology always perturbs our values and moral codes — the set of rules that guide the way we interact with each other. Fire, the wheel, the printing press, atomic energy, and genetic engineering — each made some of those rules obsolete. It is just as foolish to think that our moral and ethical codes are written in stone, as it is to imagine that knowledge of any of these technologies could somehow be suppressed or ignored." [my emphasis] (4)

"We are handicapped in our inquiry by the tools available for describing our subject. Our very language of subjective experience, which is built on the notion of having a self, is full of loaded words that constrain and muddy our thinking. The pronouns I and you create images of autonomous agents. Linguistic traditions force us to think of body and mind as separate and distinct entities. Everyday notions like free will and moral responsibility contain underlying contradictions. Language also uses definitions and forms of the verb to be in ways that force us to think of classes of things as clearly defined (Is a fetus a human being or not?), when in fact every classification scheme has fuzzy boundaries and continuous gradations. In this book, I will argue that the distinction between “artificial” and “real” intelligence is merely a linguistic trap to be avoided." [my emphasis] (4-5)

[A fascinating angle, this. Language suggests oppositions, ascribes 'existence' to words, and so on. And this is precisely an area where, in my view too, language causes a lot of confusion and pointless discussion.]

"A preview of the kind of social problem that “intelligent” computing systems pose has been called the software crisis. The large and complex computer programs that control and monitor so many facets of our lives are less and less comprehensible by humans. (The Windows 2000 operating system is said to contain 40 million lines of code.) The adaptive and self-modifying (“intelligent”) programs in our future will be even less comprehensible and will produce less predictable results. The potential for widespread social and economic disruption precipitated by failures of critical computer systems can hardly be overstated." [my emphasis] (6-7)

"Creating machines that we cannot completely control, and then putting them in charge of critical aspects of our lives, poses risks whose consequences we may not have the luxury of contemplating afterward. "(7)

"Before we can learn to live with intelligent machines, we may first have to learn how to live with each other. If we continue dealing with each other and with twenty-first-century technology, equipped only with rigid moral, ethical, and religious beliefs that haven’t changed significantly in most of the world since the Middle Ages, then the machines will surely triumph. Instead of asking what will we do with intelligent machines, we may well be asking What will they do with us? If we’re lucky, they may keep us as pets!" [my emphasis] (8)

(9) 2 - What Makes Computers So Smart?

"So mechanisms have been around for centuries to help people remember, calculate, and make decisions. If you think about it, you will recognize that most computers in use today still serve these three basic functions, just in more elaborate combinations. Today, we continue to develop machines to replicate more advanced human abilities — sensors that interact with their surroundings, speech, vision, language, and locomotion — even the ability to learn from their experiences to better achieve their goals. (...) Other learning machines emulate the part of human behavior that amasses knowledge and keeps track of the relations between things. These expert systems often know more about their specialty than do their human counterparts."(10-11)

"Machines can pass along everything they have learned to succeeding generations. This ability gives machines a tremendous evolutionary advantage over humans, who do this in a crude way"(11)

"Soon, we can expect machines to exhibit emotions and consciousness (more about them in Chapters 7 and 8) and to advance to new kinds of intelligent behavior that we are incapable of understanding. Once machines realize that they need not be limited to the kinds of tasks that humans can do — that this is not the essence of what makes them so smart — then they will have transcended their original purpose of serving man. They will have advanced beyond mere tools of man to independent agents. "(11-12)

[But precisely in that last paragraph language is used in a way I consider wrong. It would also mean that machines have control over themselves, make their own choices, act on their own, etc.]

"But wait! Was Turing saying that everything a human mind is capable of can be reduced to computations? That manipulating bits of information is somehow the same as thinking? That machines can somehow learn to be smarter than the humans who create their programs? That they could someday function truly independently of humans? If Turing was right (and no one has yet proven otherwise), then only engineering limitations on its speed and capacity prevent a universal computing machine from accomplishing any physically possible task!" [my emphasis] (14)

"A complicated task like writing this book can be broken down into chapters, sections, sentences, and words. But combining these parts willy-nilly won’t do the job. The way the parts are put together requires some kind of vision about how words, sentences, paragraphs, and chapters combine to form a coherent whole. The point here is that a holistic view is an essential part of all knowledge. If you want to tell someone how to perform a complex task, then the instructions will usually consist of more than a linear list of simple steps. Purposeful behavior, no matter how intricate, subtle, or complex, must be understood as a large collection, or web, of relatively simple operations acting together. Acting together means that the steps must be organized and coordinated so that they work in concert, like the instruments of an orchestra, to produce the desired result. Complicated tasks may require many subtasks to be completed in parallel and their results integrated. Many jobs require complex branching or recursion (taking the results of subtasks and feeding them back to the beginning again and again). But monitoring, coordinating, and integrating large numbers of subtasks is the job of an executive that takes a holistic view and checks to be sure that the result is the desired one. The top executive, who may be a human or another program, is said to be smart enough to know what the overall problem is and to organize the steps required to solve it. This purposeful organization of tasks and subtasks is what makes computers so smart — and is indistinguishable from what we call thinking. It would seem, then, that there should be no limit to the number of tasks, each with its own executive, that can be organized into hierarchies, like human corporations, and combined to perform ever more complex tasks, including tasks that we would call purposeful or intelligent." [my emphasis] (16-17)
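[Georges' idea of an 'executive' that decomposes a task into subtasks, runs them (possibly recursively), and integrates and checks the combined result can be sketched in a few lines of Python. This is my own illustration, not from the book; the task names are made up.]

```python
# Minimal sketch of the "executive" idea from the quote above: a task is
# either a simple callable step, or a composite consisting of subtasks
# plus an executive function that integrates their results. run() walks
# this hierarchy recursively.

def run(task):
    if callable(task):                 # a simple step: just do it
        return task()
    subtasks, integrate = task         # a composite: (subtasks, executive)
    results = [run(sub) for sub in subtasks]
    return integrate(results)          # the executive combines the results

# Hypothetical example: "write a chapter" = write the parts, then join them.
write_intro = lambda: "intro"
write_body = lambda: "body"
chapter = ([write_intro, write_body], lambda parts: " + ".join(parts))

print(run(chapter))                    # -> intro + body
```

[The point of the sketch is only structural: nothing in it is 'smart'; the 'executive' is just another function installed by the programmer, which is exactly the issue raised later about who really sets the goals.]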

(19) 3 - What Do You Mean, Smarter than Us?

According to the author, the Turing test is not adequate for determining whether a machine is intelligent. He gives various arguments and concludes:

"Emulating human conversation certainly has useful engineering applications, but it contributes no more to understanding the general principles of intelligence than knowing how a sparrow flies makes you an aeronautical engineer. One lesson to be learned from the abuses of the Turing test is that we are not likely to make great strides in the science or engineering of artificial intelligence merely by imitating human thinking. First, the difficulties of doing so are monumental, but more importantly, why limit ourselves to creating humanlike intelligence? A broader perspective allows us to think about entirely different kinds of intelligence than the human kind. We will consider such new directions further in Chapter 6." [my emphasis] (22-23)

"Any claim to be able to measure whether one thing is smarter than another implies that the “smartness scale” is one-dimensional. It makes sense to ask if I am taller than you, because tallness is a one-dimensional property. We can quickly answer the question with a tape measure. But if we ask if I am smarter than you, how would we measure that? An IQ test would give us each a single number, but such tests contain notorious biases and omissions. "(23)

"The creators of college entrance exams appreciate this in a crude way by testing separately one’s math and verbal skills — just two dimensions of intelligence. But in addition to math and language skills, we have other kinds of intelligence, such as pattern recognition, kinesthetic ability, musical talent, knowledge base, goal-setting, and locomotion, to name only a few. So if we want to compare machine (or human) A with machine (or human) B, we end up with (in mathematical lingo) a multidimensional vector that gives a relative score for each intelligence measure. The idea of “more or less intelligent” is such an oversimplification of a complex relationship that any scheme that claims to measure overall intelligence should be met with considerable skepticism." [my emphasis] (24)

[I completely agree. Good analysis.]

"We see now that the question of machines being smarter than people is not as simple as comparing the intelligence of each with a simple ruler. Machines are, in fact, already more intelligent than we are in many specialized domains! With the number of these narrow domains increasing all the time, it seems likely that the gaps between these domains will fill in, so that the breadth of machine intelligence will also continue to increase. As machines become more flexible and adaptable, what is to stop their overall intelligence from approaching and then exceeding that of humans?"(24)

[The first part: of course. The last sentence is again misleading because of its language. I would say: there is no "overall intelligence", there are only partial intelligences. We really must avoid talking about 'computers that are intelligent'. We should talk about 'computers that are intelligent at certain tasks'.]

"Sooner or later, we will have to face the disturbing question: Do we want machines that we cannot control? Try to imagine some kind of benign machine that is vastly more intelligent than us, yet remains under our control. Data, the android in Star Trek, is such a machine, but he presents a troublesome paradox: If he is so intellectually superior, how come he allows himself to serve under human beings?"(25)

[Yes, I have always wondered about that too.]

"In the coming decades, we will build machines and networks of machines that excel (i.e., are “smarter”) in some significant, but not all, aspects of human intelligence. It is highly unlikely, however, that there will be machines indistinguishable in all respects from humans anytime soon, if ever. (There would simply be no point in doing so.) So rather than measuring a machine’s overall intelligence against a human’s, we will find it more useful to compare the speed, cost, efficiency, and accuracy of a given task when performed by a human, with the results of assigning a machine to perform the same task. We readily concede, for example, that an electronic calculator far outperforms humans at arithmetic and that a computer keeps much better track of the documents in a library than a human could. Although the number of tasks that machines can do better than us will steadily increase, we will not notice any particular milestone in time when we will say that machines have become smarter than we are. Soon, we will find that intelligent machines are capable of far more than just doing things better than humans do. Machines that are able to learn and innovate and modify themselves will begin to exhibit new forms of intelligence that we will barely comprehend. How can you tell if a machine that’s a lot smarter than you is working the way it’s supposed to? This is the point at which we should really begin to worry about our future as the dominant life-form on the planet." [my emphasis] (25-26)

"If we simply accept machine idiocy, if we meekly conform to a world shaped by the needs of machines instead of the needs of humans, then the days to our extinction are surely numbered. If, on the other hand, we actively demand that our machines enrich our lives, and if we take part in the process, then it is more likely to turn out that way. "(27)

"But we may tip the balance toward wiser choices if we make sure they are guided by human needs and values. So perhaps the deepest questions we are left to ponder are What are those needs? and What are those values?"(28)

[I am curious how he will tackle this and whether he will, for example, take into account the embedding of technology in a capitalist society - the commercial motive.]

(29) 4 - Machines Who Think

"The question Can machines think? sounds at first like an intriguing one — until you realize that the answer depends entirely on how you define machine, think, and even can. By suitably defining these words, you can get any answer you want."(29)

[Exactly. People who talk about machines in those terms should brush up their language skills. These are anthropomorphisms.]

"These predictions [such as Moravec's, based on the growth of computers' processing power - GdG] are not that scary to most people, who (quite correctly) do not equate computing power with intelligence. Why not? If you create a computer whose speed and memory capacity exceed the brain’s, won’t it automatically begin to think? Anyone who has ever used a computer knows that the answer is no. Someone has to write the software for human thinking, which in turn requires that someone has to understand completely how human thinking works — including consciousness and emotions! This is why predictions about machines performing at levels approaching the capabilities of the human brain consistently fall short of their mark." [my emphasis] (31)

"What those who predict that computers will soon exceed human intelligence fail to take into account is that the ultimate goal of AI is not to create machines that think and act in all respects like humans. A more likely future will contain intelligent machines whose forms and abilities are quite unhuman, and which will occupy more and more advanced parts of cognitive space. The real computer revolution will begin when we stop making machines that emulate aspects of human thought and enable them to develop new thought patterns of their own. As for replacing us, that has already begun." [my emphasis] (32)

[Well put.]

"A persistent objection to the idea of machine intelligence is that a machine can never do more than execute the set of instructions it is given. And since those instructions are (so far) always created by humans, or even large teams of humans, no machine can ever be smarter than its creators."(32)

That view contains a logical error, Georges writes.

"It is easy to find other examples, besides arithmetic, in which a machine comes up with information unknown to its programmer — searching the Web, matching fingerprints with a national database, and superhuman expertise in areas like medical diagnosis."(33)

[At first sight that seems to me a fallacy as well: it confuses the instructions given with the results of following those instructions. Even if it is not clear in advance exactly what results following an instruction will produce, if we give the instruction "Search for Mr. Bean", the search engine will return results determined by that instruction. Matching a fingerprint likewise happens on the basis of criteria that were fed in. That machines carry out such a task better than humans and come up with better results is not shocking; that is precisely what they were instructed to do.]

[Perhaps that is also the issue with the idea of 'self-learning machines'. Isn't that self-learning always determined by instructions that set the boundaries? How does a machine know it is making a mistake, so that it then learns not to make that mistake again?]

"But even so-called autonomous learning systems acquire and process knowledge according to rules installed by humans."(39)

[That is what I mean.]

"Whether a machine can be called a person, and whether “anyone is really home in there,” are pretty much questions for philosophers to ponder. For practical purposes, they are irrelevant. What matters is whether it acts in all respects as though it understands. If ascribing mental processes to a machine helps us to understand how it works and to get it to do what we want it to do, then fine, go ahead and do it. "(44)

"Whether the machine actually wants or knows anything is unimportant. What matters is that they behave as though they do. Using these metaphors is a convenient way to think of and describe the external behavior of complex machines, which is why their interfaces are designed that way. We ascribe thought processes to machines for the same reason that we ascribe them to people: It helps us understand and get along with them better. If a machine seems to want, feel, believe, know, intend, remember, or understand, then for any practical purpose, it does. "(44-45)

[I disagree. One could just as well argue that this is precisely what leads to the confusion described, and to the muddled practice of talking about computers in human terms. Likewise, it is confusing that certain robots are given human form; that is just as much an obstacle.]

(47) 5 - Let the Android Do It

The disappointing results of the pretension of being able to build a robot on a par with humans.

"So what went wrong? What happened was that practical robotic bodies, limbs, locomotion, and sensors were indeed developed. But there was a major miscalculation of how much computing hardware and software were required to process all that information with the facility of a human brain. It turns out that the raw processing power required to match the thinking of a human being is about 100,000 times greater than that of the fastest computers found in stores today. Furthermore, no one knows nearly enough about how human thinking works to tell how or when the requisite software might be developed." [my emphasis] (48)

[Exactly that.]

"The inevitability of even smarter robots raises questions about a critical aspect of machine intelligence that has far-reaching moral and ethical implications: Can machines really act autonomously? Can they set and pursue goals independently of direct human control? Does this mean that they have “minds of their own”? Can autonomous machines have values and personality? If so, can they be as malicious or benign as science fiction portrays them? And what (if anything) do autonomous machines have in common with humans?" [my emphasis] (48)

[That question has been asked before, and no satisfactory answer has yet been given. An important point is: can you compare a robot that is not directly steered from outside but runs through an internal program with a human being who is not steered from outside and follows internal motivations and considerations? They look the same, except that in the robot's case the goal has been set from outside and is embodied in the internal program, whereas humans can set their own goals and perform the actions that enable them to reach those goals. I read nothing about that here in Georges.]

"Understanding one’s goals and making decisions about how to reach them are clearly important components of human intelligence. Whenever people set goals, they are influenced and guided by built-in cognitive biases that we call values. But where do values come from? Some values, like our strong desires to protect offspring and to acquire and defend territory, seem to be genetically programmed. Other values, such as patriotism, honesty, hard work, religious principles, and justice, are learned from parents, teachers, and experience. Values often last a lifetime and can collectively define the meaning and purpose of our lives. Intelligent, goal-seeking machines might credibly be said to possess values when their programs contain similar cognitive biases. As in humans, these values — learned or installed — would consist of monitoring programs that continuously run in the background, checking progress toward goals.(...) As in humans, the particular combination of values programmed into or acquired by a machine would define its equivalent of personality, or character." [my emphasis] (50-51)

[So that is not the same thing. 'Genetics, parents, etc. determine the values of their children' is not the same as 'programmers determine the values of their robots'. Children can become aware of those values and decide to follow different ones. Robots, as far as I can see, can never make that choice. I find that Georges very much joins in using human language for machines here. Why would you talk about a robot in terms of personality and character? I also find the definition of values as 'cognitive biases' very peculiar and narrow.]

"Simply by adding sensors to interact with the outside world, and the mobility to move about in it, we can create goal-seeking machines that perform jobs free of direct human control. But mustn’t humans still create the programs that run goal-seeking machines? And as long as people set the goals, won’t they always remain in control? "(51)

[That is the core question. And the question is also what 'autonomy' actually is. Georges turns to that next.]

"The dictionary definition of autonomous is “not controlled by others or by outside forces; independent.” It is clear from the preceding paragraphs, however, that no machine can be totally autonomous, or independent of outside forces. Whether it evolves or is constructed, its initial program must be shaped by forces external to itself. Furthermore, unless it is totally isolated from the world, its program is also continuously reshaped by its environment. Total autonomy, then, appears to be a wholly fictitious concept. Applied to robots, the term merely obscures the real issue, which is about the levels of decision making and goal setting we want to reserve for ourselves versus how much power and control we want to give to them. At best, autonomy is a matter of degree." [my emphasis] (53)

[But doesn't the same apply to humans and their autonomy? That comes up in later chapters.]

"Back in the real world, it seemed for a while that a major goal of AI was to develop humanoid machines. But early optimism about this has been replaced with more restrained language. Today, some even believe that mimicking human intelligence is a wasteful and futile goal: Even if we could, why would we want to?" [my emphasis] (55)

[That is my position too.]

(57) 6 - What Is Intelligence?

"Indeed, many people define intelligence as uniquely human. For them, discussion of nonhuman intelligence to any degree is pointless. "(57)

"Lacking a rigorous definition, or any criterion that clearly separates intelligent from nonintelligent behavior, all we are left with is we know it when we see it. This can lead to measures of intelligence (like the Turing test) that are one-dimensional and anthropocentric. For our purposes, it is useful to recognize as intelligent certain kinds of behavior, regardless of the package it comes in — be it human, animal, extraterrestrial, or machine. We recognize that intelligent behavior is multidimensional, that is, not measurable along a single (e.g., IQ) scale."(58)

"So instead of rigorously defining intelligence, all we can do is list some behaviors that most people will agree indicate intelligent behavior when present to any recognizable degree. We could say, for example, that an entity is intelligent to the degree that it
1. stores and retrieves knowledge,
2. learns from experiences and adapts to novel situations,
3. discriminates between what is important and what is irrelevant to the situation at hand,
4. recognizes patterns, similarities, and differences in complex environments,
5. creates new ideas by combining old ideas in new ways,
6. plans and manages strategies for solving complex problems,
7. sets and pursues goals,
8. recognizes its own intelligence and its place in the world. "(58-59)

"So we are left with questions like these: Are there any kinds of intelligence that are uniquely human? Why do machines seem so smart in some ways, yet so dumb in others? What aspects of human intelligence would we want our machines to emulate, and what aspects would we want to leave out? Might machines improve themselves in meaningful ways and evolve new kinds of intelligence unrecognizable to humans? What kinds of intelligence might evolve under conditions entirely different from those of our own evolution? "(60)

"Computers take things so literally that we have to give them excruciatingly precise and detailed instructions for simple tasks that a child would grasp immediately after a few words ..." [my emphasis] (61)

[Just imagine what it would take to get a robot to exhibit human behavior ...]

"All this means that we would have to create not just something like a brain, but also replicas of our own sensory systems that connect that brain to the external world. These requirements make it clear why there are no machines today that exhibit anything like common sense.(...) It may not be possible to fully emulate human behavior without a human body, with all its sensory inputs, its electrochemical feedback, and, yes, its evolutionary mental baggage." [my emphasis] (62-63)

[That is my position too.]

"Antivirus programs may be the first crude instance of self-preserving behavior in machines. The problem with giving machines more self-preserving capabilities is that we must at the same time relinquish some of the control we have over them." [my emphasis] (72)

[This is exactly an example of language misleading us. First, there is no 'self-preservation' in computers, because they have no self to preserve. Second, the antivirus program is a poorly chosen example: it is humans who write and install the programs that keep their computers from being wrecked by viruses and the like. That is preservation, not self-preservation; those computers did not themselves decide to tackle a computer virus by inventing an antivirus program.]

(75) 7 - What Is Consciousness?

"Most of us hold as self-evident, that inside each of us lies a kind of executive entity, or self, that freely makes decisions, records sensory experiences, reasons through problems, feels emotions, acquires skills, sets and pursues goals, finds meaning in our existence, and retains its identity throughout our lives. We refer to this ongoing stream of awareness of ourselves and our surroundings as our consciousness. We know from subjective experience what consciousness does, but how does it work? How do our minds produce this feeling of self-awareness, that “I” am somehow in charge of all these activities? And if we can ever understand how these different aspects of consciousness work, will we then be able to create conscious machines?"(75)

"Even so, many of us still believe in some kind of vitalism — that some unspecified and irreducible life force, variously called our soul, spirit, or essence, powers our inner selves."(76)

[In my view one can live perfectly well without using those concepts. They are as superfluous as the concept of 'god'.]

"Our hypothesis here is that the mind and the process of consciousness have a purely physical basis. If they don’t, we simply can’t investigate them. This may well be true, but let’s give science a try and see how far we get." [my emphasis] (76)

"I believe that the spectacular lack of progress in understanding consciousness comes largely from insisting that it is a thing. Much of the mystery of consciousness begins to dissipate when we think of it instead as a process."(77)

[That seems right to me too. It is again a matter of language use. 'Consciousness' is probably a superfluous concept as well. The annoying thing, once more, is that Georges points out the problem of applying such a word and then nevertheless keeps using the term for machines.]

"If we allow some words that we normally reserve to describe human activity (like making deci- sions and pursuing goals) to be slightly generalized, then the fit is not too bad."(79)

[That is precisely what we must not do. What's more, it is already happening, and it constantly leads to confusion and pointless discussions.]

"In computers, we can see a crude kind of self-understanding when we ask it to perform a task and it responds: CANNOT FIND MODULE C3DXP.DLL. It “understands” what is required to perform the task and recognizes that it is deficient, in that it lacks a required module. In this crude example of machine consciousness, when the machine doesn’t know how to proceed, it displays a message for the user. A “smarter” computer might try other choices, like looking for similar modules that might work, asking other ma- chines for the module, or even creating one itself. " [mijn nadruk] (83-84)

[Here you can see how confusingly Georges proceeds. There is no "self-understanding" involved. The computer should not respond with CANNOT FIND, because that is in effect "I cannot find X", and a computer neither has nor is an 'I'. Words like "understands" and "smarter" are put in quotation marks for a reason. Apparently we had better not use those words in that context. All those words are, again, completely superfluous when talking about a machine.]

"So when we ask whether a machine could be intelligent, conscious, or aware, we are asking a question that we have not yet fully answered about ourselves. Are we self-aware? Perhaps no more than our automobile driver who knows only how to operate the controls. Perhaps no more than a fish knows about swimming." [mijn nadruk] (86)

[That seems well observed.]

"Free will poses a well-known paradox for those who argue that human thought obeys purely physical laws. The paradox, in a nutshell, is this: Free choice conflicts with the scientific paradigm, which says that all things in the universe either occur by random chance or follow deterministic physical laws. The only possible ways out of this paradox are (a) that human thought does not follow purely physical laws (and therefore is not part of the physical universe)or (b) that our choices are not free. The subjective feeling that we must be in control is so powerful that many of us feel forced to choose (a). If, on the other hand, we insist that the mind obey physical laws, then we must accept (b), that free will is an illusion. "(87-88)

[And that seems poorly observed. It is a false dichotomy that, once again, arises from using language in the wrong way.]

"If we accept the “self-maintenance” view of consciousness suggested here, then there are certainly machines today that have very primitive forms of consciousness — but still much less than the most primitive animals. The self-monitoring and self-correcting subsystems of today’s best computers and robots are puny by comparison. Probably the best place to look for glimmerings of consciousness among the computer chips of today is in expert systems."(97)

[Georges remains stuck in that confusing language use.]

(99) 8 - Can Computers Have Emotions?

"Perhaps we are uneasy with emotional machines because they threaten a unique, perhaps spiritual, aspect of our humanity. "(100)

[I suspect it is because we sense that something is wrong, from the awareness that machines cannot in principle have emotions. And I think that feeling is correct.]

"These dualistic traditions and our language of subjective experience contaminate our thinking about the nature of emotions. Our inability (so far) to define the logical structure of emotions prevents us from realistically emulating them mechanically."(100)

[What would that 'logical structure of emotions' be supposed to be? Minsky is followed further here. He also turns out to be the one who invented the term 'cognitive biases' for values - see earlier. And if you then also link all of that to evolution, genetics, and so on, you really do get nonsense.]

"Fear and pain often result in actions that we might interpret as being part of our self-monitoring, error-correcting programs, a property we share with large, complex computer programs. We can see an exact parallel between the cognitive changes that we associate with pain and the error flags that cause a computer program to execute special error-handling routines."(104)

"The complex array of emotions that we call love seems to involve rearrangements of priorities in ways that favor all the different kinds of personal attachments that are in our genetic interest."(105)

[That kind of nonsense. Suddenly we are fully on the line of underlying material biological causes of emotions. And as usual Georges then again asks questions that he largely does not answer:]

"These mechanistic metaphors for emotions suggest ways to implement them on intelligent machines. If we did so, would the machines actually have emotions? Or would we still be missing something that we cannot put into a program? Is there some deep difference between emulating emotions and actually having them, or is the distinction no more than a linguistic trap? "(107)

"One answer is that the distinction between exhibiting emotions and having them is just a linguistic quirk, that is, in the same sense that we speak of having a body. In this view, for any practical purpose you can think of, a person, another animal, or a machine has emotions if it acts in all respects as though it does. Internal states are an unobservable neurological illusion. We attribute emotions to other people based solely on their observed behaviors. It seems reasonable, then, to apply the same criterion to other animals and machines." [mijn nadruk] (108)

"But suppose we could change our linguistic habits and learn to describe emotions only in terms of observable behaviors, without referring to internal states at all. What would be lost? Only the confusion and miscommunication caused by attempting to describe things that cannot be observed and whose properties we can therefore never agree upon."(108)

[Or he gives wrong answers. It is more complicated than that. Equating the emotions people display with actually feeling them is incorrect. People can distinguish between displayed emotions that arise from inner states and displayed emotions that do not. People can act, lie, present themselves as other than they are, displaying all kinds of emotions in the process, and other people often see through quite well that someone is only pretending to feel something. Or the other way around: people feel all sorts of things but do not show it to others, put on a brave face, behave in a closed and distant manner, display hardly any emotion. It is very much the question whether you can state so firmly that inner states cannot be observed. In any case you cannot claim that they do not exist or are an illusion, not even for other people. That is why nobody will take seriously a machine that acts out emotions: we know that nothing is happening on the inside, that no inner feelings correspond to them. It also seems important to me to work out the distinction between feelings and emotions here.]

"Acts in all respects covers a lot of territory — so much that no machine that exists today even comes close to showing realistic emotions. But is it possible in principle for a machine to act in all respects as though it has emotions? Any device that appears to exhibit the full range of emotions must do more than laugh and cry in response to funny and sad stimuli. It must also be sensitive to human emotional states; respond reflexively to certain stimuli; allow feelings to influence its cognitive processes, and vice versa; create emotional responses to its own goals, tastes, likes, dislikes, and so forth; and be capable of reasoning about emotions. This is such a tall order that it will probably be some time before we can build machines that exhibit convincing humanlike emotions. Some argue on principle that no machine could ever do so, but the principle is inevitably little more than an assertion. What holds us back, as in the case of breaking the sound barrier, is not an absolute barrier, but ignorance about how to do it." [mijn nadruk] (109)

[I do indeed think it is impossible in principle. But apart from that: I find that technologists are often far too optimistic about the possibilities of 'making' such a thing. And apparently Georges has forgotten his own comments from earlier chapters: why would you even want to build such a machine? So that it can replace therapists? So that it can function as a sex robot?]

"McCarthy and others say we shouldn’t allow machines to have emotions, because that would oblige us to treat them with some measure of dignity and respect. We will examine these hypothetical obligations further in Chapter 11. But will the intelligent tasks that we will want our machines to perform (like interacting with humans) require certain kinds of emotional (for example, aesthetic) capabilities? And might they not, as I have suggested, already exist in some rudimentary way? As these rudimentary capabilities become more sophisticated, will they open the door to undesirable kinds of emotions and destructive behavior (such as HAL’s)? The more sinister question is thus Should we build machines that we know we will not be able to control completely? Autonomous learning systems are inherently unpredictable, and machines that are allowed to create their own emotional states would be even less so. Keeping all these issues properly sorted out will require a much deeper understanding of machine emotions, a field that we are just beginning to explore. " [mijn nadruk] (114)

(115) 9 - Can Your PC Become Neurotic?

"Or is it preposterous to suppose that a machine could become neurotic? After all, isn’t this psychological disorder unique to humans? Well, so far, yes, but autonomous, goal-seeking machines that can reprogram their own goals and subgoals could, in effect, develop “minds of their own” and set off in unpredictable directions." [mijn nadruk] (115)

[That assumes an awful lot. Moreover, we are again stuck with that language use. And then to go on and talk about neurosis as 'conflicting instructions' so that you can discuss it in the same terms when machines are concerned - no, a bad setup. First reducing people to machines on certain points and then talking about machines as if they were people: it is simply not a good idea.]

(123) 10 - The Moral Mind

"Now we face the prospect of creating machines with intelligence that in some respects will soon match and eventually greatly exceed our own. Can we expect these machines to become our servants as well? It seems more likely that the tables will someday be turned, and our anthropocentric notions of morality will be challenged by our creations. After a brief incubation period under human tutelage, could machines take off on their own and evolve their own independent morality? How much control do we have over the course that intelligent machines might take?"(123-124)

[Georges keeps talking - against his own earlier remarks - about the intelligence of machines in general, whereas we have seen that intelligence in general does not exist and that we must talk about specific tasks that a computer can perform in an intelligent way. And perhaps even then we should avoid the term. The term 'servants' here is also anthropomorphic. Computers are means, not slaves or servants. And a machine with a morality is inconceivable to me. Such a machine would, incidentally, most likely have the 'morality' of its programmers - of white, middle-class men or some such. Ugh.]

"Where do moral and ethical codes come from? Can we define their logical structure? Which aspects of these codes are accidents of the evolution of the human animal, and which apply to intelligent life in general? Which are outdated relics of our ancestral environment that have become dysfunctional in the modern world? Which aspects of our codes would the development of intelligent machines call into question? If we are called upon to rethink and redesign some of our moral and ethical beliefs, where would we start? And to what fundamental principles or values would we look for guidance? Big questions, all! " [mijn nadruk] (124)

[Hm yes, and above all badly posed questions. What is a 'logical structure' here? Why reach for evolution? Why talk about the human being as an animal? Why talk about intelligent machines? I find Georges sloppy in his language use alone, but also in substance. Following this:]

"Since people naturally act in their own self-interest, why do we need moral codes?"(124)

[What a starting point that already is: people naturally act in their own self-interest. Oh really? And sure enough, evolutionary psychology is immediately dragged in again:]

"The moral and social structures that emerged and survived in our ancestral environment were the ones that provided a more stable and orderly setting in which to raise offspring." [mijn nadruk] (125)

"You can tell this is true by looking at the huge variety of social customs in the world’s cultures and observing which ones are universal and which vary from place to place and time to time. The behaviors we share in common, such as tendencies to care for our young, be suspicious of strangers, to form and defend territories, and to follow leaders, are genetically wired in. But our social environments change so rapidly that our genes can’t keep up. " [mijn nadruk] (125)

"So we can answer the question Where does morality come from? in the language of evolutionary psychology: Natural selection favors societies that create moral and ethical structures that work in the competitive environment in which they must function."(126)

[Those values and norms survive that enable us to reproduce, something like that. What nonsense. Distrusting strangers, defending territory, following leaders is genetically determined, is 'hardwired'. More nonsense. That whole evolutionary psychology is no good. And then, on top of that, coming up with Skinner's behaviorism: a person is not good or bad, but behaves well or badly. As if that gets us anywhere. Fortunately he then rejects religion as a kind of guardian of morality:]

"Although religions once formed the nucleus of social structure, one can certainly question the value of religion as a moral force in the modern global environment. Chapter 22 explores some alternatives."(128)

"We can easily find plenty of flaws in the diverse moral and ethical systems practiced in the world today, as well as with the huge and powerful institutions that promote and enforce them. Some of my favorites: our tendency to solve problems by violent means; our eagerness to follow authority figures blindly [dat was daarnet nog 'hardwired' - GdG]; corruption and fraud in governments and corporations; our tendency to compete rather than cooperate [ook dat werd net nog in huis gehaald met die evolutionaire psychologie - GdG]; huge inequities in the distribution of wealth; intolerance of values that differ from our own; our inclination to believe in things for which there is no evidence; our willingness to consume natural resources and despoil the environment, at the expense of future generations; our treatment of the dying; the failure of our penal systems; ineffective education systems; and last but not least, our unwillingness to face up to the population problem." [mijn nadruk] (129)

[All splendid, in my view as well. But Georges seems - see the examples - to contradict himself all the time. Annoying. See also the following quotation:]

"Moral codes do change, not by genetic evolution, but by cultural evolution. Just as genetic mutations are selected and rejected by the physical environment in which they must function, so are new forms of behavior (cultural mutations) selected or discarded by the cultural environment in which they function." [mijn nadruk] (131)

[Oh, so now it is no longer evolutionary psychology and so on? This is, of course, more defensible.]

"This chapter suggests generalizing the ideas of ethics and morality in a way that allows them to apply to human as well as nonhuman creatures. In the next two chapters, we consider what kinds of moral and ethical codes might apply to intelligent machines that interact with humans, as well as with each other."(134)

[Well, I have noticed little of that suggestion. I find this the worst chapter of Georges' book so far. Somehow he makes things unclear. It is often unclear where he himself actually stands and where he is taking things over from others.]

(135) 11 - Moral Problems with Intelligent Artifacts

"By what charts will we navigate the reefs and shoals of a new land populated with intelligent entities that are our superiors in every significant way?"(135)

"If we wanted to design a moral and ethical code for intelligent machines, should we model it upon human morality or start from scratch? To what fundamental principles or values would we look for guidance? Could machines evolve their own set of values? Let’s look at four levels on which intelligent artifacts raise moral and ethical problems."(136)

[Here we go again. Very suggestive, those questions, and simply too vague. I actually think Georges had better not write about these kinds of things at all.]

"Moral and ethical issues at the third level concern the new obligations and responsibilities humans have toward machines that we find to be intelligent, conscious, and sentient."(139)

[Assuming that these cannot exist in principle, that is a non-issue.]

"Although Star Trek is designed to entertain, its stories often touch sensitive moral and ethical nerves as well. As this excerpt makes clear, if a creature behaves in a way that is virtually indistinguishable from a human, we should accord it the same rights that we do a human. Prejudices arising from the material makeup of an intelligent being are just as outmoded as those arising from skin color and ethnic background. One nagging question that Data makes us think about is why such a clearly superior being would subjugate himself to humans."(142)

[Which brings us back to the problems of the earlier chapters. Interesting in this connection is the film Eva, in which - the big surprise - the girl Eva turns out to be a humanoid robot. When, in a fit of temper, she pushes her 'mother' off a cliff, it is decided that she must be switched off. That is a very sensitive and fascinating scene in the film, in which Eva is indeed switched off by Alex. If you compare that with human behavior, it is nowadays inconsistent in all those countries where the death penalty no longer exists for people who murder someone. In other words: Eva should not have been switched off. Should she then have received a life sentence? Hm, somehow we do not think that way about robots that exist in such a remarkably lifelike fashion as Eva. Since Eva is ultimately a machine that must do what others want, it would have been more obvious to adjust her programming instead of switching her off.]

"Moral and ethical issues at the fourth level arise from the flip side of the Bill of Rights for sentient beings: If a machine is sentient, does it have not only rights, but moral and ethical responsibilities as well? If so, then how shall we hold autonomous machines accountable for their actions?"(142)

[But that will never be the case, except in films and series.]

"But what about a much more complex, “intelligent” machine? Where shall we look, inside our air-traffic-control computer, for example, to find out whether the 747 crash was the result of “evil intent,” or something else for which it cannot be “blamed”?"(147)

[Now Georges puts those terms in quotation marks. They are meaningless terms for machines.]

"By equating moral sense with adequate self-monitoring and self-correcting mechanisms, we create a common frame for thinking about useful remedies for both human and machine misbehavior."(148-149)

(151) 12 - The Moral Machine

"Could an intelligent machine ever make moral judgments about its own actions? Is there a machine equivalent of a conscience? Where might its moral codes come from? Would they be something like human moral and ethical codes, or something else entirely? "(151)

[And there we go again, asking bad questions ...]

"For a machine to reason morally and ethically — that is, to make judgments about how to behave in different environmental situations — it would first need the means to predict the likely consequences of its actions. And second, it would need ways to evaluate the goodness or desirability of those consequences. "(151-152)

[I don't believe for a moment that a machine could do the latter autonomously. Unless you strip 'desirability' and 'what is good' down completely. And of course that is exactly what Georges does: he comes up with the example of chess. Good grief. Is a move desirable or not? Versus: Is it desirable to vaccinate people compulsorily? Asimov's restrictions as moral rules for machines do not suffice either, as he rightly notes.]

"One lesson from HAL’s case might be that conflicting instructions are less likely if a machine derives its own moral codes in bottom-up fashion, to adapt to particular environmental situations, rather than letting humans install ill-conceived and possibly opposing values and rules. "(158)

[How am I supposed to picture that?]

"The need to make binary classifications survives in our genes to this day. As a result, we want to see things as either true or false, good or evil, right or wrong, us or them, guilty or innocent, alive or dead, friend or foe, black or white. These binary thinking patterns force many other ideas into one of two bins — mind versus body, logic versus feelings, capitalism versus communism — so that we are unable to consider any finer gradations in between. We resolve many concepts that are innately fuzzy with arbitrary definitions, such as the age at which a child becomes an adult."(159)

[How easily Georges links all kinds of things to the genes. That evolutionary approach keeps sneaking back in, only to be rejected again a little later in favor of a cultural approach.]

"Human organisms and human behavior evolve through the mecha- nism of natural selection, a kind of competition to see which indi- viduals and which cultures can best adapt to the environments in which they must function. So is there a kind of natural selection that operates in the world of intelligent machines? What are the equivalents of competition, extinction, and reproductive success, by which the “fittest” survive, and the unfit die out? "(162)

"In the future, we should expect that intelligent machines will take a conscious part in the evolution of their own hardware and software, in the same way that humans shape their cultural environments and are now learning how to alter their own genetic makeup. "(162)

[And once again ...]

"When machines become so complex and sophisticated that we are incapable of understanding them at all, the question may be not how to impose our moral values on them, but how to adapt to theirs. One of Asimov’s more advanced robots existed as a kind of deity that controlled the world for the benefit of humans, while keeping that fact secret from them. "(163)

[Yeah, sure, no doubt.]

(165) 13 - Global Network to Global Mind

"In the near future, we can expect to have instant access to the sum of the world’s publicly stored knowledge. Next, “telepresence” will extend all of our senses with such fidelity that we will no longer need to transport our bodies around the world to experience all the sensations of being anywhere we desire. Interpersonal relations-at-a-distance will be as intimate as we can imagine."(166)

[We have had enough of that kind of prediction, and they have not come true. It is uncritical. As if virtual contact could ever be the same as physical contact.]

"The more sophisticated their communication networks, the more tightly individuals are connected, and the more societies look like collective organisms with nervous systems."(166)

"What about the future? Will the global nervous system evolve into an intelligent and conscious global brain? If some kind of global intelligence emerges, then what will become of the individual? Will the evolution of a community mind mean the end of individuality? Will human rights, values, and even human life be devalued? "(167)

"Eventually, the functions of an intelligent global learning machine could transcend the thoughts and values of the individuals connected to it."(168)

"So far, we have viewed the global brain merely as an extension of human thought — a servant of humankind. But you may already be asking: Would a global brain need people to function? Could such a global brain become autonomous and even conscious? Many technologies evolve as extensions of older technologies, then discard the old technology to take on lives of their own."(170)

[One vague expression after another. Georges is increasingly starting to waffle.]

(173) 14 - Will Machines Take Over?

[Answer: No.]

"Computers are beginning to take over our personal finances, as well."(176)

[Note the language use again. Compare it with something like: We increasingly use computers to manage our financial affairs.]

(183) 15 - Why Not Just Pull the Plug?

[Because that is far too simple a representation of things. Here too, language runs away with you when you ask such a question.]

"Even if you could unplug an intelligent machine, the next question is Should you? Our discussion of moral machines in Chapter 12 suggested that they may have rights and responsibilities, too, and that our interactions with them should be guided by moral and ethical codes similar to those that guide our interactions with each other. If unplugging an intelligent machine, even a threatening one, destroys information in its “brain,” would doing so be equivalent to murder? "(186)

[And more of that unrealistic prattle.]

(189) 16 - Cultures in Collision

"Humans instinctively fear the Other — the stranger from the next village, the alien from across the border or across the sea. We naturally act cautiously around strangers. Our genetic program makes us wary of anyone whose physical appearance or manner of dress differs even slightly from our own. The persistence of racial and ethnic subcultures in the United States, and the tensions that sometimes result, attest to the power of xenophobia, or fear of outsiders." [mijn nadruk] (189)

[Oh, by nature and genetically, gosh ... I really can do nothing with statements of this kind. They apparently stem from a culture of distrust. In other words: nothing to be done about it, racism will always exist. The rest of the chapter again consists of the by now familiar prattle.]

(197) 17 - Beyond Human Dignity

"We are, of course, revisiting the question of free will that we first brought up in Chapter 7. Here, we ask how our human dignity, or sense of self-worth, depends on the belief that our choices are freely made by an autonomous inner being. The way we answer this question will determine how much we respect future intelligent machines and the level of rights and responsibilities we would accord them. "(197-198)

On Weizenbaum.

"His solution was to restrict the expansion of artificial intelligence, forbidding research into certain areas that he thought were the exclusive domains of human beings, including, ironically, psychotherapy. As we know, Weizenbaum’s solution was not implemented, and research in artificial intelligence continues to expand, unhindered by any regulations or other restrictions designed to preserve and protect human dignity."(198)

[After which the familiar vague questions follow again.]

"As for loss of love, understanding, and interpersonal respect, does the information age make it easier to do without these values? Perhaps, but whether we treat each other like humans or machines is still an individual choice. The new technologies at our disposal could just as easily be used to close gaps between cultures and increase understanding as to separate and keep them at a distance. Which path shall we take? "(199)

[As if there were no capitalist society steering those choices in particular directions. As if technical solutions could go in whatever direction we as individuals want.]

"But isn’t there a more important lesson to be learned from all of our misbegotten uses of technology? Excluding from science the study of human values condemns us to dealing with twenty-first-century technology, equipped only with rigid moral, religious, and ethical codes that have changed little in most of the world since the Middle Ages. If the human race is to survive its technological adolescence, then maybe it’s time to apply our scientific methods to rethink the bases of human ethics and morality."(199-200)

[And then Georges again comes up with Skinner's behaviorism and with evolutionary psychology. As if we could learn a better kind of values and norms by that route. How? With conditioning techniques? An electric shock when we do something wrong, administered by behaviorist parents or teachers? As if you could simply adapt the environment like that.]

"Skinner had a (some say utopian) vision of rationally designing cultures in which suitable environmental controls (which he unfortunately called behavioral engineering) and incentives would operate on our brains to produce peace, harmony, and productive growth. Before we would ever accept such controls, however, he thought that we would have to abandon our cherished “values” of freedom and dignity altogether. "(202)

[That is - to put it mildly - naive.]

"Perhaps the best we can do is reconstruct the values we pass along to future generations. As the emphasis gradually shifts toward restructuring our values and environment, new fears and concerns will arise: Who is to construct the controlling environment, and to what end? Of course, no one knows the answers to these sticky questions yet — the science of human behavior is still in its infancy. Eventually, we may learn enough about cultural design to produce certain desired results.(...) But when a science of human values matures a little more, we may find better goals and rules for getting there. Such rules would not destroy freedom, responsibility, or any other mystical quality. They would simply make the world safer and more rewarding for everyone." [mijn nadruk] (207)

[But the belief is there, and it is at least as naive, as the second part of the quotation shows.]

(209) 18 - Extinction or Immortality?

"A common sci-fi theme supposes that our human intellects — the entire electrical contents of our brains — could be “downloaded,” bit by bit, into a more durable machine, al- lowing our minds and consciousness to far outlive our biological bodies. "(210)

And so people like Moravec are discussed, along with the theory that we will one day live in a computer simulation, and posthumanism.

"If we can design human beings, why make any more of the greedy, violent, barbaric, self-absorbed kind? Why not nicer, testosterone-free, superhuman beings? Or entirely different kinds of intelligent life altogether? There is certainly no shortage of examples of desirable human attributes and powers in our history that we could nurture and amplify, just as there are many that we could better do without. This is where the difference between extinction and immortality blurs. "(212)

(217) 19 - The Enemy Within

"The greatest threat to our dignity and our humanity will not come from machines that act like people, but from people who act like machines. The more we allow ourselves to be told what to think and to be treated like automata by other people, by governments, corporations — and yes, by computers — the more vulnerable we become to domination by intelligent machines. "(217)

"When we lose our courage to ask questions, when we let others do our thinking for us, when we allow our ideas to be muzzled by political correctness, we relinquish our humanity, not just to tyrants and demagogues, but to bureaucrats, politicians, the media, advertising, military paranoia, and religious dogma. "(218)

"Symptoms of groupthink are feelings of invulnerability, being oblivious to consequences, not being accountable for actions, hostility toward nonbelievers, and suppression of individual feelings and sensibilities. Criticism is suppressed as disloyal. Groupthink works because we all feel safer and more willing to take risks when the risk is shared — when “everyone else is doing it.” "(219)

(227) 20 - Electronic Democracy

"Among even the most liberal voters is a widespread fear that teledemocracy would exclude large sectors of society, mainly the poor and the technophobic, from participation. Like the paperless office, teledemocracy seems as distant a promise as ever. Even so, information technology has yielded expanding dividends for democracy in ways that could not have been foreseen before the Internet. "(227)

"But if more individuals get involved in decision making, would the result be consensus or chaos? Could intelligent machines bring order to the process? Could they help manage the flood of information that citizens need to process to keep abreast of issues? And by assuming the role of supernegotiators, could they reconcile diverse interests, moderate complex discussions, and help reach consensus? "(228)

[Another naïve approach. Programs to help voters make their choice already exist and do not really make much of a difference. The rest is again empty talk about everything a machine supposedly could do.]

"The information age opens the door to cyber-crime, cyber-terrorism, and cyber-warfare, against which we will need to develop elaborate defenses. Controlling access to dangerous knowledge (and deciding which knowledge is dangerous), while preserving the rights to free expression, will be one of the great challenges of future democracies. "(231)

[Yes, but that control cannot simply be left to machines. There are always standards that people put in place and that those algorithms must follow. That people can reach more information, that you can assert things anonymously even when they are bullshit: this is already well known and a big problem. How can a computer know that something is disinformation without people being involved?]

"Even corporations and government agencies combat efforts to open up communication among employees, fearing open criticism of policies and loss of control. But open communication should ultimately dissolve government and corporate secrecy and give ordinary people more say in their government, their workplace, and the marketplace. But this opening-up will probably not occur “within the system,” as many expect, because the so-called system has too many incentives to keep things closed. Corruption and abuses of power will most likely be fought by being exposed to the light of public scrutiny. Governments and corporations that cannot function in the open will fall by the wayside. " [my emphasis] (233)

[That is very much wishful thinking. It rather underestimates the power relations, I think. WikiLeaks and the like also turn out not to affect the system much. A scandal now and then, yes, but the system stays as it is.]

"The technology of the Internet will almost certainly lead to the end of intellectual property rights as we know them."(233)

"Legal challenges and encrypted security schemes will produce temporary setbacks, but free information seems destined to become nearly universal."(234)

"The social, legal, moral, and ethical implications of open information systems are astounding: Because a large chunk of our economy is based on the concept of intellectual property rights, many people will oppose any scheme that denies artists and other content creators much of the compensation that they deserve for their work. You will hear more and more arguments that open information systems are immoral and socially and economically disruptive, and those who make their living this way will push legislation to outlaw these schemes. But because this technology is inevitable, our traditional concepts of intellectual property rights will have to be rethought to accommodate it. In the future, information may well cease to be regarded as property. "(234)

[It does not much look that way. It is true that intellectual property is handled differently now and that all kinds of 'providers' have emerged that make certain media accessible for a fee. But in principle that does not undermine the idea of intellectual property. On the contrary, I would say: rights issues play an enormous role in what those 'providers' are allowed to offer. Free information in that far-reaching sense is simply an illusion in a capitalist society. Technical means do not exist independently of that society; technical means are simply devised that fit within the existing frameworks, such as Spotify or Netflix.]

(235) 21 - Rethinking the Covenant Between Science and Society

"Our experiences with nuclear weapons remind us that such warnings inevitably go unheeded, and that new technologies always take on a life of their own, resisting all attempts to coax them back into their bottles. A sensible policy would be to think a lot more about consequences up front, rather than playing it by ear as we go along. This kind of sensibility, however, seems contrary to human nature. " [my emphasis] (235)

[Again that fatalism, because something is tied to 'human nature', whatever that may be.]

"This chapter, therefore, explores some seemingly dysfunctional aspects of our covenant between science and society, in the hope of finding something we can actually do to lessen our anxiety about accelerating technological change — including, but not limited to, AI. "(236)

"When a technology starts to get us into trouble by raising such sticky moral and ethical dilemmas, the usual response is opposition to the technology itself, rather than a call for clear and careful thinking about new social, moral, and ethical structures to accommodate it. Must our moral and ethical codes lag behind our science and technology, always leaving us trapped in ethical dilemmas? Or is there a better way? "(239)

Look at how scientists and researchers are trained, Georges says. That training is built on all kinds of familiar simplistic views, not on an approach focused on social responsibility.

"In fact, the logical consequence of this ivory-tower mentality is a laissez-faire science policy: Leave me alone to discover how nature works. What society does with my discoveries is its business. Scientists often say that their job is to dispassionately present the scientific facts to policymakers (or at least to their employers), and that moral judgments lie outside science’s domain.(...) This logic is strangely compelling to scientists and technologists who simply don’t want to be bothered with the social, moral, and ethical implications of their work. "(240)

"Joy’s critics insist that history shows that science and technology flourish and are always most prolific and fruitful in a free setting. Maybe so, but these rebuttals merely defend the scientific status quo, and none really address Joy’s warnings. "(242)

"The financial rewards are too great to be upstaged by moral and ethical issues, which will no doubt be cleaned up after the fact."(243)

"If not the marketplace, if not the defense establishment, and if not the scientists themselves, then what forces might steer the research of scientists and the inventiveness of technologists in directions more likely to benefit humanity as a whole? How can we inject enough social wisdom and responsibility into a process that seems corrupted by political influence and for which profitability provides such strong incentives? "(248-249)

[But that question never gets answered.]

"The practice of institutionalized science tends to be less about acquiring knowledge in the service of humankind, and more about the survival, and even the enrichment, of its practitioners. Ethical concerns have shifted from the usual academic wrongdoing, such as falsification of data and plagiarism, to the misuse of sponsors’ funds and even deceiving the public. "(251)

"Here, then, are some specific ways the science community could regulate itself and assure its relevance to the society that supports it:
> Critically examine the connections between scientific research and the goals of society itself. This, of course, assumes that we can agree on, and have clear pictures of, those goals.
> Integrate research in the physical sciences with the desired economic and behavioral outcomes at the earliest stages of R&D projects.
> Replace secrecy and scientific arrogance with public dialogues about what types of scientific knowledge should be pursued. Open up science and technology policy to public input.
> Replace the mentality of undirected economic growth with the idea of sustainable development. This means tuning our science and technology to balance the developmental needs of the present with those of future generations.
> Prepare scientists and technologists more carefully for the social, economic, or political problems their craft is expected to address. We need more eloquent advocates for science and more scientists who are also educated in policy, law, economics, ethics, and communication. "(252)

[Many vague suggestions. But at least an attempt at answering the questions raised.]

"The emergence of intelligent machines will surely present us with many novel ways to destroy ourselves. To prevent this from happening, we will have to think very carefully about what kinds of intelligent machines we want, which values will form their top-level instructions, and how to retain control. These decisions will require a critical and dispassionate (that is, scientific) examination of how well our own human values have served us. To do so, I believe that a science of human values is not only possible but imperative. By admitting the study of human values into the realm of legitimate scientific inquiry, we could bring all the critical investigative tools of science to bear on how our values shape our technological choices — for better or worse. The goals of a science of human values would be to apply our critical scientific methods to understanding the bases of human ethics and morality, to rethink societal values and beliefs that seem dysfunctional in today’s (and tomorrow’s) world, and to design new values based on reason. "(253)

[This is not the first time Georges brings this up. But what that science is supposed to amount to remains very unclear. Even if it were research into certain values as they actually are (which already exists, by the way), that says little about how to evaluate those facts about values, or about how humanity should then live. I keep seeing behaviorism and evolutionary psychology turn up in his argument, as if that helps. He never mentions philosophy, not even ethics, as if you have to be a scientist to be able to think about values and norms.]

(257) 22 - What About God?

"It is sensible to ask what harm there is in allowing religious beliefs to comfort us, even though they may not be scientifically verifiable or make strict logical sense. The harm lies in the grip that religious thinking holds on people’s minds. "(262)

"If religion fails to explain anything useful about nature, if it seems so much like mind control, and if it has so many disruptive social effects, then what of value is left? In what sense could religions possibly be good for people or societies today? "(262-263)

[In no way at all. We simply do not need religion. I think Georges believes that too, but once again he is very weak in taking a position.]