Wednesday Book Club: The Emotion Machine

Wednesday, 20 August 2008 — 2:00am | Book Club, Computing, Literature, Science

This week’s selection: The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind (2006) by Marvin Minsky.

In brief: In The Emotion Machine, AI pioneer Marvin Minsky presents his theories on the “big-picture” questions pertaining to the human mind—emotions, consciousness, common sense—in plain English and easy-to-follow diagrams, but one wonders whether he goes too far in distilling his ideas for a lay audience, at the cost of the specificity and rigour that readers from a more technical background may demand. Minsky’s most insightful philosophical premises appear as corollaries and implications, and beg for further development. Nevertheless, the book fulfills its purpose as an expressly non-technical overview of how one might develop models for decomposing higher-order thought into manageable representations.

(The Wednesday Book Club is an ongoing initiative of mine to write a book review every week. I invite you to peruse the index. For more on The Emotion Machine, keep reading below.)

The Emotion Machine is a puzzling beast. From the introduction’s epigraph (“Nora Joyce, to her husband James: ‘Why don’t you write books people can read?’”), we can detect that Minsky’s objective is to present an inquiry into the mind’s workings in the plainest terms he can manage. He appeals to cultural quotations, anticipates objections as if taking questions in a public lecture, and sustains his models by decomposing paragraph-length scenarios and speculating on the machinery behind every action. There are no equations, no implementation details, and nothing more technical than a lot of high-level diagrams of boxes and arrows, which are certainly readable as a rough architectural basis for AI design, but are never identified as such.

Is this a book about artificial intelligence? Yes, but indirectly. There are some who divide the ambitions of AI research into four quadrants of goals: machines that think like humans, act like humans, think optimally/rationally, or act optimally/rationally. Much of what goes on in the design of intelligent systems is focused on optimality, or the development of superhuman performance in, for example, games of strategy or the delivery of relevant search results. In The Emotion Machine, Minsky’s primary interest is in the workings of the human mind that philosophers and laymen alike tend to view as being above the trappings of mere logic: emotions, common sense, consciousness, and the sense of self.

The bearing that the book has on AI originates from its premise that any of these things can be reproduced in the form of abstract, descriptive models—which, by implication, can be implemented in a “hardware” layer other than the human brain, like a sufficiently advanced computer system. But The Emotion Machine is, from beginning to end, a book about human mental behaviour; the consequences for AI are dealt with as corollaries at most.

Is it a book for scientists? Almost certainly not: Minsky’s methodology is self-consciously unscientific. Occam’s Razor, he explains, may have led to a compact set of laws in physics, but the quest for a similar set of simplest possible explanations has done a great disservice to cognitive psychology, chaining it to the assumption that big, general words like “consciousness” correspond to singular entities that we can study and describe.

This is how Minsky justifies the complexity of a model of mental activity that involves no fewer than six layers, or Ways to Think—instinctive reactions, learned reactions, deliberative thinking, reflective thinking, self-reflective thinking, and self-conscious emotions—where each layer reflects upon and adjusts the decision-making pathways of the layer immediately beneath it. Minsky likens this model to an elaboration of the Freudian axis that proceeds from id to ego to superego, with low-level drives on the bottom and high-level ideals on top. He is careful to emphasize that, were the mind a one-layer system, like a chaotic jumble of resources without any connective hierarchy, we simply wouldn’t be able to function as expediently as we do. We ignore details all the time, and direct our consciousness towards higher-level instructions and decisions.
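For readers who want to picture the hierarchy concretely, here is a minimal Python sketch of a six-layer stack in which each layer adjusts the response of the one beneath it. To be clear, nothing in the book is specified at this level of detail; the class and method names below are illustrative assumptions of mine, not Minsky’s terminology or design.

```python
# A toy sketch of a layered, reflective control stack in the spirit of
# Minsky's model. Every name here (Layer, respond, adjust) is my own
# illustrative invention, not anything drawn from the book itself.

class Layer:
    """One level of mental activity that supervises the level below it."""

    def __init__(self, name, below=None):
        self.name = name
        self.below = below  # the layer whose output this layer reflects upon

    def respond(self, situation):
        # The bottom layer reacts directly; every higher layer first asks
        # the layer beneath it, then adjusts that layer's response.
        if self.below is None:
            return f"{self.name}({situation})"
        return self.adjust(self.below.respond(situation))

    def adjust(self, response):
        # Stand-in for reflection: annotate (and, in a real model, revise)
        # the decision-making pathway of the layer immediately beneath.
        return f"{self.name}[{response}]"


# Stack the six levels from instinctive reactions up to self-conscious emotions.
mind = None
for level in ("instinctive", "learned", "deliberative",
              "reflective", "self-reflective", "self-conscious"):
    mind = Layer(level, below=mind)

print(mind.respond("loud noise"))
# self-conscious[self-reflective[reflective[deliberative[learned[instinctive(loud noise)]]]]]
```

The nested output makes the point of the recursion visible: the lowest layer’s reaction survives at the core, while each higher level leaves its adjustment wrapped around it.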

Minsky goes on to claim that our habit of high-level thought is responsible for the semantic ambiguity of “suitcase words” like “emotions”, “consciousness” or “the self”, which we commonly identify as monolithic sources for all sorts of concepts and operations, some of which are only distantly related. For Minsky, there is no such thing as the self: it is a convenient way for us to perceive ourselves so we don’t need to preoccupy ourselves with what goes on under the hood.

Of course, Minsky’s own models are themselves just convenient ways for us to perceive ourselves, albeit at a level of compartmentalized detail from which we can extract specific mental operations for the sake of isolated analysis. Smaller packages tell us more than big ones.

Nevertheless, the basis of this line of argument—that we think of ourselves monolithically using words like “consciousness”, but don’t have a vocabulary to reflect the structure of consciousness as a profound disunity—rests on a precarious foundation of linguistic determinism. The polysemous nature of words, particularly the words we have for high concepts, doesn’t imply that we pack all of the connotations together at once. As Minsky well knows, suitcase words don’t achieve specificity only upon disassembly; they often derive a very precise meaning from their context. The ambiguity of monolithic words correlates with, but certainly doesn’t cause, the ambiguity of monolithic thought.

What I do appreciate is Minsky’s subtle sideswipe at the discipline of philosophy, the practitioners of which commonly insist that systematic explanations of things like the mind don’t really answer their questions. I have long believed that philosophy continues to thrive as a field of questions that philosophers insist have not been answered, even in the face of more concerted efforts to the contrary. This is what we witness whenever someone like John Searle comes buzzing around the AI lab saying, yes, but that’s not really consciousness. Anything answerable, or suited for analytic discussion, immediately falls out of the discipline’s overflowing baggage claim of suitcase words. Minsky is quite correct to acknowledge that the philosophical question of what the mind actually is, or how it actually works, is a dead end of perpetual sophistry. We are much better served by trying to build models of what we do when we think, feel, perceive, or self-perceive, as we continue to refine better ways to understand ourselves.
