November 23, 2005

Trees & Rhizomes

The age-old debate between rationalism and empiricism is unresolved and ongoing. Contrary to the popular conception of philosophical clashes, the debate is actually about something and, perhaps even more surprisingly, concordance could come through attempting to find a solution to a very practical problem: the creation of strong AI.

In the past, the conflict was restricted to treatise bombardments in the lofty heights of philosophy. Now it is being decided down on the ground, less dramatically, by cognitive scientists and neuroscientists who patiently train and feed neural nets in cognitive science labs, and by neurologists and neuropsychologists who test the linguistic output of people who have suffered tragic damage to their brains.

Key texts online:
The Past Tense Debate (Pinker & Ullman; McClelland & Patterson 2002) - referred to as PTD
Words & Rules (Pinker 1998)
On Language & Connectionism (Pinker & Prince 1988)
Stanford Encyclopedia Entry: Connectionism

Other references:
Words & Rules, Pinker (1999) - WR
The Singularity Is Near, Kurzweil (2005) - TS

In Words and Rules, Pinker maps the battlefront of the modern conflict: in the rational camp sit Leibniz, Descartes, Hobbes, Humboldt and Chomsky; in the empirical camp sit Hume, Locke, Pavlov, Skinner and the connectionists David Rumelhart and James McClelland:

“The idea that intelligence arises from the manipulation of symbols by rules… When the symbols stand for words and the rules arrange them into phrases and sentences, we have grammar… When the symbols stand for concepts and the rules string them into chains of inference, we have logic, which became the basis for digital computers, the artificial intelligence systems that run on them, and many models of human cognition.” (WR 98)

“The mind connects things that are experienced together or that look alike… and generalizes to new objects according to their resemblance to known ones. Just as the rationalists were obsessed by combinatorial grammar, the associationists were obsessed by memorized words… John Locke pointed to the arbitrary connection between words and things as the quintessential example of how the mind forms associations by contiguity in time... Replace the ideas with ‘neurons’ and the associations with ‘connections’ and you get the connectionism of David Rumelhart and James McClelland.” (WR 99)

Interestingly, Pinker and Kurzweil occupy a middle ground between these two polarities.

Pinker: Words & Rules

Pinker is strongly allied to the innatist side of the fence, but has adapted and streamlined his model of language learning and language processing to accommodate evidence provided by the Rumelhart-McClelland connectionist model. Pinker’s ‘words and rules’ (WR) hypothesis is a compromise which has been adapted in response to data from empirical tests.

The Rumelhart and McClelland Parallel Distributed Processing (PDP) model demonstrated the extent to which a blank neural net can be trained on input and feedback, and then generalize when presented with fresh data:

“Rumelhart and McClelland trained their network on a list of 420 verbs presented 200 times, for a total of 84,000 trials. To everyone’s surprise, the model did quite well, computing most of the correct sound stretches for all 420 verbs. That meant that a single set of connection strengths was able to convert 'look' to 'looked', 'seem' to 'seemed', 'melt' to 'melted', 'hit' to 'hit', 'make' to 'made', 'sing' to 'sang' and even 'go' to 'went'. Then Rumelhart and McClelland challenged the network with 86 new verbs, which it had not been trained on… The model offered the correct past-tense form with –ed for about three quarters of the new regular verbs, and made reasonable overgeneralization errors such as 'catched' and 'digged' for most of the new irregulars.

Even more impressively, the model mimicked some of the tendencies of children as they acquire English. At one point in training it produced errors such as 'gived' for verbs that it had previously produced correctly. It also analogized new irregular verbs to families of similar-sounding old irregular verbs; for example it guessed 'cling-clung', 'sip-sept', 'slip-slept', 'bid-bid' and 'kid-kid'…” (WR 120-1)
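The flavour of this kind of learning can be gestured at in a few lines of Python. The sketch below is emphatically not the Rumelhart-McClelland network (no Wickelfeatures, no distributed output); it is just a toy associator in which the character trigrams of a stem are linked to a handful of inflection 'patterns', with the verb lists and patterns invented for the example:

from collections import defaultdict

# Toy pattern associator (NOT the Rumelhart-McClelland model): each character
# trigram of a verb stem is associatively linked to the inflection pattern it
# co-occurs with in training; a new stem is then inflected according to
# whichever pattern its trigrams are most strongly linked to.

PATTERNS = {
    "suffix-ed": lambda s: s + "ed",                # look -> looked
    "i->a":      lambda s: s.replace("i", "a", 1),  # sing -> sang
    "no-change": lambda s: s,                       # hit -> hit
}

TRAINING = [
    ("look", "suffix-ed"), ("walk", "suffix-ed"), ("kill", "suffix-ed"),
    ("seem", "suffix-ed"), ("melt", "suffix-ed"),
    ("sing", "i->a"), ("ring", "i->a"), ("drink", "i->a"),
    ("hit", "no-change"), ("cut", "no-change"),
]

def trigrams(stem):
    padded = "#" + stem + "#"
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

# Training: strengthen the connection between each trigram and its pattern.
weights = defaultdict(float)
for stem, pattern in TRAINING:
    for t in trigrams(stem):
        weights[(t, pattern)] += 1.0

def inflect(stem):
    best = max(PATTERNS, key=lambda p: sum(weights[(t, p)] for t in trigrams(stem)))
    return PATTERNS[best](stem)

# Generalization by resemblance: 'cook' and 'talk' share trigrams with trained
# regulars, while 'cling' rhymes with the sing/ring family. 'text' shares no
# trigrams with the training set, so the associator has nothing to go on and
# falls back on an arbitrary tie-break (here simply the first listed pattern).
for verb in ["cook", "talk", "cling", "text"]:
    print(verb, "->", inflect(verb))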

However, amongst other weaknesses, the PDP model is limited by the fact that all it does is associate sounds with sounds, which means it has great problems processing words with unfamiliar sounds (the network produced 'membled' as the past tense of 'mail' because it was not familiar with 'ail'), whereas humans quite happily apply the –ed ending to produce the past tense for new verbs, as long as they know the word concerned is a verb (e.g. there is no hesitation in turning 'text' into 'texted'). Also, having no mental symbols for morphological units such as prefix, verb stem or suffix, the PDP model is unable to apply recursive rules, such as “a stem can combine with a prefix to form a new stem”, so that 'out' can combine with 'strip' to produce 'outstrip'. Positing symbolic tree structures and innate grammatical machinery is a more plausible and economical way of accounting for regularity in language and for its acquisition:

“The phonemes are held in their correct order by a treelike scaffolding that embodies the morphological structure of the word (how it is built out of stems, prefixes and suffixes) and the phonological structure of its parts (how they are built out of chunks like onsets, rimes, vowel nuclei, consonants and vowels, and ultimately features). The similarity to other words such as strip, restrip, trip, rip and tip falls mechanically out of the fact that they have identical subtrees, such as an identical ‘stem’ or an identical ‘rime.’ And computing the regular past-tense form is nothing but attaching a suffix next to the symbol ‘verb stem’: 'outstripped'.” [there should be a nice Chomsky tree diagram here, but I can't paste it in.]

The WR theory is a “lexicalist compromise between the generative and connectionist extremes.” (PTD2)

“Regular verbs are computed by a rule that combines a symbol for a verb stem with a symbol for the suffix. Irregular verbs are pairs of words retrieved from the mental dictionary, a part of memory. Here is the twist: Memory is not a list of unrelated slots, like RAM in a computer, but is associative, a bit like the Rumelhart-McClelland pattern associator memory. Not only are words linked to words, but bits of words are linked to bits of words… The prediction is that regular and irregular inflection are psychologically, and ultimately, neurologically distinguishable.” (WR 131-2)
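A deliberately crude Python sketch of this division of labour (illustrative only: a plain lookup table stands in for the associative lexicon, which in the WR theory is far richer, and the prefix list, verb lists and spelling heuristic are invented for the example):

# Words-and-rules division of labour, minimally sketched: stored word pairs for
# irregulars, a default symbolic rule for everything else. A plain dict stands
# in for the associative lexicon described above.

IRREGULAR_MEMORY = {
    "go": "went", "sing": "sang", "hit": "hit",
    "make": "made", "keep": "kept", "take": "took",
}

PREFIXES = ("out", "re", "over")   # toy prefix list for the recursive stem rule
VOWELS = set("aeiou")

def add_ed(stem):
    # Default regular rule, with a crude spelling heuristic for doubling.
    if stem.endswith("e"):
        return stem + "d"                          # bake -> baked
    if (len(stem) >= 3 and stem[-1] not in VOWELS and stem[-1] not in "wxy"
            and stem[-2] in VOWELS and stem[-3] not in VOWELS):
        return stem + stem[-1] + "ed"              # strip -> stripped
    return stem + "ed"                             # text -> texted

def past_tense(verb):
    # 1. Memory: a stored irregular pair wins outright.
    if verb in IRREGULAR_MEMORY:
        return IRREGULAR_MEMORY[verb]
    # 2. Recursive morphology: a prefix plus a stem is itself a stem, and a
    #    prefixed irregular inherits the stored form of its embedded stem.
    for prefix in PREFIXES:
        if verb.startswith(prefix) and verb[len(prefix):] in IRREGULAR_MEMORY:
            return prefix + IRREGULAR_MEMORY[verb[len(prefix):]]
    # 3. Rule: the regular -ed suffix applies to any verb stem, familiar or not.
    return add_ed(verb)

for v in ["go", "overtake", "outstrip", "text", "wug"]:
    print(v, "->", past_tense(v))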

The WR hypothesis is now itself being subjected to rigorous testing. Possible evidence that the brain handles regular and irregular verbs in different areas and by different operations might come from studies of people who suffer from aphasia and anomia. The former can result from damage to the areas around the Sylvian fissure and Broca's area, and causes agrammatism, whereas anomia is “a difficulty in retrieving and recognizing words,” which results from damage to the posterior parts of the brain (WR 275-6). Studies of Alzheimer’s disease, Parkinson’s disease and the Specific Language Impairment caused by the FOXP2 mutant gene seem to lend support to the WR hypothesis, but McClelland and Patterson question the evidence. (PTD15)

What is clear from this is that pencil-and-paper analysis is now utterly insufficient on its own: the debate increasingly concerns the interpretation of data and critiques of data-gathering methods.

[for more on FOXP2: www.well.ox.ac.uk/~simon/SPCH1/SPCH1_project.shtml]

Kurzweil

For Kurzweil, neural nets are just one tool in the vast panoply of technologies aiding and abetting the creation of strong AI. The capacity of well-trained neural nets to learn and self-organize is one of the promises they hold out. Neural nets are distinctly rhizomatic in the way that they bootstrap from the bottom up.

“The key to a neural net… is that it must learn its subject matter. Like the mammalian brains on which it is loosely modeled, a neural net starts out ignorant. The neural net’s teacher – which may be a human, a computer program, or perhaps another, more mature neural net that has already learned its lessons – rewards the neural net when it generates the right output and punishes it when it does not. This feedback is in turn used by the student neural net to adjust the strengths of each interneuronal connection. Connections that were consistent with the right answer are made stronger. Those that advocated a wrong answer are weakened. Over time, the neural net organizes itself to provide the right answers without coaching. Experiments have shown that neural nets can learn their subject matter even with unreliable teachers. If the teacher is correct only 60 percent of the time, the student neural net will still learn its lessons.

A powerful, well-taught neural net can emulate a wide range of human pattern-recognition faculties. Systems using multilayer neural nets have shown impressive results in a wide variety of pattern-recognition tasks, including recognizing handwriting, human faces, fraud in commercial transactions such as credit-card charges, and many others. In my own experience in using neural nets in such contexts, the most challenging engineering task is not coding the nets but in providing automated lessons for them to learn their subject matter.” (TS 271)
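Kurzweil's point about unreliable teachers is easy to demonstrate at toy scale. The sketch below is not a model of any system he describes: a single linear unit is nudged towards whatever a fallible teacher says about randomly generated points, and the teacher is right only 60 percent of the time. Because the wrong nudges are random they largely cancel, while the right ones accumulate:

import random

random.seed(1)

def true_label(x):
    # The 'real' lesson: which side of the line x0 + x1 = 0 a point falls on.
    return 1 if x[0] + x[1] > 0 else -1

def noisy_teacher(x, reliability=0.6):
    y = true_label(x)
    return y if random.random() < reliability else -y

# Training: nudge each weight in the direction the teacher endorses.
w = [0.0, 0.0]
for _ in range(5000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = noisy_teacher(x)
    w[0] += y * x[0]
    w[1] += y * x[1]

# Evaluation against the *true* rule, which the learner never saw directly.
trials, correct = 2000, 0
for _ in range(trials):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    guess = 1 if w[0] * x[0] + w[1] * x[1] > 0 else -1
    correct += (guess == true_label(x))

print("teacher reliability: 60%")
print("agreement with the true rule: %.1f%%" % (100.0 * correct / trials))

Run it and the unit typically ends up agreeing with the true rule far more often than its 60-percent-reliable teacher does.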

The other promise is that of parallel processing:

“Neural nets are also naturally amenable to parallel processing, since that is how the brain works. The human brain does not have a central processor that simulates each neuron. Rather, we can consider each neuron and each interneuronal connection to be an individual slow processor. Extensive work is under way to develop specialized chips that implement neural-net architectures in parallel to provide substantially greater throughput.” (TS 270)

Kurzweil’s critique of Searle’s Chinese Room argument appeals to the variety of techniques which can be used in computing:

“A failure to see that computing processes are capable of being – just like the human brain – chaotic, unpredictable, messy, tentative, and emergent is behind much of the criticism of the prospect of intelligent machines that we hear from Searle and other essentially materialist philosophers. Inevitably Searle comes back to a criticism of ‘symbolic’ computing: that orderly sequential symbolic processes cannot recreate true thinking. I think that is correct (depending on what level we are modeling an intelligent process), but the manipulation of symbols (in the sense that Searle implies) is not the only way to build machines, or computers.
… Nonbiological entities can also use the emergent self-organizing paradigm, which is a trend which is well under way and one that will become even more important over the next several decades…
… The primary computing techniques that we have used in pattern-recognition systems do not use symbol manipulation but rather self-organizing methods… A machine that could really do what Searle describes in the Chinese Room argument would not merely be manipulating language symbols, because that approach doesn’t work. This is at the heart of the philosophical sleight of hand underlying the Chinese Room. The nature of computing is not limited to manipulating logic symbols. Something is going on in the human brain, and there is nothing that prevents these biological processes from being reverse engineered and replicated in nonbiological entities…
… Of course, neurotransmitter concentrations and other neural details have no meaning in and of themselves. The meaning and understanding that emerge in the human brain are exactly that: an emergent property of its complex patterns of activity. The same is true for machines. Although ‘shuffling symbols’ does not have meaning in and of itself, the emergent patterns have the same potential role in nonbiological systems as they do in biological systems such as the brain. Hans Moravec has written, ‘Searle is looking for understanding in the wrong places…[He] seemingly cannot accept that real meaning can exist in mere patterns.’” (TS460-4)

The most plausible and workable models for cognition and language are emerging through syntheses, which can be seen as part of a more far-ranging tendency towards consilience.

Whilst a certain degree of consensus is emerging, there are forks ahead and different directions are being taken. One camp is reverse engineering the human brain in order to better understand human psychology: the primary aim is to discover fundamental truths about ourselves. The camp which has put all its eggs in the strong AI basket is more concerned with what is useful than what is true: the fundamental aim is to overcome the limitations that keep us imprisoned in what we are and prevent us from becoming what we could become. Both trajectories lead inexorably to political implications, the surface of which has barely been scratched.

Reverse engineering will also inevitably lead to clashes over human nature. Evolutionary psychology stresses that the most complex and mysterious components of the human brain, the emotions, are adaptations which evolved over millions of years of gradual fine-tuning. In contrast, no sooner does Kurzweil find out about the deep interconnectedness of spindle cells, which are intimately involved with the emotions, than he notes how few they are in number and puts them on the list of things to be reverse engineered and simulated in the next couple of decades: “It will be difficult… to reverse engineer the exact methods of the spindle cells until we have better models of the many other regions to which they connect. However, it is remarkable how few neurons appear to be exclusively involved with these emotions… only about eighty thousand spindle cells dealing with high-level emotions.” (TS194)

Kurzweil’s glaring weakness, and the source of his irrepressible optimism, is that he grossly underestimates the sophistication and intransigence of evolutionary programming and strategies.

The debate is still unfolding, rather than raging, with a surprising degree of politeness (the Pinker vs. McClelland clash is extremely civilized). Tools and results change hands in the process of fine-tuning. As the brain is precision re-engineered there is increasing cross-feed from different, previously opposed or unrelated disciplines. At the moment there is no no-man's-land: the midpoint between rationalism and empiricism is actually a zone of constructive research and innovation.

Posted by sd at November 23, 2005 01:45 AM | TrackBack

 

 


On-topic:

have you checked out andy clark's stuff on this as well?

www.philosophy.ed.ac.uk/staff/clark/publications.html#language

"The human mind, I wanted to argue, is naturally
designed so as to co-opt a mounting cascade of extra-neural elements
as (quite literally) parts of extended and distributed cognitive
processes. Moreover (and hence the techno-futurism) this ancient
trick looks poised for some new and potent manifestations, fueled by
innovative work on human-machine interfaces, swarm intelligence,
and bio-technological union."

Posted by: hyperflow at November 23, 2005 03:50 PM

 

 

Thanks - investigating.

Posted by: sd at November 23, 2005 08:08 PM

 

 

"The robot that thinks like you... 05 November 2005

...The infant I am watching wander around its rather spartan playpen in the Neurosciences Institute (NSI) in La Jolla, California, is a more limited creature. It is a trashcan-shaped robot called Darwin VII, and it has just 20,000 brain cells. Despite this, it has managed to master the abilities of an 18-month-old baby - a pretty impressive feat for a machine.

Darwin VII is the fourth in a series of robots that Jeff Krichmar and his colleagues at NSI have created in a quest to better understand how our own brains work - the first three versions of Darwin did not have a real robotic body to control. Darwin VII allows Krichmar to record changes in hundreds of thousands of its brain's neural connections as it explores and learns, to test neuroscientists' theories of how real brains work. "This is something that you can't do in a real brain," Krichmar points out.

The key to Darwin's abilities is its brain. This is an amalgam of rat and ape brains, encoded in a computer program that controls its actions. Darwin tastes blocks by grabbing them with its metal jaws to see if they produce electricity. It likes the ones that do and dislikes the ones that don't. Within half an hour of being switched on it learned to find the tasty blocks.

If Krichmar and others like him succeed, robots like Darwin might one day be seen as the ancestors of something much bigger. Some researchers, and even the US Defense Advanced Research Projects Agency, are gambling that robots like Darwin will be the forebears of an entirely new approach to artificial intelligence (AI): building intelligent machines by copying the structures of living brains. Some groups are even designing microchips that could eventually be used to build anatomically realistic artificial silicon brains to replace the computers that power existing robots like Darwin.

The dream is that these new brains, embedded in robotic bodies of silicon and steel, will go to a level beyond today's artificial intelligence systems. By sensing their environments as they explore and learn, they will develop the ability to survive in the constantly changing real world of imperfect information that we navigate so effortlessly, but which computers have yet to master.

...These systems will arise, say the researchers, by emulating the brain's neurons and the way they are connected to each other. In animal brains neurons are linked to form huge reconfigurable networks that behave like filters, transferring, modifying or blocking signals that they receive. Though living brains have been studied for decades, we still don't know exactly how they achieve the amazing abilities of the human mind.

It all boils down to this: existing artificial neural networks, such as those used in many computer systems today, are totally inadequate for creating anything resembling animal, let alone human, intelligence. To do that, you have to be as faithful as possible to the real thing. And for the first time that's what several groups around the world are trying to do: emulate both the structure and the function of living brains in detail.

In all neural networks, both artificial and real, structure and function are intimately linked. The pattern of connections between neurons determines how well the network performs a particular task. If you train an artificial neural network to recognise abnormal cells in smear tests, for example, it adapts by adjusting connections between individual neurons until external feedback indicates to the network it is doing the job well. But unlike the human brain, these systems are optimised to perform a single task. "It is a small part of what might be happening in the brain, a tiny portion of an intelligent action," says Igor Aleksander of Imperial College London.

To get the adaptive, flexible behaviour you see in animals, you need to imitate the design of a whole brain, the body it lives in and the drives that motivate it, Krichmar says. "A brain-based device provides them all; a traditional neural net simply doesn't."

Neuroscientists have identified hundreds of different neural areas within mammalian brains. In effect each is a specialised neural network unto itself. It is only when you recreate these areas and start interconnecting the different modules that complex behaviour emerges that no single part of the system could achieve on its own, Aleksander says.

Darwin is a work in progress. The biological data and computing power necessary to build such a machine are only now becoming available. Huge gaps remain in our understanding of the human brain, so a team led by Olaf Sporns, a neuroscientist at Indiana University, Bloomington, has proposed a project inspired by the Human Genome Project to map the neural connections throughout the human brain (PLoS Computational Biology, vol 1, p 42).

But it is going to take more than just simulating neural networks in software to make significant progress towards genuine new forms of artificial intelligence. Brain-based systems run very slowly on computers because brains and computers work in fundamentally different ways. Conventional computers funnel their calculations through one or a few processors at best, whereas mammalian brains distribute calculations across billions of neurons that operate in parallel. To get a significant improvement in speed, and therefore capability, new hardware will be needed that can imitate the way brains compute.

To this end, some researchers have begun developing silicon devices that imitate the behaviour of real neurons. Their processing units behave like neurons in that they respond to inputs of different value with a range of output values, rather than just switching on or off as in conventional computers. The chips can even change the interconnection between processors in real time, something that is impossible with existing microchips..."

www.newscientist.com/channel/info-tech/mg18825241.700.html;jsessionid=DBLAEAJFIFNH

Krichmar: www.nsi.edu/nomad/pubs.html
Olaf Sporns: www.indiana.edu/~cortex/publications.html

Posted by: sd at November 23, 2005 11:19 PM

 

 

Don't want to distract back to an old topic, but why is Hobbes in the rationalist camp? Isn't he hyper-empirical in orientation? 'all reasoning is reckoning ...' etc.?

Posted by: Nick at November 24, 2005 12:40 AM

 

 

"Hobbes uses 'reckoning' in the original sense of counting, calculating, or computing. For example, suppose the definition of 'man' is 'rational animal.' Then if we are told that something is 'rational' and an 'animal' (names of parts) we could deduce it is a 'man' (name of whole), and if we are told that spmething is a 'man' (name of whole) and that it is 'rational' (name of one part) we can deduce that it is a rational 'animal' (name of the other part). These steps could be laid out as mechanical instructions to recognize and copy words, a kind of symbol, and therefore could be 'reckoned' or computed by someone who has no idea what the concepts 'rational' and 'animal' even mean. If the symbols are patterns in the brain rather than words on a page, and the patterns trigger other patterns because of the way the brain is wired, then we have a theory of thinking.

Among the people influenced by Hobbes was Leibniz... Leibniz took Hobbes literally when he said that reason is nothing but reckoning. He devoted much of his life to inventing a scheme that would perfect the computations underlying thought, turning arguments into calculations and making fallacies as obvious as errors in arithmetic...

The idea that intelligence arises from the manipulation of symbols by rules is a major doctrine of the school of thought called rationalism..." (WR97-8)

Posted by: sd at November 24, 2005 08:36 AM

 

 

so even if Hobbes isn't strictly in the camp, he helped pitch the tents.

Posted by: sd at November 24, 2005 09:02 AM

 

 

think this 'connectome' project is going to be very important:

www.indiana.edu/~cortex/connectome_plos.pdf

Posted by: sd at November 24, 2005 09:04 AM

 

 

this might be straying off topic a little, but wondering if Whitehead could be an interesting influence here. I know he's been bashed for process theology etc., but there's a lot of great stuff in there.
In particular, he almost sidesteps the question of intelligence by suggesting that it always has a goal, a neatness, an overcoding, and what is needed is to look underneath. The underground of intelligence, then, is replaced by the prehensions (non-subjective perception) of the extended, dynamic bodymind, in waves of intensity. Poss. fairly similar to Leibniz's microperception.

Posted by: hyperflow at November 24, 2005 09:52 AM

 

 

HOBBES
LEVIATHAN
CHAPTER V
OF REASON AND SCIENCE
WHEN man reasoneth, he does nothing else but conceive a sum total, from addition of parcels; or conceive a remainder, from subtraction of one sum from another: which, if it be done by words, is conceiving of the consequence of the names of all the parts, to the name of the whole; or from the names of the whole and one part, to the name of the other part. And though in some things, as in numbers, besides adding and subtracting, men name other operations, as multiplying and dividing; yet they are the same: for multiplication is but adding together of things equal; and division, but subtracting of one thing, as often as we can. These operations are not incident to numbers only, but to all manner of things that can be added together, and taken one out of another. For as arithmeticians teach to add and subtract in numbers, so the geometricians teach the same in lines, figures (solid and superficial), angles, proportions, times, degrees of swiftness, force, power, and the like; the logicians teach the same in consequences of words, adding together two names to make an affirmation, and two affirmations to make a syllogism, and many syllogisms to make a demonstration; and from the sum, or conclusion of a syllogism, they subtract one proposition to find the other. Writers of politics add together pactions to find men's duties; and lawyers, laws and facts to find what is right and wrong in the actions of private men. In sum, in what matter soever there is place for addition and subtraction, there also is place for reason; and where these have no place, there reason has nothing at all to do.

Out of all which we may define (that is to say determine) what that is which is meant by this word reason when we reckon it amongst the faculties of the mind. For reason, in this sense, is nothing but reckoning (that is, adding and subtracting) of the consequences of general names agreed upon for the marking and signifying of our thoughts; I say marking them, when we reckon by ourselves; and signifying, when we demonstrate or approve our reckonings to other men.

[Reckoning = elementary arithmetic operating on the material of sensation, from which all higher functions can be derived.]

Posted by: nick at November 24, 2005 10:07 AM

 

 

hyperflow - could be interesting, but my question is how useful is it to go back to philosophers or logicians? Dead philosophers are mainly useful for mapping the past, but I don't think they are particularly useful for the production of the future. I think we need to think about the current research (e.g. Sporns' connectome) and develop fresh concepts.

Posted by: sd at November 24, 2005 10:14 AM

 

 

en.wikipedia.org/wiki/Dead_reckoning
Dead reckoning is the process of estimating a global position of a vehicle by advancing a known position using course, speed, time and distance to be traveled. That is, in other words, figuring out where you momentarily are or where you will be at a certain time if you hold the speed, time and course you plan to travel.
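The computation itself is elementary reckoning. A rough flat-earth sketch in Python (invented numbers; no account of currents, wind or a curved earth):

import math

def dead_reckon(x, y, course_deg, speed, hours):
    # Course is measured clockwise from north; speed in distance units per hour.
    distance = speed * hours
    rad = math.radians(course_deg)
    return (x + distance * math.sin(rad),   # east-west displacement
            y + distance * math.cos(rad))   # north-south displacement

# Starting at the origin, heading 045 at 10 knots for 3 hours:
print(dead_reckon(0.0, 0.0, 45.0, 10.0, 3.0))   # roughly (21.2, 21.2)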

HOBBES
LEVIATHAN
CHAPTER IV
OF SPEECH
So that without words there is no possibility of reckoning of numbers; much less of magnitudes, of swiftness, of force, and other things, the reckonings whereof are necessary to the being or well-being of mankind.

[The robot that thinks like you...] The dream is that these new brains, embedded in robotic bodies of silicon and steel, will go to a level beyond today's artificial intelligence systems. By sensing their environments as they explore and learn, they will develop the ability to survive in the constantly changing real world of imperfect information that we navigate so effortlessly, but which computers have yet to master. &etc

Posted by: northanger at November 24, 2005 11:12 AM

 

 

sd - ok, i understand what you're saying, but perhaps the extended bodymind has to be understood along the past-future continuum, where evolution is nonlinear, and the future can become past or vice-versa. we're not evolving within a linear (or exponential) growth chart.

also, thinking about concepts as productions, what is being produced, created, affected, what sensations or intensities are affected? - in work on nanotech, it doesn't seem enough to pit connectionism vs. integration... at every turn it seems that a humanist overcoding takes place, where, as you say, age-old positions are transplanted into hyperfuturist thought-colonies. isn't it possible that whitehead/spinoza etc. can be reinvigorated at a nexus with these new intersections of thought?

Posted by: hyperflow at November 24, 2005 11:28 AM

 

 

hyperflow - "isn't it possible that whitehead/spinoza etc. can be reinvigorated at a nexus with these new intersections of thought?"

Of course it's possible - you have to be very careful tho.

Much of the history of philosophy is a history of speculation and positing which filled a void caused by an almost total absence of data. Leibniz, Spinoza and Whitehead simply lacked information - the motivation behind Sporns' connectome project is the fact that there is 'a severe lack of information' concerning brain networks. Maybe Leibniz and Spinoza have conceptual tools that can be applied to the contemporary understanding of the mind and brain, but I would tend to mistrust such an application: it's obvious that Leibniz and Spinoza would have developed completely different philosophies if they had had the kind of data available to us. Any philosophy that describes the mind and its operations without reference to the brain is practically useless. IMHO, philosophy should evolve in the same way that medicine does: you wouldn't trust the contents of a seventeenth-century medicine cabinet, so extreme caution needs to be exercised when peering into a seventeenth-century philosophical toolkit. Of course, the level of abstraction is decisive here - logic and mathematics have a high degree of fitness. The mind, however, is not an abstract entity, and neither is thought.

"age-old positions are transplanted into hyperfuturist thought-colonies" - good point. But what if the age-old positions are mutating into a new, hybrid consensus that actually works?

Posted by: sd at November 24, 2005 01:13 PM

 

 

"Any philosphy that describes the mind and its operations without reference to the brain is practically useless. "

Yes, I think you're right, and I understand where you're coming from here. The connectome project is fascinating, and the idea of concepts which work is essential to the idea of both philosophy and science as being pragmatic.

Just wondered if you'd read / what you think of Antonio Damasio, particularly his book 'Looking for Spinoza'?

Posted by: hyperflow at November 24, 2005 02:47 PM

 

 

Antonio Damasio certainly seems worth giving time to. This is sensible: "...what we really want to understand, the relation between brain systems and complex cognition and behavior, can only be explained satisfactorily by a comprehensive blend of theories and facts related to all the levels of organization of the nervous system, from molecules, and cells and circuits, to large-scale systems and physical and social environments. For almost any problem that is worth one's interest, theory and evidence from all of these levels are, in one way or another, relevant to the understanding of physiology or pathology. Since none of us can possibly practice or dominate knowledge across all of those levels, it follows that one must practice one or two very well, and be very humble about considering the rest, that is, evidence from those other levels that you do not practice. In other words, beware of explanations that rely on data from one single level, whatever the level may be."
hcs.harvard.edu/~husn/BRAIN/vol8-spring2001/damasio.htm

Divisions of labour and teamworking as the way to cope with data overload?

Posted by: sd at November 25, 2005 01:30 AM

 

 

Mission to build a simulated brain begins

"An effort to create the first computer simulation of the entire human brain, right down to the molecular level...

...It will be the first time humans will be able to observe the electrical code our brains use to represent the world, and to do so in real time, says Henry Markram, director of the Brain and Mind Institute at the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland.

Until now this sort of undertaking would not be possible because the processing power and the scientific knowledge of how the brain is wired simply was not there, says Charles Peck, IBM’s lead researcher on the project.

“But there has been a convergence of the biological data and the computational resources,” he says. Efforts to map the brain’s circuits and the development of the Blue Gene supercomputer, which has a peak processing power of at least 22.8 teraflops, now make this possible...."

1/6/2005 New Scientist
www.newscientist.com/article.ns?id=dn7470

bluebrainproject.epfl.ch/

Posted by: sd at November 25, 2005 01:40 AM

 

 

"An effort to create the first computer simulation of the entire human brain, right down to the molecular level..." - seems like the Cyberpunk scenarios are still on track.
(Probably recommended this one before, but Greg Egan's 'Permutation City' starts with this topic (computer-simulation of a human brain) and deals with it in an especially fascinating way.)

More generally, this focus on the boundary between 'top-down' (arborescent) AI system and 'bottom-up' (rhizomic) network effects is definitely important. There's probably too much emphasis on polemical defences of 'pure' approaches in much of the intellectual background to these issues and insufficient awareness of the power of impurity. Boundary zones are typically hyper-productive (one of the few things pomo-oriented theorists may have got right).

Posted by: nick at November 25, 2005 05:12 AM

 

 

what are boundary zones?

Posted by: northanger at November 25, 2005 07:45 AM

 

 

northanger - maybe 'boundary zones' is too general. IMHO special attention is merited by the particular regions of hybridity where a relatively tightly organized regime is partially melted against radically decapitated or disorganized multiplicities (as in the example driving this thread, where formalized systems abut decoded populations). Structure vs chaos is less fertile than marginal destructuration at the edge of chaos.

Posted by: nick at November 25, 2005 08:40 AM

 

 

nick - you knock me right out of the water with that response. it's as clear as a ... fanged noumenon.

Posted by: northanger at November 25, 2005 09:28 AM

 

 

The Generative/Rationalist Extreme

In 'The Sound Pattern of English', Chomsky and Halle (1968) posited that the past tenses of both regular and irregular verbs are generated by the application of rules.

Chomsky and Halle are on relatively safe ground with regular verbs because these verbs share patterns easily accounted for by a couple of rules. They are inflected with the suffix 'ed' or 'd', and they obey unbreakable phonological rules which generate /Id/ (e.g. wanted), /t/ (e.g. stopped) or /d/ (e.g. killed). New verbs entering the language fall into one of these three groups according to the phonological profile of their stem.

Rather than storing the past-tense form of every regular verb in memory, the brain merely has to apply one grammatical rule and one phonological rule to produce the desired form.
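In code the regular system amounts to almost nothing. A simplified sketch (the stem-final phoneme is supplied by hand rather than derived from a real phonological representation, and the set of voiceless consonants is only approximate):

# One phonological rule covers every regular verb: the choice between the
# allomorphs /Id/, /t/ and /d/ depends only on the stem-final phoneme.

VOICELESS = {"p", "t", "k", "f", "s", "sh", "ch", "th"}   # approximate voiceless set

def past_tense_allomorph(final_phoneme):
    if final_phoneme in {"t", "d"}:
        return "Id"          # wanted, needed
    if final_phoneme in VOICELESS:
        return "t"           # stopped, kissed
    return "d"               # killed, played

for verb, final in [("want", "t"), ("stop", "p"), ("kill", "l"), ("text", "t")]:
    print(verb, "->", "/" + past_tense_allomorph(final) + "/")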

The theorists used the shared patterns of irregular verbs to posit further generative rules. Irregular verbs are not randomly irregular: they share sounds both with their stems and with other verbs (e.g. 25 of the 164 irregular verbs share the i-a-u pattern found in sing-sang-sung, though there are variations in this group, as in sit-sat-sat). Chomsky and Halle used the tools of generative phonology to account for all the patterns found in the 164 irregular verbs with just a handful of phonological rules: e.g. shorten long vowels when they appear before a consonant cluster, as in keep-kept.

[N.B. Pinker and Prince list 181 exceptions to regularity.]

“Verbs sit on a ‘continuum of productivity and generality that extends from affixation of the –ed suffix in decide-decided to total suppletion in go-went,’ with families like sing-sang, ring-rang, and bind-bound, wind-wound in between. At one end of the continuum are the regular verbs, which are handled by a general rule that says nothing about the words it can apply to. At the other end of the continuum are suppletive verbs such as 'go' and 'went', which are simply listed as pairs. In between are the other irregulars, which are handled by a smaller set of rules, each tagged to apply to certain verbs.” (WR 103)

However, Chomsky and Halle enter shakier ground when it comes to irregular verbs. In their schema, phonological rules are central to the generation of past forms, but pronunciation changes: in the fifteenth century, the Great Vowel Shift scrambled the long vowels in English: “Before the shift, keep had been pronounced something like cape, hide like heed, boot like boat. After the shift, the English spelling of the long vowels no longer made much sense, nor did the pairings of ‘short’ and ‘long’ vowels in siblings like keep and kept.” (WR 73)

If the 'ee' in 'keep' is not a drawn-out version of the 'e' in 'kept', then the rule that Chomsky and Halle derived from generative phonology does not apply to generate 'kept'. Being fully aware of this, Chomsky and Halle posited that each word has a deep structure that is unpronounceable or not directly pronounceable. So the fifteenth-century pronunciation of 'keep' as 'cape' is inferred by the mind every time it hears 'keep'.

Pinker points out that this is a highly unrealistic and uneconomical model from the point of view of psychology and language acquisition: “Children don’t hear underlying forms, and they are not provided with lessons about the rules that turn them into audible surface forms. They hear only the surface forms. If the rules and underlying forms are to play some role in mental life, children must infer the cascade of rules that generated the surface form, run it in reverse, and extract the underlying form. And the suggestion that English-speaking children hear 'run' and infer 'rin' or hear 'fight' and infer the German-sounding fēcht is, frankly, beyond belief.

First, why would the child bother if the rules are there only to generate the surface form, and the child already had the surface form?... And even if the child wanted to ferret out rules and underlying forms, how could they ever find the right ones if the crucial clues - the ones linguists themselves use to discover the rules - are found in pairs of words the children will learn only in adulthood if ever, such as 'serene' and 'serenity', 'manager' and 'managerial', 'kinesis' and 'kinetic'?” (WR 112)

Posted by: sd at November 26, 2005 04:28 AM

 

 

Exploring the moral maze, 26 November 2005
Dan Jones, New Scientist.

"A TROLLEY train comes hurtling down the line, out of control. It is heading towards five people who are stuck on the track. If you do nothing they face certain death. But you have a choice: with the flick of a switch, you can divert the trolley down another line - a line on which only one person is stuck. What do you do? Perhaps, like most people, you believe that it is right to minimise the carnage, so you do the rational thing and flick that switch.

But what if the situation was slightly different? This time you are standing on a footbridge overlooking the track. The trolley is coming. The five people are still stuck, but there's no switch, no alternative route. All you've got is a hefty guy standing in front of you. If you push him onto the line, his bulk will be enough to stop the runaway trolley. You could sacrifice his life to save the others - one for five, the same as before. What do you do now? Suddenly the dilemma is transformed. If you respond the way most people do, you won't push the hapless fellow to his fate. The thought of actively killing someone, even for the greater good, just feels all wrong.

Two logically equivalent situations, yet two different outcomes. What is going on? For decades, this thought experiment has confounded philosophers and psychologists. They have long been split into two camps: one arguing that moral judgments arise from rational thought, the other that the roots of morality are emotional. But the trolley-train dilemma just doesn't fit this black-or-white way of thinking. Now, as the subject of morality moves from the philosopher's armchair into the lab, the error of this dichotomy is becoming clear. Researchers looking at the psychological basis of morality are finding that reason and emotion both play a part.

...Joshua Greene, a philosopher and cognitive scientist from Princeton University, and his colleagues are using brain-imaging techniques to get a handle on what goes on in the brain when we make moral choices. In particular, they have been looking at the trolley-train dilemma to see what the underlying difference in brain activity is when we decide to flick the switch compared with pushing the man. With the tools of modern brain imaging, Greene and co are beginning to provide an answer where philosophers have floundered.
Time to decide

Their functional magnetic resonance imaging studies suggest that the different situations elicit different brain responses. Given the choice to flick a switch, areas towards the front of the brain, associated with "executive" decision-making functions, become active, much as they do in any cost-benefit analysis. By contrast, when deciding whether or not to push a man to his death there appears to be a lot of activity in brain areas associated with rapid emotional responses. Throwing someone to their death is the sort of up-close-and-personal moral violation that the brain could well have evolved tools to deal with, explains Greene. By contrast, novel, abstract problems such as flicking a switch need a more logical analysis.

As well as using different brain areas in the footbridge scenario, people also take longer to make a decision - and longer still if they decide to push the man. There is evidence of an internal conflict as they consider taking a morally unpalatable action to promote the greater good. This shows up as increased activity in the anterior cingulate cortex, an area of the brain known to be activated in cognitive conflict. Following this, areas associated with cognitive control and the suppression of emotional responses also light up - with activity particularly marked in people who choose to push.

Greene believes this activity reflects the cognitive effort required to overcome the emotional aversion to harming others. He is currently working on variations of the trolley-train thought experiment to incorporate other moral issues, such as the role that promising not to harm a given individual might have in influencing decisions, and how this affects the underlying brain activity."

www.newscientist.com/channel/being-human/mg18825271.700;jsessionid=DBLAEAJFIFNH

Posted by: sd at November 26, 2005 04:35 AM

 

 

Emerging distinctions?:

1. applications and faculties programmed into the human brain as basic components of the operating system. For example: perception (interconnected data processing systems); the language faculty (locked into perception for input, recursion as the abstract potential, the input triggering a specific profile for innate, minimal grammatical and lexical machinery to apply to, locked into physiological adaptations for phonological output); instinctive cognition (rapid, unthinking assessment leading to swift response, default strategies tried and tested by ancestors and which selection pressure has made innate); emotional perceptions and responses programmed by evolutionarily stable strategies (e.g. a sense of fairness, incest taboo, disgust, fear)...

2. emergent networks which are trained and shaped by input, and which can be re-weighted: the grammar and lexis of a specific language; high-level cognition involving rational assessment, deliberate cunning, complex risk assessment, long-term planning etc. - the network develops through reflection on the individual's past experience and observations; culturally programmed emotional perceptions and responses (e.g. guilt trained in by religion) which manipulate or even override innate emotional equipment (e.g. altruism which extends beyond looking after those who share genes, wasteful devotion which is counterproductive from the point of view of the genes).

Genes 'decide' when the networks come online. The timing and calibration of language acquisition, and the interconnected developments and processes that make it possible, provide one of the most complex and baffling assemblages of phenomena we can observe.

Food for thought: is sentience a network or an outcome of networks? Is the religious/spiritual sense an innate brain component? Research in neurotheology might shed light on this one day.

Posted by: sd at November 26, 2005 05:55 AM

 

 

sd - have you seen this yet:
www.theatlantic.com/doc/prem/200512/god-accident
?
Connects back to last thread, but key to God stuff ...

Posted by: Nick at November 26, 2005 09:51 AM

 

 

saw it but can't access it - don't have a subscription to The Atlantic (considering getting one). If you have one maybe you could a) tell me if it's worth it and/or b) post excerpts from the article... ;)

Posted by: sd at November 26, 2005 10:00 AM

 

 

sd - that will teach me to try and make a coherent post while trying to bathe a howling infant.
There's been a lot of discussion of this piece - I'm relying on Kling's comments for orientation. Sounds more 'philosophical' than 'neurotheological' - but also actually more plausible and certainly more elegant. Misapplication of schemas evolved to process social relations to wider domains - i.e. attempt to 'socially-process' nature - leads into 'religious' errors: animism, spiritism, theism. A little simplistic no doubt, but also hard to imagine it's not broadly correct.

Posted by: Nick at November 26, 2005 10:39 AM

 

 

nick is referring to this, i think:

Why People Hate Economics
www.techcentralstation.com/112105A.html


Prometheus 6 everting @ God's Politics
www.alternet.org/wiretap/27745/?comments=view&cID=55473&pID=55298
And I do understand your point, that you should be able to live as you see fit, without imposed religious ritual. Now understand mine. When you live in the water, you're going to get wet. When you live in a world where +90% of the people profess one religion or another, you're going to see it in the shape of the society, the nature of the public rituals, the whole nine yards. That's not opinion...that's a physical fact. I can even give you a reference that you can incorporate in your arguments; it's in The Atlantic Monthly, the December issue. The intro is online for non-subscribers.

www.theatlantic.com/doc/prem/200512/god-accident

However you explain it, an intelligent progressive sees it's easier to support progressive ideas in the existing religious community than to change human nature.

(shh, don't tell)
p209.ezboard.com/finformedcitizenfrm30.showMessage?topicID=149.topic

Posted by: northanger at November 26, 2005 11:45 AM

 

 

Well here it is, in its entirety:

The Atlantic Monthly, December 2005
Is God an Accident?

by Paul Bloom


Despite the vast number of religions, nearly everyone in the world believes in the same things: the existence of a soul, an afterlife, miracles, and the divine creation of the universe. Recently psychologists doing research on the minds of infants have discovered two related facts that may account for this phenomenon. One: human beings come into the world with a predisposition to believe in supernatural phenomena. And two: this predisposition is an incidental by-product of cognitive functioning gone awry. Which leads to the question ...

I. God Is Not Dead

When I was a teenager my rabbi believed that the Lubavitcher Rebbe, who was living in Crown Heights, Brooklyn, was the Messiah, and that the world was soon to end. He believed that the earth was a few thousand years old, and that the fossil record was a consequence of the Great Flood. He could describe the afterlife, and was able to answer adolescent questions about the fate of Hitler's soul.

My rabbi was no crackpot; he was an intelligent and amiable man, a teacher and a scholar. But he held views that struck me as strange, even disturbing. Like many secular people, I am comfortable with religion as a source of spirituality and transcendence, tolerance and love, charity and good works. Who can object to the faith of Martin Luther King Jr. or the Dalai Lama—at least as long as that faith grounds moral positions one already accepts? I am uncomfortable, however, with religion when it makes claims about the natural world, let alone a world beyond nature. It is easy for those of us who reject supernatural beliefs to agree with Stephen Jay Gould that the best way to accord dignity and respect to both science and religion is to recognize that they apply to "non-overlapping magisteria": science gets the realm of facts, religion the realm of values.

For better or worse, though, religion is much more than a set of ethical principles or a vague sense of transcendence. The anthropologist Edward Tylor got it right in 1871, when he noted that the "minimum definition of religion" is a belief in spiritual beings, in the supernatural. My rabbi's specific claims were a minority view in the culture in which I was raised, but those sorts of views—about the creation of the universe, the end of the world, the fates of souls—define religion as billions of people understand and practice it.

The United States is a poster child for supernatural belief. Just about everyone in this country—96 percent in one poll—believes in God. Well over half of Americans believe in miracles, the devil, and angels. Most believe in an afterlife—and not just in the mushy sense that we will live on in the memories of other people, or in our good deeds; when asked for details, most Americans say they believe that after death they will actually reunite with relatives and get to meet God. Woody Allen once said, "I don't want to achieve immortality through my work. I want to achieve it through not dying." Most Americans have precisely this expectation.

But America is an anomaly, isn't it? These statistics are sometimes taken as yet another indication of how much this country differs from, for instance, France and Germany, where secularism holds greater sway. Americans are fundamentalists, the claim goes, isolated from the intellectual progress made by the rest of the world.

There are two things wrong with this conclusion. First, even if a gap between America and Europe exists, it is not the United States that is idiosyncratic. After all, the rest of the world—Asia, Africa, the Middle East—is not exactly filled with hard-core atheists. If one is to talk about exceptionalism, it applies to Europe, not the United States.

Second, the religious divide between Americans and Europeans may be smaller than we think. The sociologists Rodney Stark, of Baylor University, and Roger Finke, of Pennsylvania State University, write that the big difference has to do with church attendance, which really is much lower in Europe. (Building on the work of the Chicago-based sociologist and priest Andrew Greeley, they argue that this is because the United States has a rigorously free religious market, in which churches actively vie for parishioners and constantly improve their product, whereas European churches are often under state control and, like many government monopolies, have become inefficient.) Most polls from European countries show that a majority of their people are believers. Consider Iceland. To judge by rates of churchgoing, Iceland is the most secular country on earth, with a pathetic two percent weekly attendance. But four out of five Icelanders say that they pray, and the same proportion believe in life after death.

In the United States some liberal scholars posit a different sort of exceptionalism, arguing that belief in the supernatural is found mostly in Christian conservatives—those infamously described by the Washington Post reporter Michael Weisskopf in 1993 as "largely poor, uneducated, and easy to command." Many people saw the 2004 presidential election as pitting Americans who are religious against those who are not.

An article by Steven Waldman in the online magazine Slate provides some perspective on the divide:

"As you may already know, one of America's two political parties is extremely religious. Sixty-one percent of this party's voters say they pray daily or more often. An astounding 92 percent of them believe in life after death. And there's a hard-core subgroup in this party of super-religious Christian zealots. Very conservative on gay marriage, half of the members of this subgroup believe Bush uses too little religious rhetoric, and 51 percent of them believe God gave Israel to the Jews and that its existence fulfills the prophecy about the second coming of Jesus."

The group that Waldman is talking about is Democrats; the hard-core subgroup is African-American Democrats.

Finally, consider scientists. They are less likely than non-scientists to be religious—but not by a huge amount. A 1996 poll asked scientists whether they believed in God, and the pollsters set the bar high—no mealy-mouthed evasions such as "I believe in the totality of all that exists" or "in what is beautiful and unknown"; rather, they insisted on a real biblical God, one believers could pray to and actually get an answer from. About 40 percent of scientists said yes to a belief in this kind of God—about the same percentage found in a similar poll in 1916. Only when we look at the most elite scientists—members of the National Academy of Sciences—do we find a strong majority of atheists and agnostics.

These facts are an embarrassment for those who see supernatural beliefs as a cultural anachronism, soon to be eroded by scientific discoveries and the spread of cosmopolitan values. They require a new theory of why we are religious—one that draws on research in evolutionary biology, cognitive neuroscience, and developmental psychology.

II. Opiates and Fraternities

One traditional approach to the origin of religious belief begins with the observation that it is difficult to be a person. There is evil all around; everyone we love will die; and soon we ourselves will die—either slowly and probably unpleasantly or quickly and probably unpleasantly. For all but a pampered and lucky few life really is nasty, brutish, and short. And if our lives have some greater meaning, it is hardly obvious.

So perhaps, as Marx suggested, we have adopted religion as an opiate, to soothe the pain of existence. As the philosopher Susanne K. Langer has put it, man "cannot deal with Chaos"; supernatural beliefs solve the problem of this chaos by providing meaning. We are not mere things; we are lovingly crafted by God, and serve his purposes. Religion tells us that this is a just world, in which the good will be rewarded and the evil punished. Most of all, it addresses our fear of death. Freud summed it all up by describing a "three-fold task" for religious beliefs: "they must exorcise the terrors of nature, they must reconcile men to the cruelty of Fate, particularly as it is shown in death, and they must compensate them for the sufferings and privations which a civilized life in common has imposed on them."

Religions can sometimes do all these things, and it would be unrealistic to deny that this partly explains their existence. Indeed, sometimes theologians use the foregoing arguments to make a case for why we should believe: if one wishes for purpose, meaning, and eternal life, there is nowhere to go but toward God.

One problem with this view is that, as the cognitive scientist Steven Pinker reminds us, we don't typically get solace from propositions that we don't already believe to be true. Hungry people don't cheer themselves up by believing that they just had a large meal. Heaven is a reassuring notion only insofar as people believe such a place exists; it is this belief that an adequate theory of religion has to explain in the first place.

Also, the religion-as-opiate theory fits best with the monotheistic religions most familiar to us. But what about those people (many of the religious people in the world) who do not believe in an all-wise and just God? Every society believes in spiritual beings, but they are often stupid or malevolent. Many religions simply don't deal with metaphysical or teleological questions; gods and ancestor spirits are called upon only to help cope with such mundane problems as how to prepare food and what to do with a corpse—not to elucidate the Meaning of It All. As for the reassurance of heaven, justice, or salvation, again, it exists in some religions but by no means all. (In fact, even those religions we are most familiar with are not always reassuring. I know some older Christians who were made miserable as children by worries about eternal damnation; the prospect of oblivion would have been far preferable.) So the opiate theory is ultimately an unsatisfying explanation for the existence of religion.

The major alternative theory is social: religion brings people together, giving them an edge over those who lack this social glue. Sometimes this argument is presented in cultural terms, and sometimes it is seen from an evolutionary perspective: survival of the fittest working at the level not of the gene or the individual but of the social group. In either case the claim is that religion thrives because groups that have it outgrow and outlast those that do not.

In this conception religion is a fraternity, and the analogy runs deep. Just as fraternities used to paddle freshmen on the rear end to instill loyalty and commitment, religions have painful initiation rites—for example, snipping off part of the penis. Also, certain puzzling features of many religions, such as dietary restrictions and distinctive dress, make perfect sense once they are viewed as tools to ensure group solidarity.

The fraternity theory also explains why religions are so harsh toward those who do not share the faith, reserving particular ire for apostates. This is clear in the Old Testament, in which "a jealous God" issues commands such as:

"Should your brother, your mother's son, or your son or your daughter or the wife of your bosom or your companion who is like your own self incite you in secret, saying 'Let us go and worship other gods' ... you shall surely kill him. Your hand shall be against him first to put him to death and the hand of all the people last. And you shall stone him and he shall die, for he sought to thrust you away from the LORD your God who brought you out of the land of Egypt, from the house of slaves." —Deuteronomy 13:7-11

This theory explains almost everything about religion—except the religious part. It is clear that rituals and sacrifices can bring people together, and it may well be that a group that does such things has an advantage over one that does not. But it is not clear why a religion has to be involved. Why are gods, souls, an afterlife, miracles, divine creation of the universe, and so on brought in? The theory doesn't explain what we are most interested in, which is belief in the supernatural.

III. Bodies and Souls

Enthusiasm is building among scientists for a quite different view—that religion emerged not to serve a purpose but by accident.

This is not a value judgment. Many of the good things in life are, from an evolutionary perspective, accidents. People sometimes give money, time, and even blood to help unknown strangers in faraway countries whom they will never see. From the perspective of one's genes this is disastrous—the suicidal squandering of resources for no benefit. But its origin is not magical; long-distance altruism is most likely a by-product of other, more adaptive traits, such as empathy and abstract reasoning. Similarly, there is no reproductive advantage to the pleasure we get from paintings or movies. It just so happens that our eyes and brains, which evolved to react to three-dimensional objects in the real world, can respond to two-dimensional projections on a canvas or a screen.

Supernatural beliefs might be explained in a similar way. This is the religion-as-accident theory that emerges from my work and the work of cognitive scientists such as Scott Atran, Pascal Boyer, Justin Barrett, and Deborah Kelemen. One version of this theory begins with the notion that a distinction between the physical and the psychological is fundamental to human thought. Purely physical things, such as rocks and trees, are subject to the pitiless laws of Newton. Throw a rock, and it will fly through space on a certain path; if you put a branch on the ground, it will not disappear, scamper away, or fly into space. Psychological things, such as people, possess minds, intentions, beliefs, goals, and desires. They move unexpectedly, according to volition and whim; they can chase or run away. There is a moral difference as well: a rock cannot be evil or kind; a person can.

Where does the distinction between the physical and the psychological come from? Is it something we learn through experience, or is it somehow pre-wired into our brains? One way to find out is to study babies. It is notoriously difficult to know what babies are thinking, given that they can't speak and have little control over their bodies. (They are harder to test than rats or pigeons, because they cannot run mazes or peck levers.) But recently investigators have used the technique of showing them different events and recording how long they look at them, exploiting the fact that babies, like the rest of us, tend to look longer at something they find unusual or bizarre.

This has led to a series of striking discoveries. Six-month-olds understand that physical objects obey gravity. If you put an object on a table and then remove the table, and the object just stays there (held by a hidden wire), babies are surprised; they expect the object to fall. They expect objects to be solid, and contrary to what is still being taught in some psychology classes, they understand that objects persist over time even if hidden. (Show a baby an object and then put it behind a screen. Wait a little while and then remove the screen. If the object is gone, the baby is surprised.) Five-month-olds can even do simple math, appreciating that if first one object and then another is placed behind a screen, when the screen drops there should be two objects, not one or three. Other experiments find the same numerical understanding in nonhuman primates, including macaques and tamarins, and in dogs.

Similarly precocious capacities show up in infants' understanding of the social world. Newborns prefer to look at faces over anything else, and the sounds they most like to hear are human voices—preferably their mothers'. They quickly come to recognize different emotions, such as anger, fear, and happiness, and respond appropriately to them. Before they are a year old they can determine the target of an adult's gaze, and can learn by attending to the emotions of others; if a baby is crawling toward an area that might be dangerous and an adult makes a horrified or disgusted face, the baby usually knows enough to stay away.

A skeptic might argue that these social capacities can be explained as a set of primitive responses, but there is some evidence that they reflect a deeper understanding. For instance, when twelve-month-olds see one object chasing another, they seem to understand that it really is chasing, with the goal of catching; they expect the chaser to continue its pursuit along the most direct path, and are surprised when it does otherwise. In some work I've done with the psychologists Valerie Kuhlmeier, of Queen's University, and Karen Wynn, of Yale, we found that when babies see one character in a movie help an individual and a different character hurt that individual, they later expect the individual to approach the character that helped it and to avoid the one that hurt it.

Understanding of the physical world and understanding of the social world can be seen as akin to two distinct computers in a baby's brain, running separate programs and performing separate tasks. The understandings develop at different rates: the social one emerges somewhat later than the physical one. They evolved at different points in our prehistory; our physical understanding is shared by many species, whereas our social understanding is a relatively recent adaptation, and in some regards might be uniquely human.

That these two systems are distinct is especially apparent in autism, a developmental disorder whose dominant feature is a lack of social understanding. Children with autism typically show impairments in communication (about a third do not speak at all), in imagination (they tend not to engage in imaginative play), and most of all in socialization. They do not seem to enjoy the company of others; they don't hug; they are hard to reach out to. In the most extreme cases children with autism see people as nothing more than objects—objects that move in unpredictable ways and make unexpected noises and are therefore frightening. Their understanding of other minds is impaired, though their understanding of material objects is fully intact.

At this point the religion-as-accident theory says nothing about supernatural beliefs. Babies have two systems that work in a cold-bloodedly rational way to help them anticipate and understand—and, when they get older, to manipulate—physical and social entities. In other words, both these systems are biological adaptations that give human beings a badly needed head start in dealing with objects and people. But these systems go awry in two important ways that are the foundations of religion. First, we perceive the world of objects as essentially separate from the world of minds, making it possible for us to envision soulless bodies and bodiless souls. This helps explain why we believe in gods and an afterlife. Second, as we will see, our system of social understanding overshoots, inferring goals and desires where none exist. This makes us animists and creationists.

IV. Natural-Born Dualists

For those of us who are not autistic, the separateness of these two mechanisms, one for understanding the physical world and one for understanding the social world, gives rise to a duality of experience. We experience the world of material things as separate from the world of goals and desires. The biggest consequence has to do with the way we think of ourselves and others. We are dualists; it seems intuitively obvious that a physical body and a conscious entity—a mind or soul—are genuinely distinct. We don't feel that we are our bodies. Rather, we feel that we occupy them, we possess them, we own them.

This duality is immediately apparent in our imaginative life. Because we see people as separate from their bodies, we easily understand situations in which people's bodies are radically changed while their personhood stays intact. Kafka envisioned a man transformed into a gigantic insect; Homer described the plight of men transformed into pigs; in Shrek 2 an ogre is transformed into a human being, and a donkey into a steed; in Star Trek a scheming villain forcibly occupies Captain Kirk's body so as to take command of the Enterprise; in The Tale of the Body Thief, Anne Rice tells of a vampire and a human being who agree to trade bodies for a day; and in 13 Going on 30 a teenager wakes up as thirty-year-old Jennifer Garner. We don't think of these events as real, of course, but they are fully understandable; it makes intuitive sense to us that people can be separated from their bodies, and similar transformations show up in religions around the world.

This notion of an immaterial soul potentially separable from the body clashes starkly with the scientific view. For psychologists and neuroscientists, the brain is the source of mental life; our consciousness, emotions, and will are the products of neural processes. As the claim is sometimes put, "the mind is what the brain does." I don't want to overstate the consensus here; there is no accepted theory as to precisely how this happens, and some scholars are skeptical that we will ever develop such a theory. But no scientist takes seriously Cartesian dualism, which posits that thinking need not involve the brain. There is just too much evidence against it.

Still, it feels right, even to those who have never had religious training, and even to young children. This became particularly clear to me one night when I was arguing with my six-year-old son, Max. I was telling him that he had to go to bed, and he said, "You can make me go to bed, but you can't make me go to sleep. It's my brain!" This piqued my interest, so I began to ask him questions about what the brain does and does not do. His answers showed an interesting split. He insisted that the brain was involved in perception—in seeing, hearing, tasting, and smelling—and he was adamant that it was responsible for thinking. But, he said, the brain was not essential for dreaming, for feeling sad, or for loving his brother. "That's what I do," Max said, "though my brain might help me out."

Max is not unusual. Children in our culture are taught that the brain is involved in thinking, but they interpret this in a narrow sense, as referring to conscious problem solving, academic rumination. They do not see the brain as the source of conscious experience; they do not identify it with their selves. They appear to think of it as a cognitive prosthesis—there is Max the person, and then there is his brain, which he uses to solve problems just as he might use a computer. In this commonsense conception the brain is, as Steven Pinker puts it, "a pocket PC for the soul."

If bodies and souls are thought of as separate, there can be bodies without souls. A corpse is seen as a body that used to have a soul. Most things—chairs, cups, trees—never had souls; they never had will or consciousness. At least some nonhuman animals are seen in the same way, as what Descartes described as "beast-machines," or complex automata. Some artificial creatures, such as industrial robots, Haitian zombies, and Jewish golems, are also seen as soulless beings, lacking free will or moral feeling.

Then there are souls without bodies. Most people I know believe in a God who created the universe, performs miracles, and listens to prayers. He is omnipotent and omniscient, possessing infinite kindness, justice, and mercy. But he does not in any literal sense have a body. Some people also believe in lesser noncorporeal beings that can temporarily take physical form or occupy human beings or animals: examples include angels, ghosts, poltergeists, succubi, dybbuks, and the demons that Jesus so frequently expelled from people's bodies.

This belief system opens the possibility that we ourselves can survive the death of our bodies. Most people believe that when the body is destroyed, the soul lives on. It might ascend to heaven, descend to hell, go off into some sort of parallel world, or occupy some other body, human or animal. Indeed, the belief that the world teems with ancestor spirits—the souls of people who have been liberated from their bodies through death—is common across cultures. We can imagine our bodies being destroyed, our brains ceasing to function, our bones turning to dust, but it is harder—some would say impossible—to imagine the end of our very existence. The notion of a soul without a body makes sense to us.

Others have argued that rather than believing in an afterlife because we are dualists, we are dualists because we want to believe in an afterlife. This was Freud's position. He speculated that the "doctrine of the soul" emerged as a solution to the problem of death: if souls exist, then conscious experience need not come to an end. Or perhaps the motivation for belief in an afterlife is cultural: we believe it because religious authorities tell us that it is so, possibly because it serves the interests of powerful leaders to control the masses through the carrot of heaven and the stick of hell. But there is reason to favor the religion-as-accident theory.

In a significant study the psychologists Jesse Bering, of the University of Arkansas, and David Bjorklund, of Florida Atlantic University, told young children a story about an alligator and a mouse, complete with a series of pictures, that ended in tragedy: "Uh oh! Mr. Alligator sees Brown Mouse and is coming to get him!" [The children were shown a picture of the alligator eating the mouse.] "Well, it looks like Brown Mouse got eaten by Mr. Alligator. Brown Mouse is not alive anymore."

The experimenters asked the children a set of questions about the mouse's biological functioning—such as "Now that the mouse is no longer alive, will he ever need to go to the bathroom? Do his ears still work? Does his brain still work?"—and about the mouse's mental functioning, such as "Now that the mouse is no longer alive, is he still hungry? Is he thinking about the alligator? Does he still want to go home?"

As predicted, when asked about biological properties, the children appreciated the effects of death: no need for bathroom breaks; the ears don't work, and neither does the brain. The mouse's body is gone. But when asked about the psychological properties, more than half the children said that these would continue: the dead mouse can feel hunger, think thoughts, and have desires. The soul survives. And children believe this more than adults do, suggesting that although we have to learn which specific afterlife people in our culture believe in (heaven, reincarnation, a spirit world, and so on), the notion that life after death is possible is not learned at all. It is a by-product of how we naturally think about the world.

V. We've Evolved to Be Creationists

This is just half the story. Our dualism makes it possible for us to think of supernatural entities and events; it is why such things make sense. But there is another factor that makes the perception of them compelling, often irresistible. We have what the anthropologist Pascal Boyer has called a hypertrophy of social cognition. We see purpose, intention, design, even when it is not there.

In 1944 the social psychologists Fritz Heider and Marianne Simmel made a simple movie in which geometric figures—circles, squares, triangles—moved in certain systematic ways, designed to tell a tale. When shown this movie, people instinctively describe the figures as if they were specific types of people (bullies, victims, heroes) with goals and desires, and repeat pretty much the same story that the psychologists intended to tell. Further research has found that bounded figures aren't even necessary—one can get much the same effect in movies where the "characters" are not single objects but moving groups, such as swarms of tiny squares.

Stewart Guthrie, an anthropologist at Fordham University, was the first modern scholar to notice the importance of this tendency as an explanation for religious thought. In his book Faces in the Clouds, Guthrie presents anecdotes and experiments showing that people attribute human characteristics to a striking range of real-world entities, including bicycles, bottles, clouds, fire, leaves, rain, volcanoes, and wind. We are hypersensitive to signs of agency—so much so that we see intention where only artifice or accident exists. As Guthrie puts it, the clothes have no emperor.

Our quickness to over-read purpose into things extends to the perception of intentional design. People have a terrible eye for randomness. If you show them a string of heads and tails that was produced by a random-number generator, they tend to think it is rigged—it looks orderly to them, too orderly. After 9/11 people claimed to see Satan in the billowing smoke from the World Trade Center. Before that some people were stirred by the Nun Bun, a baked good that bore an eerie resemblance to Mother Teresa. In November of 2004 someone posted on eBay a ten-year-old grilled cheese sandwich that looked remarkably like the Virgin Mary; it sold for $28,000. (In response pranksters posted a grilled cheese sandwich bearing images of the Olsen twins, Mary-Kate and Ashley.) There are those who listen to the static from radios and other electronic devices and hear messages from dead people—a phenomenon presented with great seriousness in the Michael Keaton movie White Noise. Older readers who lived their formative years before CDs and MPEGs might remember listening intently for the significant and sometimes scatological messages that were said to come from records played backward.
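
One way to see why honest randomness strikes people as rigged is to simulate it: genuinely random coin flips contain longer streaks of identical outcomes than intuition expects. The sketch below is only a minimal illustration in Python, not a reconstruction of any study alluded to above; the 20-flip length and the streak threshold of five are arbitrary illustrative choices.

import random

def longest_run(flips):
    # Length of the longest streak of identical outcomes in the sequence.
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

random.seed(0)
trials = [[random.choice("HT") for _ in range(20)] for _ in range(10000)]
runs = [longest_run(t) for t in trials]
print(sum(runs) / len(runs))                  # average longest streak, typically around 4 to 5
print(sum(r >= 5 for r in runs) / len(runs))  # fraction of sequences with a streak of 5 or more

Close to half of the perfectly fair sequences contain a streak of five or more identical flips, which is exactly the kind of "order" that observers read as rigging.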

Sometimes there really are signs of nonrandom and functional design. We are not being unreasonable when we observe that the eye seems to be crafted for seeing, or that the leaf insect seems colored with the goal of looking very much like a leaf. The evolutionary biologist Richard Dawkins begins The Blind Watchmaker by conceding this point: "Biology is the study of complicated things that give the appearance of having been designed for a purpose." Dawkins goes on to suggest that anyone before Darwin who did not believe in God was simply not paying attention.

Darwin changed everything. His great insight was that one could explain complex and adaptive design without positing a divine designer. Natural selection can be simulated on a computer; in fact, genetic algorithms, which mimic natural selection, are used to solve otherwise intractable computational problems. And we can see natural selection at work in case studies across the world, from the evolution of beak size in Galápagos finches to the arms race we engage in with many viruses, which have an unfortunate capacity to respond adaptively to vaccines.
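
To make this concrete, here is a minimal sketch, in Python, of the sort of toy selection procedure Dawkins himself describes in The Blind Watchmaker (the "weasel" program): random copying errors plus selection of the best copy, with nothing doing any designing beyond a scoring rule. The target phrase is Dawkins's own example; the population size and mutation rate are illustrative choices, not a model of any real organism.

import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Count the positions that match the target: the "environment" scoring each variant.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Copy the parent with occasional random copying errors.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in parent)

random.seed(1)
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
generation = 0
while max(map(fitness, population)) < len(TARGET):
    best = max(population, key=fitness)                      # selection
    population = [best] + [mutate(best) for _ in range(99)]  # heredity plus variation
    generation += 1
print(generation, "generations to reach:", max(population, key=fitness))

Nothing in the loop intends to spell the phrase; blind variation and cumulative selection get there anyway, typically within a few hundred generations.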

Richard Dawkins may well be right when he describes the theory of natural selection as one of our species' finest accomplishments; it is an intellectually satisfying and empirically supported account of our own existence. But almost nobody believes it. One poll found that more than a third of college undergraduates believe that the Garden of Eden was where the first human beings appeared. And even among those who claim to endorse Darwinian evolution, many distort it in one way or another, often seeing it as a mysterious internal force driving species toward perfection. (Dawkins writes that it appears almost as if "the human brain is specifically designed to misunderstand Darwinism.") And if you are tempted to see this as a red state—blue state issue, think again: although it's true that more Bush voters than Kerry voters are creationists, just about half of Kerry voters believe that God created human beings in their present form, and most of the rest believe that although we evolved from less-advanced life forms, God guided the process. Most Kerry voters want evolution to be taught either alongside creationism or not at all.

What's the problem with Darwin? His theory of evolution does clash with the religious beliefs that some people already hold. For Jews and Christians, God willed the world into being in six days, calling different things into existence. Other religions posit more physical processes on the part of the creator or creators, such as vomiting, procreation, masturbation, or the molding of clay. Not much room here for random variation and differential reproductive success.

But the real problem with natural selection is that it makes no intuitive sense. It is like quantum physics; we may intellectually grasp it, but it will never feel right to us. When we see a complex structure, we see it as the product of beliefs and goals and desires. Our social mode of understanding makes it difficult for us to see it any other way. Our gut feeling is that design requires a designer—a fact that is understandably exploited by those who argue against Darwin.

It's not surprising, then, that nascent creationist views are found in young children. Four-year-olds insist that everything has a purpose, including lions ("to go in the zoo") and clouds ("for raining"). When asked to explain why a bunch of rocks are pointy, adults prefer a physical explanation, while children choose a functional one, such as "so that animals could scratch on them when they get itchy." And when asked about the origin of animals and people, children tend to prefer explanations that involve an intentional creator, even if the adults raising them do not. Creationism—and belief in God—is bred in the bone.

VI. Religion and Science Will Always Clash

Some might argue that the preceding analysis of religion, based as it is on supernatural beliefs, does not apply to certain non-Western faiths. In his recent book, The End of Faith, the neuroscientist Sam Harris mounts a fierce attack on religion, much of it directed at Christianity and Islam, which he criticizes for what he sees as ridiculous factual claims and grotesque moral views. But then he turns to Buddhism, and his tone shifts to admiration—it is "the most complete methodology we have for discovering the intrinsic freedom of consciousness, unencumbered by any dogma." Surely this religion, if one wants to call it a religion, is not rooted in the dualist and creationist views that emerge in our childhood.

Fair enough. But while it may be true that "theologically correct" Buddhism explicitly rejects the notions of body-soul duality and immaterial entities with special powers, actual Buddhists believe in such things. (Harris himself recognizes this; at one point he complains about the millions of Buddhists who treat the Buddha as a Christ figure.) For that matter, although many Christian theologians are willing to endorse evolutionary biology—and it was legitimately front-page news when Pope John Paul II conceded that Darwin's theory of evolution might be correct—this should not distract us from the fact that many Christians think evolution is nonsense.

Or consider the notion that the soul escapes the body at death. There is little hint of such an idea in the Old Testament, although it enters into Judaism later on. The New Testament is notoriously unclear about the afterlife, and some Christian theologians have argued, on the basis of sources such as Paul's letters to the Corinthians, that the idea of a soul's rising to heaven conflicts with biblical authority. In 1999 the pope himself cautioned people to think of heaven not as an actual place but, rather, as a form of existence—that of being in relation to God.

Despite all this, most Jews and Christians, as noted, believe in an afterlife—in fact, even people who claim to have no religion at all tend to believe in one. Our afterlife beliefs are clearly expressed in popular books such as The Five People You Meet in Heaven and A Travel Guide to Heaven. As the Guide puts it,

"Heaven is dynamic. It's bursting with excitement and action. It's the ultimate playground, created purely for our enjoyment, by someone who knows what enjoyment means, because He invented it. It's Disney World, Hawaii, Paris, Rome, and New York all rolled up into one. And it's forever! Heaven truly is the vacation that never ends."

(This sounds a bit like hell to me, but it is apparently to some people's taste.)

Religious authorities and scholars are often motivated to explore and reach out to science, as when the pope embraced evolution and the Dalai Lama became involved with neuroscience. They do this in part to make their world view more palatable to others, and in part because they are legitimately concerned about any clash with scientific findings. No honest person wants to be in the position of defending a view that makes manifestly false claims, so religious authorities and scholars often make serious efforts toward reconciliation—for instance, trying to interpret the Bible in a way that is consistent with what we know about the age of the earth.

If people got their religious ideas from ecclesiastical authorities, these efforts might lead religion away from the supernatural. Scientific views would spread through religious communities. Supernatural beliefs would gradually disappear as the theologically correct version of a religion gradually became consistent with the secular world view. As Stephen Jay Gould hoped, religion would stop stepping on science's toes.

But this scenario assumes the wrong account of where supernatural ideas come from. Religious teachings certainly shape many of the specific beliefs we hold; nobody is born with the idea that the birthplace of humanity was the Garden of Eden, or that the soul enters the body at the moment of conception, or that martyrs will be rewarded with sexual access to scores of virgins. These ideas are learned. But the universal themes of religion are not learned. They emerge as accidental by-products of our mental systems. They are part of human nature.

Paul Bloom, a professor of psychology and linguistics at Yale, is the author of Descartes' Baby: How the Science of Child Development Explains What Makes Us Human and How Children Learn the Meanings of Words.

Posted by: sd at November 26, 2005 03:52 PM

and here's an interview with Paul Bloom

'Wired for Creationism'
www.theatlantic.com/doc/200511u/paul-bloom

[if there are problems with this link I'll paste it in]

Posted by: sd at November 26, 2005 04:04 PM

sd - brilliantly done (didn't realize you were a skilled hacker, now we just have to wait for the lawyers to arrive)

seems that weird metaphysical ideas about the brain are quite central to all this, which brings things back to the Cyberpunk insight: when people reach the level of street-technology where brain manipulation becomes ordinary, huge cultural transitions can be expected. Guess this also pulls neurotheology back in, although it still seems to me Bloom's wired-for-dualism account is more directly helpful than the wired-for-weird(-experiences) approach focused on by those electro-stimming the temporal lobe ...

Also worth noting, 'physical' multiplicities do animate and demonize, crossing over into functional modes susceptible to social-type comprehension and making the voodoo-in-cyberspace model of regenerated bush-religion impossible to dismiss out of hand ... elaborate cosmo-techno-theologies probably have a relatively long shelf-life too

Posted by: nick at November 27, 2005 01:49 AM

en.wikipedia.org/wiki/The_Origin_of_Consciousness_in_the_Breakdown_of_the_Bicameral_Mind
In his 1976 work The Origin of Consciousness in the Breakdown of the Bicameral Mind, Julian Jaynes proposed that human brains existed in a bicameral state until as recently as 3,000 years ago. Jaynes asserts that until the times written about in Homer's Iliad, humans did not have the "interior monologue" that is characteristic of consciousness as most people experience it today. He believes that the bicameral mental commands attributed to "gods"—so often recorded in ancient myths, legends and historical accounts—were in fact emanating from individuals' own minds.

Posted by: northanger at November 27, 2005 07:07 AM

northanger - interesting ref., but a quite different and more ambitious theory. Jaynes argues religion stems from inner voices (relates to the religious content of psychoses). Bloom doesn't need any of this to make his point, which is more 'Kantian' - based on misapplication of categories. (IMHO both Bloom and Jaynes make significant and realistic contributions to understanding the phenomenon).

Posted by: Nick at November 27, 2005 10:48 AM