November 23, 2005

Trees & Rhizomes

The age-old debate between rationalism and empiricism is unresolved and ongoing. Contrary to the popular conception of philosophical clashes, the debate is actually about something and, perhaps even more surprisingly, concordance could come through attempting to find a solution to a very practical problem: the creation of strong AI.

In the past, the conflict was restricted to treatise bombardments in the lofty heights of philosophy. Now it is being decided down on the ground, less dramatically, by cognitive scientists and neuroscientists who patiently train and feed neural nets in cognitive science labs, and by neurologists and neuropsychologists who test the linguistic output of people who have suffered tragic damage to their brains.

Key texts online:
The Past Tense Debate (Pinker & Ullman 2002; McClelland & Patterson 2002) – referred to as PTD
Words & Rules (Pinker 1998)
On Language & Connectionism (Pinker & Prince 1988)
Stanford Encyclopedia Entry: Connectionism

Other references:
Words & Rules, Pinker (1999) – WR
The Singularity is Near, Kurzweil (2005) – TS

In Words and Rules, Pinker maps the battlefront of the modern conflict: in the rational camp sit Leibniz, Descartes, Hobbes, Humboldt and Chomsky; in the empirical camp sit Hume, Locke, Pavlov, Skinner and the connectionists David Rumelhart and James McClelland:

“The idea that intelligence arises from the manipulation of symbols by rules… When the symbols stand for words and the rules arrange them into phrases and sentences, we have grammar… When the symbols stand for concepts and the rules string them into chains of inference, we have logic, which became the basis for digital computers, the artificial intelligence systems that run on them, and many models of human cognition.” (WR 98)

“The mind connects things that are experienced together or that look alike… and generalizes to new objects according to their resemblance to known ones. Just as the rationalists were obsessed by combinatorial grammar, the associationists were obsessed by memorized words… John Locke pointed to the arbitrary connection between words and things as the quintessential example of how the mind forms associations by contiguity in time... Replace the ideas with ‘neurons’ and the associations with ‘connections’ and you get the connectionism of David Rumelhart and James McClelland.” (WR 99)

Interestingly, Pinker and Kurzweil occupy a middle ground between these two polarities.

Pinker: Words & Rules

Pinker is strongly allied to the innatist side of the fence, but has adapted and streamlined his model of language learning and language processing to accommodate evidence provided by the Rumelhart-McClelland connectionist model. Pinker’s ‘words and rules’ (WR) hypothesis is a compromise that has been adapted in response to data from empirical tests.

The Rumelhart and McClelland Parallel Distributed Processing (PDP) model demonstrated the extent to which blank neural nets can be trained to learn from previous input and feedback and to generalize when presented with fresh data:

“Rumelhart and McClelland trained their network on a list of 420 verbs presented 200 times, for a total of 84,000 trials. To everyone’s surprise, the model did quite well, computing most of the correct sound stretches for all 420 verbs. That meant that a single set of connection strengths was able to convert 'look' to 'looked', 'seem' to 'seemed', 'melt' to 'melted', 'hit' to 'hit', 'make' to 'made', 'sing' to 'sang' and even 'go' to 'went'. Then Rumelhart and McClelland challenged the network with 86 new verbs, which it had not been trained on… The model offered the correct past-tense form with –ed for about three quarters of the new regular verbs, and made reasonable overgeneralization errors such as 'catched' and 'digged' for most of the new irregulars.

Even more impressively, the model mimicked some of the tendencies of children as they acquire English. At one point in training it produced errors such as 'gived' for verbs that it had previously produced correctly. It also analogized new irregular verbs to families of similar sounding old irregular verbs; for example it guessed 'cling-clung', 'sip-sept', 'slip-slept', 'bid-bid' and 'kid-kid'…” (WR 120-1)
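The overgeneralizations quoted above ('catched', 'cling-clung') are the signature of similarity-driven generalization. What follows is not the Rumelhart-McClelland network itself, just a crude analogy-by-similarity sketch in Python (with spelling standing in for sound, which is a real simplification): it stores only present/past pairs and transplants the change from the most similar-sounding stored verb.

```python
# Stored mappings only: no rule is represented anywhere. Spelling
# stands in for sound, a simplification of the real PDP model.
TRAINED = [("fling", "flung"), ("sting", "stung"),
           ("watch", "watched"), ("melt", "melted")]

def common_suffix_len(a, b):
    # Length of the shared word-final material, a crude proxy for
    # phonological similarity.
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def analogize(verb):
    # Retrieve the most similar-sounding stored verb...
    present, past = max(TRAINED, key=lambda p: common_suffix_len(verb, p[0]))
    # ...extract its present-to-past change...
    k = 0
    while k < min(len(present), len(past)) and present[k] == past[k]:
        k += 1
    change_in, change_out = present[k:], past[k:]
    # ...and transplant that change onto the new verb.
    if change_in:
        return verb[:-len(change_in)] + change_out
    return verb + change_out
```

With these four stored pairs it maps 'cling' to 'clung' by analogy with 'fling', and overregularizes 'catch' to 'catched' on the model of 'watch'; no explicit rule is ever consulted.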

However, amongst other weaknesses, the PDP model is limited by the fact that all it does is associate sounds with sounds, which means it has great problems processing words with unfamiliar sounds (the network produced 'membled' as the past tense of 'mail' because it was not familiar with 'ail'). Humans, by contrast, quite happily apply the –ed ending to form the past tense of new verbs, as long as they know the word concerned is a verb (there is no hesitation in turning 'text' into 'texted'). Also, having no mental symbols for morphological units such as prefix, verb stem or suffix, the PDP model is unable to apply recursive rules such as “a stem can combine with a prefix to form a new stem”, by which 'out' combines with 'strip' to produce 'outstrip'. Positing symbolic tree structures and innate grammatical machinery is a more plausible and economical way of accounting for regularity in language and for its acquisition:

“The phonemes are held in their correct order by a treelike scaffolding that embodies the morphological structure of the word (how it is built out of stems, prefixes and suffixes) and the phonological structure of its parts (how they are built out of chunks like onsets, rimes, vowel nuclei, consonants and vowels, and ultimately features). The similarity to other words such as strip, restrip, trip, rip and tip falls mechanically out of the fact that they have identical subtrees, such as an identical ‘stem’ or an identical ‘rime.’ And computing the regular past-tense form is nothing but attaching a suffix next to the symbol ‘verb stem’: 'outstripped'.” [there should be a nice Chomsky tree diagram here, but I can't paste it in.]
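The recursive step Pinker describes (“a stem can combine with a prefix to form a new stem”, after which the regular suffix attaches to the result) can be sketched in a few lines of Python. The consonant-doubling check is my own rough orthographic heuristic, not part of Pinker's account:

```python
VOWELS = set("aeiou")

def prefix_stem(prefix, stem):
    # Recursive rule: prefix + stem yields a new stem, so the output
    # can itself feed back into the same rule.
    return prefix + stem

def add_ed(stem):
    # Regular past tense: attach -ed to whatever carries the symbol
    # 'verb stem'. Double a final consonant after a single vowel
    # (strip -> stripped); a rough spelling heuristic only.
    if (stem[-1] not in VOWELS and stem[-2] in VOWELS
            and stem[-3] not in VOWELS):
        return stem + stem[-1] + "ed"
    return stem + "ed"

print(add_ed(prefix_stem("out", "strip")))  # outstripped
```

Because the first rule returns a stem, re-prefixing works for free: add_ed(prefix_stem("re", prefix_stem("out", "strip"))) gives 'reoutstripped' (a made-up derivation, but the machinery doesn't care), which is exactly the recursion a flat sound-to-sound associator cannot express.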

The WR theory is a “lexicalist compromise between the generative and connectionist extremes.” (PTD 2)

“Regular verbs are computed by a rule that combines a symbol for a verb stem with a symbol for the suffix. Irregular verbs are pairs of words retrieved from the mental dictionary, a part of memory. Here is the twist: Memory is not a list of unrelated slots, like RAM in a computer, but is associative, a bit like the Rumelhart-McClelland pattern associator memory. Not only are words linked to words, but bits of words are linked to bits of words… The prediction is that regular and irregular inflection are psychologically, and ultimately, neurologically distinguishable.” (WR 131-2)
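The division of labour in the passage above, a memory lookup that blocks a default rule, fits in a few lines. This is a minimal sketch: the irregular list and spelling are simplified, and a Python dict stands in for what Pinker insists is an associative memory, not a list of slots.

```python
# Irregular pairs are retrieved from memory; a dict stands in for the
# associative "mental dictionary", which is a deliberate simplification.
IRREGULARS = {"sing": "sang", "make": "made", "go": "went", "hit": "hit"}

def past_tense(verb):
    # Memory retrieval first; only if it fails does the rule fire.
    if verb in IRREGULARS:
        return IRREGULARS[verb]
    # Default rule: symbol 'verb stem' + suffix -ed (spelling simplified).
    return verb + "ed"
```

A novel verb like 'text' needs no stored entry: the rule yields 'texted' immediately, which is exactly the behaviour the pure pattern associator struggled with.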

The WR hypothesis is now itself being subjected to rigorous testing. Possible proof that the brain handles regular and irregular verbs in different areas and by different operations might come from studies of people who suffer from aphasia and anomia. The former can result from damage to the areas around the Sylvian fissure and Broca's area, and causes agrammatism, whereas anomia is “a difficulty in retrieving and recognizing words,” which results from damage to the posterior parts of the brain (WR 275-6). Studies of Alzheimer’s disease, Parkinson’s disease and the Specific Language Impairment caused by the FOXP2 mutant gene seem to support the WR hypothesis, but McClelland and Patterson question the evidence. (PTD 15)

What is clear from this is that paper-and-pencil analysis is now utterly insufficient on its own: the debate increasingly concerns the interpretation of data and critiques of data-gathering methods.

[for more on FOXP2: www.well.ox.ac.uk/~simon/SPCH1/SPCH1_project.shtml]

Kurzweil

For Kurzweil, neural nets are just one tool in the vast panoply of technologies aiding and abetting the creation of strong AI. The capacity of well-trained neural nets to learn and self-organize is one of the promises they hold out. Neural nets are distinctly rhizomatic in the way that they bootstrap from the bottom up.

“The key to a neural net… is that it must learn its subject matter. Like the mammalian brains on which it is loosely modeled, a neural net starts out ignorant. The neural net’s teacher – which may be a human, a computer program, or perhaps another, more mature neural net that has already learned its lessons – rewards the neural net when it generates the right output and punishes it when it does not. This feedback is in turn used by the student neural net to adjust the strengths of each interneuronal connection. Connections that were consistent with the right answer are made stronger. Those that advocated a wrong answer are weakened. Over time, the neural net organizes itself to provide the right answers without coaching. Experiments have shown that neural nets can learn their subject matter even with unreliable teachers. If the teacher is correct only 60 percent of the time, the student neural net will still learn its lessons.

A powerful, well-taught neural net can emulate a wide range of human pattern-recognition faculties. Systems using multilayer neural nets have shown impressive results in a wide variety of pattern-recognition tasks, including recognizing handwriting, human faces, fraud in commercial transactions such as credit-card charges, and many others. In my own experience in using neural nets in such contexts, the most challenging engineering task is not coding the nets but in providing automated lessons for them to learn their subject matter.” (TS 271)
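Kurzweil's claim about unreliable teachers is easy to check on a toy problem. The sketch below is my own illustration, not Kurzweil's code: a single logistic unit learns a one-dimensional threshold from a teacher whose labels are flipped 40 percent of the time. Learning still succeeds because the correct signal dominates on average.

```python
import math
import random

random.seed(0)

def teacher(x, reliability=0.6):
    # True concept: positive iff x > 0. The teacher reports the
    # correct label only 60 percent of the time.
    true = 1 if x > 0 else 0
    return true if random.random() < reliability else 1 - true

# One logistic unit trained by stochastic gradient descent on the
# teacher's noisy feedback.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(20000):
    x = random.uniform(-1, 1)
    y = teacher(x)
    p = 1 / (1 + math.exp(-(w * x + b)))
    w += lr * (y - p) * x
    b += lr * (y - p)

# Score against the TRUE concept, not the noisy teacher.
test_xs = [random.uniform(-1, 1) for _ in range(1000)]
acc = sum((w * x + b > 0) == (x > 0) for x in test_xs) / len(test_xs)
```

The flipped labels average out over many trials, so the student typically ends up more accurate than its 60-percent-reliable teacher, just as the passage claims.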

The other promise is that of parallel processing:

“Neural nets are also naturally amenable to parallel processing, since that is how the brain works. The human brain does not have a central processor that simulates each neuron. Rather, we can consider each neuron and each interneuronal connection to be an individual slow processor. Extensive work is under way to develop specialized chips that implement neural-net architectures in parallel to provide substantially greater throughput.” (TS 270)

Kurzweil’s critique of Searle’s Chinese Room argument appeals to the variety of techniques which can be used in computing:

“A failure to see that computing processes are capable of being – just like the human brain – chaotic, unpredictable, messy, tentative, and emergent is behind much of the criticism of the prospect of intelligent machines that we hear from Searle and other essentially materialist philosophers. Inevitably Searle comes back to a criticism of ‘symbolic’ computing: that orderly sequential symbolic processes cannot recreate true thinking. I think that is correct (depending on what level we are modeling an intelligent process), but the manipulation of symbols (in the sense that Searle implies) is not the only way to build machines, or computers.
… Nonbiological entities can also use the emergent self-organizing paradigm, which is a trend which is well under way and one that will become even more important over the next several decades…
… The primary computing techniques that we have used in pattern-recognition systems do not use symbol manipulation but rather self-organizing methods… A machine that could really do what Searle describes in the Chinese Room argument would not merely be manipulating language symbols, because that approach doesn’t work. This is at the heart of the philosophical sleight of hand underlying the Chinese Room. The nature of computing is not limited to manipulating logic symbols. Something is going on in the human brain, and there is nothing that prevents these biological processes from being reverse engineered and replicated in nonbiological entities…
… Of course, neurotransmitter concentrations and other neural details have no meaning in and of themselves. The meaning and understanding that emerge in the human brain are exactly that: an emergent property of its complex patterns of activity. The same is true for machines. Although ‘shuffling symbols’ does not have meaning in and of itself, the emergent patterns have the same potential role in nonbiological systems as they do in biological systems such as the brain. Hans Moravec has written, ‘Searle is looking for understanding in the wrong places…[He] seemingly cannot accept that real meaning can exist in mere patterns.’” (TS 460-4)

The most plausible and workable models for cognition and language are emerging through syntheses, which can be seen as part of a more far-ranging tendency towards consilience.

Whilst a certain degree of consensus is emerging, there are forks ahead and different directions are being taken. One camp is reverse engineering the human brain in order to better understand human psychology: the primary aim is to discover fundamental truths about ourselves. The camp which has put all its eggs in the strong AI basket is more concerned with what is useful than what is true: the fundamental aim is to overcome the limitations that keep us imprisoned in what we are and prevent us from becoming what we could become. Both trajectories lead inexorably to political implications, the surface of which has barely been scratched.

Reverse engineering will also inevitably lead to clashes over human nature. Evolutionary psychology stresses that the most complex and mysterious components of the human brain, the emotions, are adaptations which evolved over millions of years of gradual fine-tuning. In contrast, no sooner does Kurzweil find out about the deep interconnectedness of spindle cells, which are intimately involved with the emotions, than he notes how few they are in number and puts them on the list of things to be reverse engineered and simulated in the next couple of decades: “It will be difficult… to reverse engineer the exact methods of the spindle cells until we have better models of the many other regions to which they connect. However, it is remarkable how few neurons appear to be exclusively involved with these emotions… only about eighty thousand spindle cells dealing with high-level emotions.” (TS194)

Kurzweil’s glaring weakness, and the source of his irrepressible optimism, is that he grossly underestimates the sophistication and intransigence of evolutionary programming and strategies.

The debate is still unfolding, rather than raging, with a surprising degree of politeness (the Pinker vs. McClelland clash is extremely civilized). Tools and results change hands in the process of fine-tuning. As the brain is precision re-engineered there is increasing cross-feed between different, previously opposed or unrelated disciplines. For the moment there is no no-man's land: the midpoint between rationalism and empiricism is a zone of constructive research and innovation.

November 22, 2005

Decline of the West?

Spenglerian musings from Leon H over at Red State, with a stimulating comment thread.
And don't miss Paul J Cella's response.

Fairly confident we're not on the same wavelength over here, but this (from Leon H) definitely hit the spot:
"I’ve been thoroughly dismayed throughout the ensuing year with the ridiculous amount of hand-holding that our society apparently requires."

Posted by CCRU-Shanghai at 11:04 AM | On-topic (5) | TrackBack

November 11, 2005

Intuitive Economics

"Behavioral economics has demonstrated systematic decision-making biases in both lab and field data. But are these biases learned or innate? We investigate this question using experiments on a novel set of subjects -- capuchin monkeys. By introducing a fiat currency and trade to a capuchin colony, we are able to recover their preferences over a wide range of goods and risky choices. We show that standard price theory does a remarkably good job of describing capuchin purchasing behavior; capuchin monkeys react rationally to both price and wealth shocks. However, when capuchins are faced with more complex choices including risky gambles, they display many of the hallmark biases of human behavior, including reference-dependent choices and loss-aversion. Given that capuchins demonstrate little to no social learning and lack experience with abstract gambles, these results suggest that certain biases such as loss-aversion are an innate function of how our brains code experiences, rather than learned behavior or the result of misapplied heuristics."

Keith Chen: The Evolution of Our Preferences – Evidence from Capuchin Monkey Trading Behavior

"…The first surprise was just how readily they took to the idea of money. Despite the fact that capuchins do not usually display social learning – picking up skills from other members of the group – it took just a few months for Chen and his colleagues to teach them that small discs could be used to buy treats. The monkeys’ appreciation for money even extends to trying to counterfeit it – by using slices of cucumber instead – and hiding their own stash, suggesting that they understand it has intrinsic worth. In these respects capuchins seem to have innate economic wisdom much like our own."

Mark Buchanan: Monkey and Monkey Business, New Scientist, 5 November 2005

"During the chaos in the monkey cage, Chen saw something out of the corner of his eye that he would later try to play down but in his heart of hearts he knew to be true. What he witnessed was probably the first observed exchange of money for sex in the history of monkeykind. (Further proof that the monkeys truly understood money: the monkey who was paid for sex immediately traded the token in for a grape.)"

Monkey Business: Keith Chen's Monkey Research
By Stephen J. Dubner and Steven D. Levitt, Freakonomics

[excerpts from the Mark Buchanan article in New Scientist – needs a subscription.]

The capuchin monkeys working with economist Keith Chen and psychologist Laurie Santos know a good bargain when they see one. They use metal chips as money, buying bits of apple or cucumber from humans, and they seem to know what they are doing. When the researchers make apple cheaper than cucumber – offering more food for the same number of chips – the capuchins opt for the better-value food, as any savvy shopper would. Yet it is not the monkeys’ good economic sense that Chen and Santos find most interesting. Rather it is their tendency, on occasion, to make an irrational deal – and to do so in a distinctively human way.

The capuchins … often make decisions as wisely as any good business person, yet in other cases they appear to succumb to the same irrational temptations we do. And a sense of fairness? Pay one monkey less than another for equal work, and you are likely to get a screeching tantrum, seemingly in protest at gross economic injustice…

They act like people in other, more subtle ways too. In one experiment, Chen and colleagues had the monkeys choose between two apparently different but actually identical gambles. In the first, for the price of one disc, the monkeys got one grape and also a 50-50 chance of getting a second grape, with the outcome determined by a coin flip. Alternatively, the monkeys could choose to start with two grapes but then risk losing one on the flip of the coin. Again, this led to a 50-50 chance of getting either one or two grapes. The monkeys were able to distinguish between the available bargains because they interacted with two experimenters, each one always offering the same deal. As the chances of ending up with two grapes or one are the same in both bargains, a ‘rational’ individual would be indifferent about which to take. The real monkeys chose the experimenter offering one grape plus the chance of another about 75 per cent of the time. “We were surprised,” says Chen. “Psychologists we talked to thought the monkeys would simply trade with whomever initially showed the most food.”

There seems to be a parallel in human behaviour. Although the gambles were strictly equivalent, the second involved a potential loss and the first a potential gain, leading Chen to conclude that his capuchins are showing the very same ‘loss aversion’ that researchers have found in humans. Although economic rationality suggests that we should give equal weight to small gains or losses, countless experiments indicate that the pain associated with a loss tends to outweigh the pleasure of an equivalent gain…

To some researchers, the similarity in human and capuchin behaviour suggests an ancient evolutionary origin. “It’s not credit cards and gas prices that make us react irrationally,” Santos suggests, “but something more fundamental that we share with other species.” And if our bias towards loss aversion does have deep origins, it may well be that a behaviour that seems irrational today could have been wise for our ancestors living in very different circumstances. One possibility, Santos believes, is that a heightened fear of losses could have helped our ancestors survive in fluctuating environments…

… Chen believes that economists should already be thinking about the possible implications of these experiments. Loss aversion makes us do some silly things – it explains, for example, why stock market investors hold on to falling stocks too long and why homeowners may be reluctant to sell their houses at a loss, even when that would be the sensible thing to do. A close evolutionary link between human and capuchin behaviour, Chen suggests, would imply that such behavioural peculiarities may be “hard-wired” into us rather than being learned. As a consequence, economists and policy-makers may find it difficult to alter such behaviour with the usual economic incentives.

Take savings and investments. Most people save too little for retirement, and loss aversion seems to be a primary cause. To begin with, people who do save conscientiously tend to invest less in risky stocks than in safer securities such as bonds, even though stocks, historically, have earned more in the long run. “Loss aversion is one of the most plausible reasons,” says Chen, because stock values fluctuate more strongly than bonds and so an investor in stocks has a greater chance of experiencing a painful loss, even if gains will more than balance it eventually. More fundamentally, putting money away today means losing funds you could spend now, in return for the uncertain prospect of more money in the future. Because many people feel present losses more than the thought of future security, they systematically under-invest.

But by accepting loss aversion as a part of human nature, policy-makers may be able to encourage better decisions. One idea, proposed by economists Richard Thaler of the University of Chicago and Shlomo Benartzi of the University of California, Los Angeles, goes under the slogan of ‘Save More Tomorrow’. Under this scheme, individual employees can elect to have more of their pay put toward their retirement, but only starting next year, with the rate of contribution then rising gradually. In real-world trials, Thaler and Benartzi found that pushing the investment decision into the future, so that the loss feels less painful now, significantly increased the overall investment people made toward retirement.

[Primatologist Frans de Waal and anthropologist Sarah Brosnan] taught capuchin monkeys to trade small rocks for food rewards, serving two monkeys side by side so that each could see the trades offered to the other. At first, the experimenters always gave the monkeys cucumber for their rocks. But then they began giving one monkey a grape, which capuchins greatly prefer to cucumber, or even a free grape without requiring a rock in exchange. They observed that the slighted monkeys often reacted by refusing to trade effectively, going on strike. “In some cases,” says Brosnan, “they’d throw the tokens or rewards back at us.” In others, they would not even eat cucumber they had already ‘bought’. “The moral of the story,” as Brosnan puts it, “is that cucumbers are only bad when someone else has got something better.”

“Capuchin monkeys seem to measure rewards in relative terms,” says de Waal, who suggests that emotions of some kind probably lie behind this behaviour, as in people…

…the broad-brush similarity between humans and capuchins regarding unfair treatment suggests that something like a preference for fairness could be a deep evolutionary adaptation in primates, rather than something only we humans have learned.
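For concreteness, the two grape gambles in the excerpt really are numerically identical, and a standard reference-dependent value function shows why the loss framing can still repel. The parameters below are Kahneman and Tversky's oft-cited human estimates, used purely for illustration; nothing in the capuchin study fixes them.

```python
# Gamble A: start from 1 grape, 50% chance of a second (gain frame).
# Gamble B: start from 2 grapes, 50% chance of losing one (loss frame).
# Final outcomes are identical: 1 or 2 grapes, each with probability 0.5.
ev_a = 0.5 * 1 + 0.5 * 2   # expected grapes, gain frame -> 1.5
ev_b = 0.5 * 2 + 0.5 * 1   # expected grapes, loss frame -> 1.5

def value(x, alpha=0.88, lam=2.25):
    # Reference-dependent value: losses loom about 2.25x larger than
    # gains (illustrative human-derived parameters, an assumption here).
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Frame A codes outcomes relative to 1 grape: a gain of 0 or +1.
pv_a = 0.5 * value(0) + 0.5 * value(1)    # = 0.5
# Frame B codes outcomes relative to 2 grapes: a loss of -1 or 0.
pv_b = 0.5 * value(-1) + 0.5 * value(0)   # = -1.125
```

Equal expected value, but the loss-framed gamble scores far lower, consistent with the monkeys' roughly 75 per cent preference for the gain-framed experimenter.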

November 10, 2005

Political Geography

James C. Bennett argues in The Anglosphere Challenge that English speaking societies have a peculiar tendency to seek spatial solutions to social disputes.

English speakers, however much they dispute economic, social, or moral issues, have tended to express these differences by spatial composition or decomposition of their regimes - union and secession - rather than regime recomposition - replacing one constitution with another. [p.193]

Looking back on this history, it is not surprising that Continental European and Marxist ideas of revolution, almost always expressed in regime-composition terms, have never found a natural home in any English-speaking nation. Since 1789, France has had five republics, two empires, two monarchies, and miscellaneous directories, consulates, and so on - but its territorial boundaries are today only slightly different from those of 1789. The United Kingdom has had the same Constitution (much evolved, but built on English roots even older) since its founding in 1707; the United States still operates under the Constitution of 1789, also much evolved, but also very much rooted in the same underlying principles as that of Britain. The borders of both Unions, however, have changed numerous times. Thus, it's worth noting that France responded to a spatial-composition crisis - the Algerian Revolution in 1958 - with a regime-recomposition solution, the transition from the Fourth to the Fifth Republic. In comparison, Anglosphere nations reacted to regime-composition crises such as the Navigation Acts, the slavery issue, or Irish Catholic emancipation with spatial composition solutions. [p.196]

What are the secessions or unions needed today?

Posted by CCRU-Shanghai at 01:57 AM | On-topic (22) | TrackBack

November 06, 2005

Anti-globalization = Pro-poverty

Predictably, the riots in Paris and Brazil have been greeted with gleeful hand-rubbing in moonbat quarters ("I never thought I'd live to see it! The worldwide conflagration has finally arrived!"). The familiar suspects are trotted out, lumped together and dealt blunderbuss blasts: the rich, Global Capital, free markets, racism, fascism, neoliberalism, etc.

Chirac and Sarkozy are thoroughly obnoxious – granted. Most sane people would also agree that the economic situation is a primary factor in the Paris riots, though there might be disagreement on what constitutes ‘poverty’ and ‘slums’. However, the suggestion that the ‘poverty’ in the Paris suburbs is largely a result of France’s long-term economic nationalism and determined anti-liberalism might be a bit harder to take on board.

Excerpts from here and here.


“The economic integration of the Continent's 450 million consumers into a prosperous single market—the EU's raison d'etre since its creation after World War II—has come to a virtual standstill. At the same time, growing numbers of Europeans have awakened to the threat of globalization, with little agreement on how to cope. On one side are the core economies of the continent: Germany, Italy and France, all stagnating yet determined to preserve their vision of a "social Europe" that protects citizens from too much change. On the other side: Britain and the Scandinavians, who want to meet the challenges of globalization by staying competitive, flexible and attuned to the fast-changing demands of the market.

If this means a re-emergence of economic nationalism, Europe's economy can only suffer. When France and Italy led a drive to impose EU-wide quotas on Chinese textile imports earlier this year, they may have temporarily saved a few jobs in a handful of factories. But they hurt many other companies, especially retailers, not to mention consumers who depend on cheap Chinese imports. This spring, Germany and France cut down the EU's landmark effort to create a Europewide market in services, which make up 70 percent of the continent's economy. That means they'll forgo an estimated 600,000 extra jobs, according to the European Commission. At best, further integration is now stalled. At worst, the EU could see protective walls between its members re-emerge, putting much more at risk than strategic French casinos.”

“… despite the spectacular rise in living standards that has occurred as barriers between nations have fallen, and despite the resulting escape from poverty by hundreds of millions of people in those places that have joined the world economy, it is still hard to convince publics and politicians of the merits of openness. Now, once again, a queue is forming to denounce openness—ie, globalisation. It is putting at risk the next big advance in trade liberalisation and the next big reduction in poverty in the developing countries.

In Washington, DC, … Charles Schumer threatens a 27.5% tariff on imports from China if that country does not revalue its currency by an equivalent amount. In Mr. Schumer's view, presumably, far too many Chinese peasants are escaping poverty.

And ministers from Bastiat's own country, France, have vied with one another to denounce all talk of further reform to the EU's common agricultural policy. Europe must, they say, remain an “agricultural power” even at the expense of the taxpayer and the poor, and, according to President Jacques Chirac, must fight back “liberalism”. Whatever happened to Liberté, Egalité, Fraternité?

The risk is that failure to agree on a new wave of openness during a period (the past two years) in which the world economy has been growing at its fastest for three decades, with more countries sharing in that growth than ever before, will set a sour political note for what may well be tougher times ahead. A turn away from trade liberalisation just ahead of an American recession, say, or a Chinese economic slowdown, could open up a chance not just for a slowdown in progress but for a rollback. Currently, for example, the Schumer bill to put a penal tariff on Chinese goods looks unlikely to pass. If American unemployment were rising and world trade talks had turned acrimonious, that might change. So might the political wind in many developing countries.

If so, that would be a tragedy for the whole world. Although the case for reducing poverty by sending more aid to the poorest countries has some merit, the experience of China, South Korea, Chile and India shows that the much better and more powerful way to deal with poverty is to use the solution that worked in the past in America, western Europe and Japan: open, trading economies, exploiting the full infrastructure of capitalism amid a rule of law provided by government. In other words, globalisation.”

November 04, 2005

Compelling Viewing

Just when you thought the perils of bird flu, terrorism, bio-engineered viruses, grey goo and pathological AI were enough to be getting on with, thank you very much, Kurzweil brings back an old chestnut to put somewhere near the bottom of your list of worries. But whatever you do, try not to be boring – the fate of humanity might depend on it.

“Our Simulation Is Turned Off

Another existential risk that Bostrom and others have identified is that we’re actually living in a simulation and the simulation will be shut down. It might appear that there’s not a lot we could do to influence this. However, since we’re the subject of the simulation, we do have the opportunity to shape what happens inside of it. The best way we could avoid being shut down would be to be interesting to the observers of the simulation. Assuming that someone is actually paying attention to the simulation, it’s a fair assumption that it’s less likely to be turned off when it’s compelling than otherwise.

We could spend a lot of time considering what it means for a simulation to be interesting, but the creation of new knowledge would be a critical part of the assessment. Although it may be difficult for us to conjecture what would be interesting to our hypothesized simulation observer, it would seem that the Singularity is likely to be about as absorbing as any development we could imagine and would create new knowledge at an extraordinary rate. Indeed, achieving a Singularity of exploding knowledge may be the very purpose of the simulation. Thus, assuring a “constructive” Singularity (one that avoids degenerate outcomes such as existential destruction by gray goo or dominance by a malicious AI) could be the best course to prevent the simulation from being terminated. Of course, we have every motivation to achieve a constructive Singularity for many other reasons.

If the world we’re living in is a simulation on someone’s computer, it’s a very good one – so detailed, in fact, that we may as well accept it as our reality. In any event, it is the only reality to which we have access.

Our world appears to have a long and rich history. This means that either our world is not, in fact, a simulation or, if it is, the simulation has been going on for a very long time and thus is not likely to stop anytime soon. Of course it is also possible that the simulation includes evidence of a long history without the history’s having actually occurred.

…[There] are conjectures that an advanced civilization may create a new universe to perform computation (or, to put it another way, to continue the expansion of its own computation). Our living in such a universe (created by another civilization) can be considered a simulation scenario. Perhaps this other civilization is running an evolutionary algorithm on our universe (that is, the evolution we’re witnessing) to create an explosion of knowledge from a technology Singularity. If that is true, then the civilization watching our universe might shut down the simulation if it appeared that a knowledge Singularity had gone awry and it did not look like it was going to occur.

This scenario is not high on my worry list, particularly since the only strategy that we can follow to avoid a negative outcome is the one we need to follow anyway.” (TS 405-6)