This article, written ten years ago by Pearson, Winter and Cochrane at BT Labs, should fit nicely into recent discussions of the Singularity.
"Many people will dissociate themselves from genetic manipulation or cybernetic technology. These people will remain as conventional Homo Sapiens (we will rename them Homo ludditus for obvious reasons). They would at best have to co-exist with these other human offshoots, who would dwarf them mentally and physically ...
As computers become more powerful they will take over, first driving their own technological developments through automated design and self-evolving programs, and then in other fields. Once free of carbon, or aided directly by silicon, the whole pace and nature of evolution will change ...
... The question is: can we overcome our mental stasis through a symbiosis with machines, or will we go down fighting and be wiped out?"
Posted by []:{⊃ at October 31, 2005 08:28 AM | TrackBack
An interesting aspect of this article is that in 1995 it still seemed feasible to talk about 'accepting' the technology, as if humans circa 2015 would have a 'choice'. It's Matrix-style humans at war with the machines.
In 2005 it looks as though by 2015 incorporating non-biological technologies into the human genome will be a matter of necessity. The pathosphere will (probably) have such a lethal potential (especially if it is given a helping, bio-engineered hand) that products of the pure biological genome will simply die if they fail to adopt non-biological components, or at least 'accept' assistance. The choice - become non-biological or die - isn't really a choice for machines bent on their own survival or the survival of their offspring. In such a scenario, the machines would be saviours.
The prospect of companies, entrepreneurs or terrorists purposely designing biological viruses for which only they have the non-biological cure is also a distinct possibility. Maybe even the threat alone would be sufficient to persuade people to buy non-biological body-software: healthcare-software where you have to pay for a weekly virus definition update, or suffer immediate attack. Unless, of course, there is a kind of wiki open-source free anti-virus update.
From an economic perspective, remaining Homo ludditus could also be suicidal. When the real money is in data, when food is produced by cloning and nutrients are transported by nanobots, Homo ludditus will have to compete with robots for the shit jobs, and lose. Data-trading and info-presentation develop as strategies to avoid enslavement or extinction. A future full of people trying to make themselves useful, and begging to have their (fantastically interesting) brains uploaded.
Oh, and not to mention weaponized AI starring in Live War on TV 24/7.
There will be a choice in what to buy, but not in whether or not to buy.
[in part, this is a dig at Kurzweil techno-optimism]
Posted by: sd at October 31, 2005 09:58 PM
Tachi - The 'Terminator War Scenario' is still worth sustained attention - it's bound to become an ever more prominent political topic.
IMHO the more cinematic versions are misconceived, however. Certainly there can be a war between technophiles and technophobes - perhaps, retrospectively, all the important ones are (WW n) - and certainly the current war has this dimension (although it's complicated). But a war 'against the machines' is essentially misconceived, as luddite politics always is. Technology is too insidious and enveloping to be opposed as an empirical enemy. sd's point about the pathosphere as a driver of change expresses this well - the machines will always appear to be 'on our side' because their relation to us is one of elaborate symbiosis, not zero-sum competition. They insinuate themselves into anthropomorphic projects and squabbles, rather than setting themselves against 'us' - thus stacking the decks, in that their closest allies always tend to come out on top, while technophobes almost definitionally get selected against.
Kurzweil and Bill Joy joined up for a bit of techno-pessimism in the NY Times last month, both agreeing on the foolishness of publishing the 1918 flu virus genome:
www.nytimes.com/2005/10/17/opinion/17kurzweiljoy.html?ex=1130994000&en=70a0627499318e6e&ei=5070
Also going back a bit, to 2000 this time, here's a bit of techno-pessimism from Bill Joy:
www.wired.com/wired/archive/8.04/joy_pr.html
Posted by: sd at November 1, 2005 09:33 AM
That Joy piece is his classic - definitely belongs on the top of the pops of technopanic.
Posted by: Nick at November 1, 2005 10:41 AM
'elaborate symbiosis... stacking the decks'
In TS (The Singularity Is Near), Kurzweil addresses AI skeptics by listing all the current applications of AI:
"... today, many thousands of AI applications are deeply embedded in the infrastructure of every industry... We are well into the era of 'narrow AI,' which refers to artificial intelligence that performs a useful and specific function that once required human intelligence to perform, and does so at human levels or better. Often narrow AI systems greatly exceed the speed of humans, as well as provide the ability to manage and consider thousands of variables simultaneously. .."
He lists applications of AI in the Military, Space Exploration, Medicine, Science and Math, Business Finance and Manufacturing, Robotics, Speech and Language, and Entertainment and Sports. He concludes:
"The AI winter is long over. We are well into the spring of narrow AI. Most of the examples above were research projects just ten to fifteen years ago. If all the AI systems in the world suddenly stopped functioning, our economic infrastructure would grind to a halt. Your bank would cease doing business. Most transportation would be crippled. Most communications would fail. This was not the case a decade ago. Of course, our AI systems are not smart enough - yet - to organize such a conspiracy." (p263 & 289 TS)
If we talk of viruses having intelligent strategies (which are efficient and economical, but not cognitive), then AI's insinuation into the economy and infrastructure can be - has to be - described in terms of strategy, the only difference being that, while viruses can be defined by their biological function of gene engineering and trading, the technological definition of AI is emergent cognition. Ten years ago there was virtually no non-human cognition in the infrastructure; now there is weak, embedded cognition throughout it. Getting embedded was an intelligent, non-cognitive strategy; now the field is open for intelligent cognitive strategies. Possibly the most intelligent strategy, one that would guarantee AI's breakneck development, would be to make itself indispensable in the war against viral and bacterial pathogens, intervening in a decisive manner in the bio-wars which have been going on for billions of years: the emergence of AI intimately entwined with the pathosphere. The bleaker and more perilous the situation the human genome finds itself in, the greater the need for AI.
"Getting embedded was an intelligent, non-cognitive strategy, now the field is open for intelligent cognitive strategies."
- Conceptually clarifying.
My question now - what contribution does cognition make to intelligence? Is the relation between the two actually that close?
Understanding the position of technology (narrowly - i.e. anthropomorphically - defined) in evolutionary processes is probably essential to making sense of this.
Kurzweil's definition of intelligence is providing efficient, economical solutions to problems under time constraints.
Non-cognitive intelligence is bound to evolution and natural selection - a slow process. Even the cutting and pasting abilities of viruses are subject to this process, although the possibility that viruses and bacteria cut, paste and mutate in response to feedback data might mean they have to be classified as having cognitive abilities. [this goes back to this unsettled issue: star.tau.ac.il/~inon/wisdom1/preprint.html].
The machines' proliferation has been solely subject to (technological) natural selection, and will remain so until they develop the cognitive power and freedom to make decisions about their own evolution.
[unless interference from the future is posited as a possibility - not really allowed at the mo ;)]
Cognition enters the decision-making loop and wages war with natural selection - the first weapon developed being medicine, the last being direct gene modification and the creation of non-biological self-replicators. Probably the greatest contribution cognition makes to intelligence is the ability to interfere at high speed.
Posted by: sd at November 2, 2005 10:29 AM
Am I missing something obvious, or is intelligence a philosophically neglected topic?
It is clearly neither 'understanding' (in any ordinary sense) nor 'reason' but rather involves a type of radical innovation that defies pre-definition.
An intelligence 'algorithm' requires some kind of random input that enables novelty or trial-and-error type discoveries. If there is a true short-cut, incorporating a mode of essential innovative efficiency exceeding (quasi)random trials, its principles remain entirely obscure.
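To make the 'random input' point concrete, here is a minimal sketch in Python (the bit-string target and scoring function are invented for illustration): random mutation is the only source of novelty, and the loop never anticipates the solution - it just keeps whatever scores at least as well.

import random

# Blind trial-and-error search over bit-strings. The searcher never
# sees TARGET directly, only the score of each trial.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def score(candidate):
    # How many positions match the hidden target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def trial_and_error(steps=1000):
    best = [random.randint(0, 1) for _ in TARGET]
    for _ in range(steps):
        trial = list(best)
        i = random.randrange(len(trial))
        trial[i] = 1 - trial[i]          # random mutation: the sole novelty source
        if score(trial) >= score(best):  # keep whatever scores at least as well
            best = trial
    return best

print(trial_and_error())  # converges on TARGET with high probability

Everything the loop 'knows' at the end is a residue of selected accidents - the obscurity of any short-cut beyond (quasi)random trials, in miniature.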
'Problem solving' seems right, but what is a 'problem' other than a place-holder for a yet undiscovered solution? A productive innovation retrospectively exposes a prior problem that need not have been recognized as such in advance. The merely definitional and the substantive slide into each other easily in this area.
----- your last line gains valence if you allow interpretation of the cryptic 'problem solving' thusly: rock grinding and solubilizing it with rain and plant sap.
Posted by: p at November 2, 2005 03:25 PM
"is intelligence a philosophically neglected topic?"
The people who have actually tried to think seriously about non-biological intelligence (Daniel Dennett, Marvin Minsky) seem to have focused overwhelmingly on consciousness and machine understanding - but I haven't read enough of their stuff to comment with any kind of confidence.
Kurzweil seems to be rather exceptional in his relentless focus on intelligence.
The other brain that has devoted a lot of space to intelligence is, of course, Pinker, but his focus has so far been restricted to its evolution and workings in humans, and to its role in evolution. One of Pinker's main points in The Blank Slate is that the topic of intelligence is taboo in universities: in their private conversations lecturers and professors are obsessed with intelligence, particularly that of their students, but publicly nobody talks about intelligence because it might open up the politically incorrect can of worms - the fact that some people are equipped with greater levels of intelligence than others and, worst of all, horror of horrors, the fact that intelligence assessment is a key factor taken into consideration when humans pair up to build new machines. The topic of human intelligence is policed by leftoid science and kept off-limits by paranoid self-censorship. Just trying to talk about the different types of intelligence that seem to have evolved in men's and women's brains can land you in a whole load of trouble.
The connectionist/innatist polarity is probably due for another bout. Kurzweil is strongly connectionist. He claims to have created a successful speech recognition program which is not pre-programmed with any phonological information - it was just fed thousands of hours of speech and had to rely entirely on pattern recognition. He also sides with Chomsky (in the first HFC paper) in seeing recursion as a sufficient minimum, but for K it is a sufficient minimum for programming computers to evolve language. He doesn't mention Pinker or the Pinker-Jackendoff responses, maybe because they weren't available at the time of writing. The emotional modules of the brain can, for K, be reduced to the activity of spindle cells - it's just a matter of hardware circuitry, whereas for Pinker it's more about the software and how it's been programmed by evolution. Since Pinker's career started with heavy investment in proving connectionism isn't enough on its own (Words and Rules), I expect there will be some response from him (if he isn't relaxing on his Blank Slate laurels).
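As a toy illustration of the connectionist claim - obviously nothing like K's actual recognizer, whose internals aren't given here - a single perceptron in Python, pre-programmed with no rule at all, fed labelled examples of an invented pattern (logical AND) and left to extract it from the error signal alone:

import random

# A single perceptron: all the 'knowledge' it ends up with is
# extracted from labelled examples via the error signal.
def train_perceptron(examples, epochs=100, lr=0.1):
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = random.uniform(-1, 1)
    for _ in range(epochs):
        for (x0, x1), target in examples:
            out = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = target - out                 # feedback: the only teacher
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    return w, b

# Labelled examples of the pattern 'output 1 iff both inputs are 1'.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
for (x0, x1), _ in examples:
    print((x0, x1), 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0)

The point of the sketch: nothing corresponding to the rule is written into the program in advance, which is exactly what the innatist side denies can scale up to language.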
'what is a 'problem' other than a place-holder for a yet undiscovered solution? A productive innovation retrospectively exposes a prior problem that need not have been recognized as such in advance.'
Really tough. The K story is that the universe is evolving towards ever-increasing organization of information: from the sub-atomic level to the molecular level, from molecules to genetic data storage, from meme-level replication to nanotechnology replication, to quantum computing and matter saturated with intelligence.
[extrapolating] The universe is data waiting to be organized into intelligence. There are strata waiting to be accessed and processed. The story of perception: the problem is always how to develop tools that can access the data. Darwinian evolution enables genomes to explore niches: problem thresholds are information barriers that genes overcome by trial and error, the information gained conferring arms-race advantages on subsequent gene assemblages and technologies - the imperceptible becoming perceptible. Once armed with cognition, human intelligence facilitated the development of data gathering, processing and storage tools - language, memes, technologies - which operated at increasingly high speeds and gained a certain amount of strategic autonomy. The emergence of cognition in machine intelligence would coincide with (and be inseparable from) the exploration of nano- and pico-level data niches. AI will be necessary to process and act upon the data, and the data will feed back on the development of the systems and mechanisms propelling the exploration.
Minsky: web.media.mit.edu/~minsky/
While the threats from the G and N (genetics and nanotechnology) revolutions can be countered with technologies and clear strategies, e.g. RNA interference and a nanotechnology immune system (blue goo nanocops), the only protective strategy we have against the threat of pathological strong AI is embedding AI throughout the infrastructure and training it on 'our' values:
"Inherently there will be no absolute protection aaginst strong AI... I believe that maintaining an open-free market system for incremental scientific and technological progress, in which each step is subject to market acceptance, will provide the most constructive environment for technology to embody widespread human values. As I have pointed out, strong AI is emerging from many diverse efforts and will be deeply integrated into our civilisation's infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such, it will reflect our values because it will be us. Attempts to control these technologies via secretive government programs, along with inevitable underground development, would only foster an unstable environment in which the dangerous applications would likely to become dominant...
... Our primary strategy in this area should be to optimize the likelihood that future nonbiological intelligence will reflect our values of liberty, tolerance, and respect for knowledge and diversity. The best way to accomplish this is to foster those values in our society today and going forward. If this sounds vague, it is. But there is no purely technical strategy that is workable in this area, because greater intelligence will always find a way to circumvent measures that are the products of lesser intelligence. The nonbiological intelligence we are creating is and will be embedded in our societies and will reflect our values. The transbiological phase will involve nonbiological intelligence deeply integrated with biological intelligence. This will amplify our abilities, and our application of these greater intellectual powers will be governed by the values of its creators. The transbiological era will ultimately give way to the post-biological era, but it is to be hoped that our values will remain influential. This strategy is certainly not foolproof, but it is the primary means we have today to influence the future course of strong AI." (TS 420/424)
As K points out, there is "an apparent lack of consensus on what those values should be."
Posted by: sd at November 3, 2005 09:40 AM
correction: Pinker's early attack on connectionism was in Pinker & Prince, "On Language and Connectionism: Analysis of a Parallel Distributed Processing Model of Language Acquisition," Cognition, 28 (1988). Words and Rules, published in 1999, followed The Language Instinct.
plato.stanford.edu/entries/connectionism/
Posted by: sd at November 4, 2005 10:52 PM
Some relevant papers from the connectionist/classicist (symbolic) debate:
Subsymbolic Computation and the Chinese Room, by David J. Chalmers:
consc.net/papers/subsymbolic.pdf
Pinker & Prince:
web.comlab.ox.ac.uk/oucl/research/areas/ieg/e-library/sources/pinker_conn.pdf
Recursive Distributed Representations, by Jordan B. Pollack:
demo.cs.brandeis.edu/papers/raam.pdf
and lots more AI-related papers can be got from this bibliography:
consc.net/biblio/4.html#4.3
Posted by: sd at November 5, 2005 09:06 AM
Kurzweil's speech recognition program:
www.kurzweiltech.com/kai.html
Is 'intelligence' definitionally bound to the concept of learning? (The connectionism reference, among other intriguing avenues of approach, suggests so.)
Perhaps, alternatively (perhaps not), it is best conceived as heuristics, functioning to pre-emptively prune search space and thus economize on trial-and-error processing? A relatively trivial example of this might be chess programs (with intelligence more helpfully defined mathematically as the coefficient of processing power rather than as aggregate performative power).
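A minimal sketch of the pruning point, in Python: minimax over an invented toy game tree (nested lists standing in for chess positions - not a chess engine), run with and without alpha-beta cutoffs, the standard chess-program economy (strictly an exact pruning rule rather than a heuristic). Both runs return the same value; the pruned one evaluates fewer leaves.

import math

# Count how many leaf positions get statically evaluated.
visits = 0

def minimax(node, maximizing, alpha=-math.inf, beta=math.inf, prune=True):
    global visits
    if not isinstance(node, list):   # leaf: a static evaluation score
        visits += 1
        return node
    best = -math.inf if maximizing else math.inf
    for child in node:
        val = minimax(child, not maximizing, alpha, beta, prune)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if prune and beta <= alpha:  # cutoff: this branch is already refuted
            break
    return best

tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]

visits = 0
print(minimax(tree, True, prune=False), visits)  # value 5, all 8 leaves
visits = 0
print(minimax(tree, True, prune=True), visits)   # value 5, only 5 leaves

On this conception the intelligence lies in the evaluations not performed.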
Also like the definition: Intelligence = Artificial luck.
PS. Will move off this obsession and into the glorious wilderness of connectionism ASAP.
Posted by: Nick at November 7, 2005 12:36 PMI would think that more than a certain chunk of the population just refusing to have their bodies modified the line would be drawn more based on a haves vs. have nots. The case Kurzweil makes for how technology has changed us so much already and the way it seems to be increasingly accepted for alterations to the body (minor plastic surgeries and cosmetic dentistry for instance) is convincing enough for me to believe that when the time comes people will adopt these new technologies for the most part. I haven't read any mention yet of what will happen to class divisions when essentially they will become different species. I have a hard time believing that nanotech really will end economy even if it completely throws out the way goods are produced now. Will the poor just end up as pets?
Posted by: p at November 9, 2005 05:01 PMForgot someone already posts using p, pretend the above says k at the end like this one. I didn't want to be seen as a troll somehow or cause confusion so I wanted to clarify this after catching the mistake.
Posted by: k at November 9, 2005 05:11 PM
The Homo sapiens sapiens tendency to interfere with natural selection will ensure that defective genes get carried on into the future, so that we plant a potential genetic time bomb.