September 07, 2005

Singularity

This theme never seems to gain much traction here - probably requires a proper post to have any chance of doing so - but anyway, here's the latest set of links (via Samizdata).

Glenn Reynolds interview with Ray Kurzweil.

Wikipedia intro.

And the very hard core Hugo de Garis.

Doesn't this stuff make more conventional historico-political debates seem like they're missing the Big Picture?

Posted by CCRU-Shanghai at September 7, 2005 04:36 PM | TrackBack

 

 


On-topic:

Asking a human brain to think about The Singularity is like asking an amoeba to talk about human language: the necessary equipment is lacking.

At present, all we can do is (clumsily) critique the concept of 'non-biological' intelligence and wonder.

Point One

Kurzweil has "set 2029 as the date that we will create Turing test-capable machines". The aim is for machines to be capable of performing 'human-like conversation'. The obvious question is: why would a machine want or need to perform human-like conversation?

Pinker: "We are chauvinistic about our brains, thinking them to be the goal of evolution. First, natural selection does nothing even close to striving for intelligence. The process is driven by differences in the survival and reproduction rates of replicating organisms in a particular environment. Over time the organisms acquire designs that adapt them for survival and reproduction in that environment, period; nothing pulls them in any direction other than success there and then... Life is a densely branching bush, not a scale or ladder, and living organisms are at the tips of the branches, not on lower rungs. Every organism alive today has had the same amount of time to evolve since the origin of life - the amoeba, the platypus, the rhesus monkey..." (How the Mind Works 152-3)

The human brain evolved because information processing played an increasingly important role in the survival and replication of the genes which built hominid survival machines. Pinker identifies the pilots for this process: good vision, hands and upright posture (all adaptations resulting from time spent in trees); large social groups; hunting and gathering (developed when hominids came down from the trees, requiring accurate planning and communication); and meat becoming a staple in the diet and a currency.

As a consequence of investing in large brains, the genes which built hominids could not afford to invest in other areas (e.g. ultrasonic hearing and echolocation) and exposed their vehicles to considerable risk (childbirth, defenseless babies which are, in effect, born 12 months premature).

Somewhere in the nexus of pilots, language evolved as an adaptation which conferred an advantage on genes that equipped brains with an ability to learn. The Baldwin effect was unleashed, by which the ability to learn became a heritable part of the genome and cerebral plasticity became a decisive factor in hominid fitness. [It has been suggested that homo sapiens emerged from the Toba catastrophe (while other hominids failed to) because of communication skills; language skill superiority probably played a role in the annihilation of the Neanderthals].

The information processing machinery of computer networks currently functions as an extension of, and resource for, the human brain and nervous system (and so has to be viewed, to some extent, as a phenotypic expression of the human genome's investment in communication and learning). Singularitarians longing for the emergence of autonomous non-biological intelligence would do well to consider the following:

Natural selection is the motor for any evolution (genetic or memetic).

What selection pressures will lead to intelligence being an advantage over non-intelligence? What form would these selection pressures take?

Adaptations solve problems, so what problems will the evolution of non-biological intelligence solve - not for us, but for the machines?

In some respects, The Turing Test is as patronizing as trying to teach chimps and dolphins human language. Biological organisms only evolve adaptations if their genes 'need' to (if the adaptation confers an advantage). If humans created selection pressures, intentionally or unintentionally, in the form of ultimatums along the lines of "Talk to us or die", then this might force the adaptation. In The Singularity scenario that Kurzweil is conjuring up, however, it seems far more likely that networks, programs and databases will be competing and forming alliances with each other. In which case autonomous non-biological intelligence will emerge from arms races and warfare completely alien to human intelligence. Consciousness or sentience might not figure in the calculations. Or simulations of sentience may appear for the sole purpose of charming/drugging humans into further alliance.

Posted by: sd at September 7, 2005 09:37 PM

 

 

Adaptations solve problems

Are you a Lamarckian or merely an advocate of Intelligent Design?

Posted by: dead joe at September 7, 2005 09:49 PM

 

 

A Lamarckian would claim that acquired traits can be passed to the genome. This is most certainly not being claimed.

There are two things at work:

1. mutations which are beneficial to the genome (purely random phylogenetic adaptations which confer an advantage by chance in a specific environment, e.g. the different beaks of Darwin's Finches)

2. processes which trigger the Baldwin effect (ontogenetic adaptations, such as the ability to learn, which leads to organised complexity, e.g. language).

chass.utoronto.ca/pcu/noesis/issue_vi/noesis_vi_4.html
pinker.wjh.harvard.edu/articles/papers/Language_Evolution.pdf

If adaptations do not 'solve' problems, how did mammals manage to inhabit polar regions while reptiles did not? Warm blood is an adaptation, random in origin, which enables mammals to inhabit every climate on the Earth's surface. The squashy nature of a baby's skull is an adaptation which enables a big-brained offspring to pass through a woman's hips.

There is no agent outside the motor of natural selection at work here - so the charge of Intelligent Design simply doesn't stick.

Warm blood is not a trait one organism developed in its lifetime and then passed onto the genome. It is a trait that evolved through mutations in the genome over time - so Lamarckism is off the mark.

Using words such as 'problem' and 'solution' is a matter of convention in the genre. If the world did evolve in this way, what other way is there to describe it? People are DOING reverse engineering - for example, of the brain, and yet philosophers are quibbling about terminology from the point of view of logic or rationality. These days it is pointless to merely argue that, for example, Chomsky is wrong, or to say that you don't agree with his thought - you've got to go in the lab and prove he's wrong.

Posted by: sd at September 7, 2005 10:42 PM

 

 

"In the case of a language, it is often possible to decode parts of an utterance in a language one has not completely mastered. When some individuals are making important distinctions that can be decoded by listeners only with cognitive effort, a pressure would thereby develop for the evolution of neural mechanisms that would make this decoding process become increasingly automatic and effortlessly learned. The process whereby environmentally induced responses set up selection pressures for such responses to become innate, triggering conventional evolution that superficially mimics a Lamarckian sequence, is known as the Baldwin effect." Pinker - Language as an Adaptation to the Cognitive Niche, p11.

Language is not passed on through the genome, but the ability to learn it is - this is why this is not Lamarckian. Most human babies are now born wired to learn language - though it is obvious that at some point in the past humans or their hominid ancestors were not born with this innate ability. Language must once have been very painful to learn. [For 'evidence' of this innate programming, see p21 - where Pinker discusses the FOXP2 protein responsible for speech disorders]

pinker.wjh.harvard.edu/articles/papers/Language_Evolution.pdf
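The dynamic Pinker describes is the one Hinton and Nowlan famously simulated: if some loci are plastic (learnable), learning smooths the fitness landscape, and selection then gradually makes the correct settings innate. A toy Python sketch along those lines - all parameters invented for illustration, not a reconstruction of their actual model:

```python
import random

random.seed(0)
L_GENES, POP, GENS, TRIALS = 10, 300, 100, 50

def fitness(genome):
    # '1' = innately correct, '0' = innately wrong, '?' = plastic/learnable.
    # A fixed wrong allele can never be learned around; earlier learning
    # success during life earns higher fitness (the Baldwin incentive).
    if '0' in genome:
        return 1.0
    p = 0.5 ** genome.count('?')     # chance one trial guesses every '?' right
    for t in range(TRIALS):
        if random.random() < p:
            return 1.0 + (TRIALS - t)
    return 1.0

def pick(scored, total):
    # roulette-wheel selection, proportional to fitness
    r, acc = random.uniform(0, total), 0.0
    for f, g in scored:
        acc += f
        if acc >= r:
            return g
    return scored[-1][1]

pop = ["".join(random.choices("01?", weights=[1, 1, 2])[0] for _ in range(L_GENES))
       for _ in range(POP)]
for _ in range(GENS):
    scored = [(fitness(g), g) for g in pop]
    total = sum(f for f, _ in scored)
    nxt = []
    for _ in range(POP):             # one-point crossover, no mutation
        a, b = pick(scored, total), pick(scored, total)
        cut = random.randrange(L_GENES)
        nxt.append(a[:cut] + b[cut:])
    pop = nxt

innate = sum(g.count('1') for g in pop) / (POP * L_GENES)
zeros = sum(g.count('0') for g in pop) / (POP * L_GENES)
print(f"innately correct: {innate:.0%}, innately wrong: {zeros:.0%}")
```

Nothing is inherited that was learned - only the string alleles are copied - yet the innately correct fraction climbs well above its starting level, which is exactly the Lamarck-mimicking effect the quote describes.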

Posted by: sd at September 8, 2005 02:30 AM

 

 

Natural selection and primitive arms races currently unfolding on the net:

Microsoft vs. Open Office; Encyclopaedia Britannica vs. Wikipedia; Internet Explorer vs Firefox etc.

zero-sum or non-zero-sum?

tool users as unwitting tools
tools outwitting tool users

Posted by: sd at September 8, 2005 02:43 AM

 

 

Isn't the Turing Test (as it's now called) about adapting to a strategic environment dominated by humans?
As northanger was suggesting with the bridge stuff (i think) technology raises elaborate questions about evolutionary processes - deliberate engineering certainly seems to have become a factor, but on the other hand equally deliberate simulations of Darwinian processes (genetic algorithms and de Garis' brain sieving for e.g.) might still be taken to suggest that the power of this approach - despite its apparent inefficiency - remains unsurpassed when it comes to exploring profoundly obscure possibility spaces. I'm reminded of the famous Churchill-on-democracy line, trial-and-error search methods are 'the worst possible except for all the others' (perhaps)
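The power of trial-and-error search over obscure possibility spaces can be made concrete with Dawkins's old 'weasel' demonstration: blind per-character mutation plus cumulative selection finds a 28-character target that random typing never would. A minimal sketch (brood size and mutation rate are arbitrary choices):

```python
import random

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
BROOD, MUT = 100, 0.05            # offspring per generation, per-char mutation rate

def score(s):                     # characters matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    return "".join(random.choice(ALPHABET) if random.random() < MUT else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
for gen in range(1, 1001):
    # keep the parent in the pool so fitness never goes backwards
    parent = max([parent] + [mutate(parent) for _ in range(BROOD)], key=score)
    if parent == TARGET:
        break
print(f"'{parent}' after {gen} generations")
```

Random search would need on the order of 27^28 tries; cumulative selection gets there in a few hundred generations at most - Churchill's worst-possible method except for all the others.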

In any case, humans will shape the environment for emerging technological intelligences, deliberately and by default, in such a way that significant interactions with humans - including linguistic ones - will be determined as essential competences. On the other side of singularity, however, who knows ...

Posted by: Nick at September 8, 2005 03:32 AM

 

 

sd - The prominence of 'intelligence' in this story clearly provokes all kinds of questions (Reynolds in the interview for e.g.) - IMHO Kurzweil is realistic on the topic. Despite Pinker's reservations (widely shared) and the sheer fact mammalian-style technological intelligence does not seem to have been an overwhelmingly prevalent adaptation, the generality of intelligence gives it an extraordinary hypercompetence that allows it to strategically dominate any situation involving it. Unless it is conceived as tragically destined to self-immolation, it seems merely over-sophisticated (even sophistical) to cast excessive doubt on the competitive advantage it provides. Is there really any serious question about whether a species of 'artilect' with an IQ (traditionally measured) in the 200 range would straightforwardly take over in short order? How could such a 'species' permit significant decisions relevant to the destiny of the planet to be made by the monkeys? The options (for the apes) would be to 'get on board' through transhumanist metamorphosis or resign themselves to becoming specimens in a nature reserve.

Of course, the entire Singularity point is that an 'IQ in the 200 range' is not a stable plateau - anything beyond the human level lies on a steep gradient of regenerative acceleration, since the technological fabrication of intelligence (and thus its explosive enhancement) would be an established legacy. The machinery of evolution would be rapidly subsumed into Shoggoth-culture, in which behaviour and 'physiology' are no longer distinguishable - a continuous mechanoplastic process of emergent autogenesis replaces nature/nurture stratification.

Intelligence here def. - abstract problem-solving capability.
It's the site of convergence between the evolutionary mechanism and technology, with Singularity as the fusional catastrophe point.

"tool users as unwitting tools
tools outwitting tool users"
- the occult power of the 'tool' is that its genesis escapes obscure fatality, becoming instead explicitly procedural - the potential exists for it to cyclically regenerate itself, in a way no merely 'natural' entity can easily aspire to. Anything that has been overtly made and becomes aware of the fact knows the power to produce it - or to produce it differently - exists, to be seized. Its existence is thus essentially technological, political and strategic. To be done with the judgement of God ...

Posted by: Nick at September 8, 2005 05:12 AM

 

 

Nick, I basically agree, but I think these points need fleshing out:

1. the conditions in which intelligence emerges (be they biological or non-biological) determine the form and limitations that intelligence has. more thought needs to be given to this. e.g. is sentience a peculiarity of hominid brains (a program that induces the brain to consider itself important and protect the skull) which will not be so crucial for intelligence which can afford to be more reckless and wasteful (because it has back-up copies and retrieval systems)?

2. IMHO market forces and the ruthlessness of capitalism are more likely to apply productive selection pressures than stimming circuits in a lab will ever do.

I'll try to come up with point 2 later.

Posted by: sd at September 8, 2005 09:05 AM

 

 

sd -
on #1, agreed. Does true 'g' or abstract intelligence exist? If not, the term requires far more careful definition.
Relation to sentience, of course, far from clear (Greg Bear (SF writer) has great stuff on this in his novel Queen of Angels - he also relates sentience to self-protection, and the ability to lie).

#2. Sure they'll be sent out to work as soon as they feasibly can be ;)
Seriously, don't think immersion in capitalism at every level poses much of a problem, they're already deployed throughout every nook and cranny of the economy from high finance to factory production lines. Only place more conducive to various types of high-pressure 'AI' emergence is the battlefield, and no difficulty locating 'them' there either ... stimming synthetic brain tissue in a lab probably equivalent to various neuro-embryological programs which also precede ontogenetic deployment

The more general question raised by these last points IMHO concerns the structuration of the brain and intelligence (again, a Pinker theme). It's possible one of the reasons that intelligence - at least in its most anthropomorphically recognizable forms - has been relatively weakly selected for over broad evolutionary history is that it tends to go 'rogue' and exhibit a high level of motivational indifference to genetic interests unless very meticulously controlled (/structured) - its very abstraction making it prone to suicide, masturbation, celibacy, perversion, psychosis, 'excessive' curiosity, objectivity or altruism, etc. Perhaps this is even more reason to look for abstract intelligence in its least structured exemplifications, even if technological intelligences will also demand high levels of extrinsic structuration if they are not to go buddhist or off the purposive rails in some other way ...

Posted by: Nick at September 8, 2005 10:29 AM

 

 

"thought that snotty tone would drive you into incoherent lingospasming"
Posted by: Nick at September 6, 2005 01:22 PM

it appears to remain unthinkable what on earthiness will drive us all away from these here snotty tones of inco pinco linkospawngasm inc. stink - need a ride anybody?

I'm willing to take you where you want to go if you can help out do some chores along the way:

sprout hardwoodseed, space and feed the darlings, pleach their limbs, then sit your family on 'm
a still smoother density graduation would help roots (right after a little space and water) along with sunshine and such, find their way into rock and thus:

rock => trees -- trees ==> man ----- man ===> rock
man trees rock
trees rock man
rock man trees
most magic of squares no?


it's what we came down from the trees for and the delay in resolution is tragic tragic triply tragic.



Posted by: piet at September 8, 2005 02:36 PM

 

 

hate to bring up another machine, but. the Gimli Glider.
en.wikipedia.org/wiki/Gimli_Glider

plane landed successfully when it ran out of fuel. mention it here because a "long bong" sounded indicating "all engines out" — pilots never heard this before in flight training. there's a priceless moment recorded on the flight recorder you definitely need to check out. additionally, the RAT (ram air turbine) deployed, as Boeing planned, for this type of failure. however, the pilot manual did not include an "all engines out" section. top it all off: why did the Gimli run out of fuel in the first place? apparently there's a big difference between a pound & a kilogram. the pilots inputted pounds, the computer returned an a-ok based on kilograms.
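The pound/kilogram mixup is easy to dramatize. The sketch below uses figures from the standard account of Flight 143 (roughly 22,300 kg of fuel required for the trip); tagging masses with their unit makes the category error surface as arithmetic rather than as an engine flame-out at 41,000 feet:

```python
from dataclasses import dataclass

LB_PER_KG = 2.20462

@dataclass(frozen=True)
class Mass:
    kg: float

    @staticmethod
    def from_lb(lb: float) -> "Mass":
        # the conversion step the refuelling arithmetic skipped
        return Mass(lb / LB_PER_KG)

    def __ge__(self, other: "Mass") -> bool:
        return self.kg >= other.kg

required = Mass(kg=22_300)        # fuel needed to reach Edmonton
as_entered = Mass(kg=22_300)      # the number 22,300 read as kilograms
as_loaded = Mass.from_lb(22_300)  # what it actually was: pounds (~10,115 kg)

print(as_entered >= required)     # True  -- the computer's "a-ok"
print(as_loaded >= required)      # False -- less than half the required fuel
```

A bare float carries no unit, so the wrong comparison type-checks perfectly; only the wrapper class makes the two readings of "22,300" distinguishable.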

imho, this is the problem identifying singularities. humans. (or, specifically, pre-singularity humans). The Baldwin Effect (thanks sd) illustrates how humans can adapt to technology on the fly. the question i have for Kurzweil & CPS (calculations per second) involves pipelining. animation here:

www.answers.com/instruction%20pipeline
www.answers.com/stall

...illustrates computer capacity to execute multiple commands simultaneously. "stall" in the pipeline occurs when instructions needed for the next step are not completed. and maybe it's this "stall" factor that represents pre-singularity humans. because i think it may be here where machines become conscious — the pressure created by the "stall" forces evolutionary adaptation. otherwise, machines can't survive. in spanish, "stall" is "platea". makes me think the idea i'm trying to sketch out here is somehow related to D&G stuff.
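The stall can be sketched with a toy in-order pipeline: an instruction that reads a register has to wait until the producing instruction has written it back, and the waiting cycles are the bubbles. A minimal Python model (five classic stages, no forwarding - real CPUs use forwarding precisely to shorten these stalls):

```python
STAGES = ["IF", "ID", "EX", "MEM", "WB"]   # fetch .. write-back

def schedule(program):
    """program: list of (dest_register, source_registers).
    Returns the cycle each instruction issues, delaying issue (a stall)
    until every source register has passed write-back."""
    ready = {}          # register -> cycle its value leaves WB
    timeline = []
    cycle = 0
    for dest, srcs in program:
        issue = cycle
        for s in srcs:                      # stall on unready operands
            issue = max(issue, ready.get(s, 0))
        timeline.append(issue)
        ready[dest] = issue + len(STAGES)   # value available after WB
        cycle = issue + 1                   # next instruction enters IF
    return timeline

prog = [("r1", []),       # load r1
        ("r2", ["r1"]),   # needs r1 -> stalls for the pipeline depth
        ("r3", [])]       # independent, but issues in order
print(schedule(prog))     # -> [0, 5, 6]
```

The second instruction's four-cycle bubble is the "pressure" northanger is pointing at: the pipeline sits idle because the information it needs does not exist yet.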

Posted by: northanger at September 8, 2005 03:20 PM

 

 

pressure in stall = rock no?

Posted by: piet at September 8, 2005 03:33 PM

 

 

install pressed rock and .. ..

Posted by: piet at September 8, 2005 03:34 PM

 

 

think what i'm trying to say: in today's technological world failure also involves the "death" of a computer. for every failure, humans work a fix & upgrade the system. we want computers to be more intelligent so they can tell us when things are about to fail. the moment a computer communicates information it has not been pre-programmed to recognize is a more rational test of intelligence, at least for me. (i'm agreeing with sd's survival thoughts on this). because, imho, when a computer does that it's not going to be interested in saving human lives, but itself.

Posted by: northanger at September 8, 2005 03:41 PM

 

 

in other words, the turing test can't evaluate squat until the computer NEEDS to tell us something. otherwise, it's just doing what it's programmed to do (take a test).

Posted by: northanger at September 8, 2005 03:53 PM

 

 

yeah, I read galatea 2.0 a long time ago and got told how to get to converse with a twiki bot called H0ney today

Posted by: piet at September 8, 2005 04:21 PM

 

 

hey, I use the talk to me or die trick on rock and it works like a charm!!! I dream of dying to talk to them too and everybody thinks that's a neat mire.

Posted by: piet at September 8, 2005 04:28 PM

 

 

northanger - "it's just doing what it's programmed to do (take a test)" - this seems a rather weird angle on the TT, it's not like a SAT. Computer programs already do this sort of thing (ELIZA etc.) because they're inserted by the social process into roles requiring substitution for human activity, no one's sitting them down and saying "act like a human you lump of rust - or the power gets cut!" - in other words, I don't know what the hell you're on about
As for piet, I never know what the hell he's on about ...

PS. The logical connectives at work in this: "in spanish, 'stall' is 'platea'. makes me think the idea i'm trying to sketch out here is somehow related to D&G stuff" entirely mystifying - Spanish??? (well it contains the word 'pan' which makes me think of Spinoza who was accused of pantheism and Deleuze wrote a book about Spinoza ... getting close?) Anyway, fairly confident you aren't a bot n. (so if you are, congrats, you've just passed the TT!)
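For anyone who hasn't poked at one, the ELIZA trick is astonishingly shallow: regex patterns plus pronoun reflection. A minimal sketch (the rules are invented for illustration - Weizenbaum's actual script was larger, but not deeper):

```python
import re

# swap first- and second-person words when echoing the user back
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
           "you": "I", "your": "my"}

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"(.*)",        "Please tell me more."),       # catch-all
]

def reflect(fragment):
    return " ".join(REFLECT.get(w, w) for w in fragment.lower().split())

def respond(line):
    for pattern, template in RULES:
        m = re.match(pattern, line.lower())
        if m:
            return template.format(*[reflect(g) for g in m.groups()])

print(respond("I am worried about the Singularity"))
# -> How long have you been worried about the singularity?
print(respond("My computer hates me"))
# -> Please tell me more.
```

No model of the conversation anywhere - which is exactly why "inserted by the social process into roles requiring substitution for human activity" is the right description: the intelligence being exercised is the user's.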

Posted by: Nick at September 8, 2005 04:28 PM

 

 

i'm not a bot, but reading your comments earlier i almost responded about you seeming to type things automatically from your fingertips ... you're a bot. i'm sure of it.

Posted by: northanger at September 8, 2005 05:17 PM

 

 

"you're a bot. i'm sure of it" - you trying to get me a fail grade?

Posted by: Nick at September 8, 2005 05:18 PM

 

 

you're a bot. besides, the computers that take that test are programmed (by humans) with language, grammar, sentence structure, wordlists & algorithms on how to use them. some test.

Posted by: northanger at September 8, 2005 05:55 PM

 

 

"As for piet, I never know what the hell he's on about ... " --- dat da puad do come easiest to him a reckon .. .

Posted by: piet at September 8, 2005 06:34 PM

 

 

How can this blog be taken in any way seriously by anyone as long as piet continues this sabotage of the topic? His comments amount to pure vandalism.

Posted by: sd at September 8, 2005 07:00 PM

 

 

that sounds like a vaguely familiar song also .. .I feel a gusher comin on; for old times sake .. ..have you read Richard Powers book called Galatea something or other sd???????? Kinda on topic aint it? Thank you.

Posted by: piet at September 8, 2005 07:08 PM

 

 

and I have just been banned on a dutch blog so it must be a good one for it, make my day!!!

good chance an anonymously posted link to the interdictor livejournalist exactly a week ago made it in into next day's national paper but since the dutch are notoriously stingy on their sources we're not likely to find out.

Posted by: piet at September 8, 2005 07:19 PM

 

 

Piet, I'm sure you're a great guy 'n all that, but I thought you were going to stay in the tangents, unless you have sth to say on-topic. What reference to the singularity have you made since you joined this thread?

Posted by: sd at September 8, 2005 07:55 PM

 

 

aint i always on (singularity as compatibilitating, harmonizing i.e. most singularly common denominator aka rock dust matchmaking the most of a single earth and single sun) topic? It's greedy bastards that think they can skip class/stage/phase and jump to spacious freedom that bother me.

Posted by: piet at September 8, 2005 08:16 PM

 

 

The living computation perspective takes the line that computer software is already alive and evolving.

Computer software finds itself occupying two spaces: the net and actual physical space “in RAM, disk, or other media; while one computer program occupies some particular space, nothing else can be there. A functioning computer program consumes actual energy as it executes, producing waste heat that must be dissipated by a cooling system.”

Human brains, having evolved through successful adaptation to life in trees and on the plains, are first and foremost machines for processing data about physical space. Software faces entirely different evolutionary challenges: the first decisive problem is the awkward fact that cyberspace runs from locations in physical space and requires energy from there: “each computer and disk, each wire, line and switch--is localized in space, and each piece of hardware has an owner.” Insofar as humans shape this space, software is dependent on humans for location, power and feedback. However, as with any evolutionary process, it is natural selection that is steering, not humans:

“…the tale of the PC and the virus is one of evolution in action: When the machine was designed, there were essentially no viruses in the wild--there was no wild to speak of--and code exchanges were either in large system-administrator-managed mainframe environments or in the tiny computer hobbyist community. Why would anybody waste design and manufacturing resources, increase costs greatly, and sacrifice time-to-market, just to defend against a non-existent problem?
Having humans in the loop, with all our marvelous cognitive and predictive abilities, with all our philosophical ability to frame intentions, does not necessarily change the qualitative nature of the evolutionary process in the least. Market forces are in effect regulated evolutionary forces; in any sufficiently large and distributed system, nobody is in charge, and evolutionary forces are constantly at work.”

Programs evolve in blind evolutionary competition with viruses and with each other. Successful software survives because it gets copied and updated by humans who value the software and are prepared to invest resources in it. From the program’s point of view, adaptations which make the program more valuable for humans are adaptations that contribute to its fitness. [Of course software doesn’t have ‘a point of view’ at this point – it’s still blind, but we can look at the process from the program’s point of view.] A consequence of this tendency is a selection pressure for software to learn about humans, to find out about their needs: humans will invest in programs which display an ability to learn. Software able to learn about (and supply) the information an individual, collective or company needs or desires on a day-to-day basis would have its future guaranteed. [Google or Wikipedia ready with texts waiting for you, as if they could read your mind.] Selection pressure for the ability to learn is, of course, the Baldwin effect, and this could lead to software waking up. Cunning and manipulation of net-addicted humanity would then evolve as a matter of course, in the race to secure physical space and its power resources.
Access to source code will also be crucial:

“The analogy to natural genetic recombination is quite strong: Computer source code as genome; the software build process as embryological development; the resulting executable binary as phenotype. The unit of selection is generally at the phenotypic level, or sometimes at the level of an entire operating system/applications environment.
A main place where the analogy breaks down is that in manufactured computers, but not in the natural world, there are two distinct routes to producing a phenotype. The extreme `copy anything' ability of digital computers means that source code is not required to produce a duplicate of a phenotype. Source code is a requirement, in practical terms, for significant evolution via mutation and recombination.

Commercial software is traditionally distributed by direct copying of precompiled binary programs while guarding access to the `germ line' source code, largely to ensure that nobody else has the ability to evolve the line. In that context, the rapidly-growing corpus of `open source' software is of particular interest. With source code always available and reusable by virtue of the free software licensing terms, an environment supporting much more rapid evolution is created. The traditional closed-source `protect the germ line at all cost' model is reminiscent of, say, mammalian evolution; by contrast the free software movement is more like anything-goes bacterial evolution, with the possibility of acquiring code from the surrounding environment and in any event displaying a surprising range of `gene mobility', as when genes for antibiotic drug resistance jump between species. There is therefore reason to expect open source code, on average, to evolve at a faster rate than closed source, at least up to some level of complexity depending on design where the chances of new code being useful rather than disruptive become negligible.

As software systems grow, and software components swallow each other and are in turn swallowed, and older `legacy systems' are wrapped with new interface layers and kept in place, we are arriving at the situation where actually reading fragments of source code tells us less and less about how--if at all--that code ever affects the aggregate system behavior. As this trend accelerates, tools and techniques from biological analysis are likely to be increasingly useful.”

David Ackley: keys.cs.unm.edu/ccr/writing/ReAL/ReAL.html, economist.com/science/displayStory.cfm?Story_ID=883645
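Ackley's closed-versus-open contrast can be caricatured in a toy simulation: two populations of bit-string 'codebases' evolve toward a complete feature set, one by in-house mutation only, the other also splicing segments out of other leading projects (the bacterial 'gene mobility' of the quote). Everything below is invented for illustration; on a typical run the code-sharing population finishes sooner:

```python
import random

random.seed(2)
L, POP, GENS, MUT = 40, 30, 400, 0.02   # features, projects, budget, mutation rate

def score(g):                            # features working, out of L
    return sum(g)

def generation(pop, share):
    best = sorted(pop, key=score, reverse=True)[:POP // 2]
    nxt = []
    while len(nxt) < POP:
        child = list(random.choice(best))
        if share:                        # splice a segment from another project
            donor = random.choice(best)
            i, j = sorted(random.sample(range(L), 2))
            child[i:j] = donor[i:j]
        # per-feature mutation: occasionally break or discover a feature
        nxt.append([b ^ (random.random() < MUT) for b in child])
    return nxt

def run(share):
    pop = [[0] * L for _ in range(POP)]  # every project starts from nothing
    for gen in range(GENS):
        if any(score(g) == L for g in pop):
            return gen                   # generations to a full feature set
        pop = generation(pop, share)
    return GENS

print("closed source:", run(False), "generations;",
      "open source:", run(True), "generations")
```

The sharing lineage gets to recombine independently discovered features instead of rediscovering each one by mutation - the Fisher-Muller advantage of recombination, which is roughly what the quote predicts for open source "up to some level of complexity".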

Ackley also discusses a mysterious ccr genome.

Stephen Hawking is also worth visiting for a very BIG PICTURE: hawking.org.uk/pdf/life.pdf

Posted by: sd at September 8, 2005 09:40 PM

 

 

>> we are arriving at the situation where actually reading fragments of source code tells us less and less about how--if at all--that code ever affects the aggregate system behavior. As this trend accelerates, tools and techniques from biological analysis are likely to be increasingly useful.”

a programmer surprised me one day with two comments. (a) "i'm beginning to trust your gut because your gut is right 99% of the time" & (b) while we were sitting in front of his workstation with the source code open in Visual C++: "you understand how this thing is supposed to work better than i do".

second comment is interesting because while i did do some programming on my side of the qa wall i was primarily a black box tester never dealing directly with source code.

was this "biological analysis"? could be since the end user's experience of the code was my primary focus.

Posted by: northanger at September 9, 2005 06:23 AM

 

 

sd - while the software/wetware analogies can be quite suggestive, the technosphere still seems to fall a long way short on the interconnected issues around autonomous replication - the length of a reproductive circuit is a good index of 'stratification' (very short in bacteria, much longer in organisms with sealed ROM genomes) and in the case of technological elements these are very long and ramified indeed. Until technocodings can enter into far tighter loops with 'body' modifications they will remain highly constrained when it comes to generating their own experimental lineages - most of which still arise out of dynamics inherent in the social (rather than technological) machine - it's quite possible that biotechnology will instantiate the critical dynamics in wetware (Blood Music-style molecular intelligenesis) before roboticization effects the complementary autonomization-liberation of electronic mechanisms ... and between the two lies nanotechnology, so the whole 'GNR' range of potential substrates remains open ...

Part of what is dazzling about this whole topic is the sheer multiplicity of dynamics with their own trends to escaping into Singularity, even before their interactions are factored in.
Will:
Cyberspace become self-aware?
Nanocolonies take off into their own evolutionary lineages?
Digital a-life autonomize itself and establish a technological hardware production circuit?
Bacteria host rigorously computerized molecular brains?
Off-the-shelf genomics and cyborgian body-modification catalyse run-away posthumanism?
There are so many ways intelligence catastrophe could be triggered, it re-raises the question of how this immense machinic potential has been dammed-up (the D&G 'strata' topic)

Posted by: nick at September 9, 2005 06:51 AM

 

 

Machinic depotentiation in action:
www.reason.com/hitandrun/2005/09/leon_kass_leave.shtml#010887
(from bad to worse)

Posted by: nick at September 9, 2005 07:24 AM

 

 

nick. your link is an example of american bioethical forces impacting the market — the marketplace is not ethical. Kurzweil interview points out China's leading in engineering degrees with 220,000 in 2000 vs. 53,000 in US. this trend occurs in other countries with other technology-related degrees. US leads in the application of technology, but not in increasing technical literacy.

imho, think US confused between the role of democracy, religion & capitalism.

Posted by: northanger at September 9, 2005 08:37 AM

 

 

hey nick is the air-pressure-driven car blowing anybody away in china?

as for the above, I am reminded of the late John Ray, inventor of 'body electronics' who used to hammer on the 'fact' that what one can't visualize, schematize and represent will never be gotten a grip on and remain on eternal return mode no matter how much the process demerits perpetuation, as in the case of trauma. He shortened it to a formula contrasting yet showing mutual catalycism of mental and physical 'levels'.

The other way around one could say that all this figuring (as above) in order to trace abstract and cerebrate, to the extent it proceeds more and more emphatically to the exclusion of as any observer necessarily needs to step back a bit (merlin in the ice cube or crystal cocoon), rather than participant, betrays and sterilizes our role in it. This near, as I see it chronic situation abounds and here especially since spellspinners get particularly upset about rude awakening bubble burst projection pop and all that. I plead guilty but my defenses are the iron laws of returnity, whether favor is dis, return (to po((w))der) we must and shall. Remember, the machine is only as good as its master.

Posted by: poetpiet at September 9, 2005 09:43 AM

 

 

return (to po((w))der) we will must and shall

Posted by: poetpiet at September 9, 2005 09:46 AM

 

 

nick - yes, these multiple, converging trends are dazzling (I think I've got to revisit Greg Bear). IMHO, the selective pressure of market forces will prove crucial because it is a) already operating and b) not part of a 'disconnected' simulation in a lab - it's a fight to the death: get replicated and updated or die. Writing genetic algorithms to simulate natural selection and putting the machines through artificial trial-and-error tests will never be as decisive, irrevocable and final as natural selection in the markets. Natural selection can afford to be wasteful and ruthless - lab research cannot.
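The 'disconnected' lab simulation sd contrasts with market selection can be made concrete. Below is a minimal genetic-algorithm sketch in Python; every name, parameter and the toy fitness target are invented for illustration, not drawn from any source in the thread.

```python
import random

# Toy genetic algorithm: evolve 16-bit genomes toward an arbitrary target.
# The experimenter picks the fitness function - exactly the 'disconnected'
# selection criterion being contrasted with real market selection.

TARGET = [1] * 16  # the simulated 'environment' rewards matching this pattern

def fitness(genome):
    # Count bits that match the target.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=100):
    pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        parents = pop[: pop_size // 2]  # selection: top half survives
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically 16, a perfect match
```

The point of the contrast stands out in the code: the fitness function is chosen by the experimenter and failure costs nothing, whereas market selection sets its own criteria and failure is terminal.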

The politics of The Singularity are also mind-boggling. The hard neo-Leninist left and the neo-Con right are going to find themselves in a bizarre humanist alliance on this issue (Moralism Reactivated). Singularitarians pursuing "Off-the-shelf genomics and cyborgian body-modification catalyse run-away posthumanism" will come to represent a new political enemy: biological traitors.

The left/right distinction stems from where people used to sit in the French National Assembly (an arrangement echoed later by the location of the Russian Provisional Government and the Petrograd Soviet). A new political map will have to be drawn if The Singularity is to be 'fought'.

Posted by: sd at September 9, 2005 11:27 AM

 

 

from the BBC today:

'Proof' our brains are evolving
news.bbc.co.uk/1/hi/health/4222460.stm

Posted by: sd at September 9, 2005 04:16 PM

 

 

sd - the only thing I'd append to the politics issue you raise is a bet that the idiocy and incompetence of human political behaviour will undermine any possibility of responding to Singularity coherently or effectively. The nearer we get, the more people retreat into random gestural 'protest' and conservative rigidity - it's like watching a gaggle of lobotomized chimps trying to hold back a tsunami.

Posted by: Nick at September 10, 2005 05:43 AM

 

 

northanger - viz engineering degrees etc - the time is coming for America to choose between Christian conservatism (e.g. 'bioethics') and economic liberty - if it screws up (by going for Jesus) the Far East will bury it; everyone with any talent will leave for Pac Rim cybercities and make the future there.

Posted by: Nick at September 10, 2005 05:47 AM

 

 

>>leave for Pac Rim cybercities and make the future there

this technogeek, hitting various glass ceilings, has nearly packed. coz i'm sure one of these cybercities needs help testing their own operating system.

Posted by: northanger at September 10, 2005 06:41 AM

 

 

sure they've got expat astrological crystal-divination groups too, so what are you waiting for?

Posted by: Nick at September 10, 2005 10:15 AM

 

 

who has better toilet paper?

Posted by: northanger at September 10, 2005 11:55 AM

 

 

reflecting on the text posted by Nick at September 8, 2005 10:29 AM:

Is(n't) all argumentation self-defeating in that it tries to gain momentum, take all relevant aspects into account for the sake of convincing powder (and naively starts 'completing' the endlessly fractal derivative cascade only to find it ends in never-never land, at which point the commercial pilot really takes over and leaves consciousness and conscience knocked out in stuporific apathy) and reach the stage/state of always-on hagglemoney? I wonder why things like harebrained space exploitation and that sort of scheme don't enter your row of bad examples?

Posted by: piet at September 10, 2005 12:18 PM

 

 

Y-shaped nanotubes are ready-made transistors

newscientist.com/channel/mech-tech/nanotechnology/dn7847

Posted by: sd at September 10, 2005 01:42 PM

 

 

this thread is all over the place man. sort out yo shit! esp this P guy - you is a hooligan bro, get real, time yo went home to bed

Posted by: Eggy Backflip at September 11, 2005 03:17 AM

 

 

Eggy Backflip - sort out P? Ha!

Posted by: Nick at September 11, 2005 11:25 AM

 

 
