X-Message-Number: 30145
From: 
Date: Sun, 16 Dec 2007 10:59:24 EST
Subject: "singularity"

In math, a "singularity" occurs where the independent variable, time, is finite but a dependent variable becomes "infinite" (unbounded). The "singularity" of tech advancement is more properly called a "spike"--a very large but not infinite spurt. In current discussions, "singularity" is often used to refer to a situation where Artificial Intelligence (AI) would result in computers dominating the world with their own agendas.
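A standard illustration: the function x(t) = 1/(T - t) stays finite for every time t before T, but grows without bound as t approaches the finite moment T, so there is a mathematical singularity at t = T.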
Now a few thoughts on this and the alleged danger of "singularity" AIs deciding to eliminate humans. It's not important in a cryonics context regardless, since the resources of cryonicists are tiny compared to those of computer people generally, but a little dose of realism may just possibly prevent some waste of energy or enervating pessimism.
In the third place, as many have pointed out, the promise or threat of Artificial Intelligence seems still remote, a half century after a lot of big talk and heady predictions. Anyone not aware of this need only look at the current pitiful performance of search engines, despite the market forces that ought to provide plenty of incentive.
In the second place, the original human programmers will almost certainly want to retain control or otherwise protect themselves against a possible monster. In all probability, the computer will not be allowed to have feelings of its own--even if that were possible, which it might not be--but will be required to work in conjunction with a human brain and under the control of the human at all times.

In the first place, the worriers just don't seem to understand the nature of "drives" or "motivations" in the AI context. Part of this may be a holdover from Asimov's laughable "laws" of robotics that supposedly might determine a computer's behavior. For example, one of those "laws" says that a robot cannot harm a human or, by inaction, allow a human to come to harm. To imagine that any such rule could be programmed into an algorithm is just absurd. The robot would immediately and always have to argue with itself about the meanings of the terms and about the hierarchies of harm, and would almost instantly have a nervous breakdown. Likewise, if the computer were somehow at some point programmed to seek (say) its own aggrandizement, this too would immediately encounter problems that are probably insuperable.
As for Kurzweil et al., in my opinion they make basic errors in sweeping under the rug the question of putative computer consciousness. They just assume that, at some level of complexity, consciousness will "emerge" and the computer will "wake up" and decide to do its own thing. Elsewhere I have explained in detail why this is baloney.
It is true that IF a computer were sufficiently intelligent and motivated and independent and capable of self-modification, it would be impossible to control, because it would necessarily have communication, and it could accomplish whatever it wanted by persuasion, however physically limited it might initially be. But, as previously noted, it would not be allowed independence, instead being slaved to a human brain, i.e. made an extension of the human, in roughly the same way that our subconscious minds sometimes do detail work for us. (Yes, our subconscious minds sometimes betray us and undo us, but that should be avoidable.) Incidentally, the "slaving" would not necessarily involve direct neural communication, with a chip in your head. It could be something much simpler, such as a requirement in the program in certain situations to pause and wait for external input.
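A rough illustration of that idea, in Python, with made-up names (a sketch of the general mechanism, not any particular system): before any action the program has flagged as consequential, it stops and does nothing until a human explicitly approves.

    # Rough sketch only: a program that must pause and wait for external
    # (human) input before any action it flags as consequential.
    # All names here are invented for the example.

    def human_approves(description):
        """Block until a human types yes or no at the console."""
        while True:
            answer = input("Approve '%s'? [yes/no] " % description).strip().lower()
            if answer in ("yes", "no"):
                return answer == "yes"

    def run(planned_actions):
        for description, consequential in planned_actions:
            if consequential and not human_approves(description):
                print("Withheld:", description)
                continue
            print("Doing:", description)

    if __name__ == "__main__":
        run([
            ("summarize today's log file", False),         # routine: no pause
            ("send mail on the operator's behalf", True),  # pause for approval
        ])

The point of the sketch is only that the gate is in the program itself: the machine cannot proceed past a flagged step on its own initiative, no matter what it "wants."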
Robert Ettinger





