X-Message-Number: 15955
Date: Tue, 27 Mar 2001 14:20:39 -0500
From: James Swayze <>
Subject: Whose future will it be?
References: <>

Cryonet Wrote:

> Message #15939
> From: 
> Date: Sun, 25 Mar 2001 06:23:45 EST
> Subject: Cryonics vs. God promise

<snipage>

> Our universe expands continuously and its observable limit recedes at the
> speed of light. At any time, a big mass may enter our horizon limit and
> produce a universe collapse: The Big Crunch.

<snipage>

> This is the
> first possible end of time.

<more snipage>

> Do you want to bet on 10^18 years? Well, this is not very long if we see it
> against eternity; on the other hand it seems very long as seen against the
> cryonics alternative. How could two such different things even be put on the
> same choice list?


Yvan, you might enjoy the following story by Isaac Asimov. You might have 
already read it, but if not, here's the URL:

http://www.kenobi.com.ar/question.htm

Enjoy!



_______________________________________________________________________________________________

> Message #15943
> Date: Sun, 25 Mar 2001 08:57:41 -0800
> From: Lee Corbin <>
> Subject: Re: Trust In All-Powerful Lords
>
> Eliezer S. Yudkowsky, by way of Sabine Atkins, wrote

<snipage>

> Alas, my question "What _exactly_ do you intend to do..." has not
> been answered specifically.  I won't ask a third time.  It's clear
> that formulating a candid answer is difficult for some reason.

It's there if you read between the lines. Make the logical inferences.

> So let me guess, based upon this last paragraph.  Should I, or
> any other entity refuse to be completely analysed by the AI or
> SI or whatever, and we persist in desiring to live outside the
> control of it, then we will be forcibly brought to heel by
> whatever means that are required, including invasion and
> violent conquest.
>
> Later, after all the horrors and devastation of the war, and the
> rewriting of the Lee Corbin entity so that it is more compliant,
> the excuse will read something like "Well, what could you expect?
> The entity persisted in not trusting us.  Us, of all people!
> (Or "Me, of all entities!", if it is the Sysop or whatever speaking.)
> Don't blame us (me) for the war.  And any of the rest of you, who
> think that you can band together to resist assimilation, take what
> happened to Lee Corbin and his wealth as an object lesson.


Pretty close I'd say, but it might be worse. What to do with those messy warm and
fuzzy intractable variables... what to do? Let's examine the excerpt below, which 
Eliezer either said or quoted (I'm unsure which).

> "Beyond the human realms, I'll settle for that freedom which consists of
> nobody ever actually interfering with me - i.e., as long as the only
> entity(s) with the power to mess up my life are known Friendly ones.  If
> Friendliness is not absolutely knowable, then I don't want there to be
> more than one entity, to minimize the risk.


Sounds like Eliezer won't be happy unless he is by himself with his AI sysop 
entity. The only logical way, given these parameters, to bring risk to zero is not
to allow variables, like warm-fuzzy outside-the-box variables, to exist at all. 
The AI has god powers, so it can create new, tractable, compliant companions 
aplenty; what need has he of us?

>  If there are zero entities
> who possess the *potential* to interfere with me - and the situation is
> symmetrical, so that there are many other entities whom nobody possesses
> the potential to interfere with - then entities can assemble, at will, the
> technological capability to interfere with me, and soon I'm staring down
> the barrel of a *lot* of unFriendly guns.


This is barely readable (at least by me, and in fairness I'll add that politely), 
but the gist is the same as the above, and it betrays some nervousness or fear of 
that which is not under direct control.


_______________________________________________________________________________________________

> Message #15944
> Date: Sun, 25 Mar 2001 09:04:47 -0800
> From: "H. Keith Henson" <>
> Subject: Re: about machine intelligence etc

<snipage>


> However, I would like to point out that humans *and* computers are
> respectively subject to being infested with replicating information patterns,
> called memes in the human case and computer viruses in computers.  Most of the
> time these RIPs are helpful or at least harmless, but we all have seen that
> really harmful ones come along and convert nice people or useful machines into
> something quite harmful.


I would just like to add that, in my humble and non-expert opinion, once a machine
AI becomes self-aware or sentient, not only will it be a new species and 
possibly competitive with us, it will also be subject not only to viruses but to 
memes, just as we are. In fact, viruses might be easier to fend off than memes.


> Your suggestions as to how to prevent such infestations from spreading among
> AIs like foot and mouth disease would be highly appreciated.

Indeed, most.


_______________________________________________________________________________________________

> Message #15946
> Date: Sun, 25 Mar 2001 13:46:48 -0500
> From: Brian Atkins <>
> Subject: The reality of our unending precarious situation (Swayze)
> References: <>

<snipage>

> Not really. Actually we have been through this all before years ago... The
> problem here is that you have a very unrealistic view of what the near future
> holds. See below


You may think so, and you are entitled to your opinion, but I believe my idea of 
the future is not too far from that of someone I'm sure you would respect much 
more than me. After all, he's a published authority on the future and 
human/machine intelligence. Please read the following article, by, ummm... Ray 
Kurzweil no less.

The Web Within Us
Minds and machines become one.


December 01, 1999 issue 
http://www.business2.com/content/magazine/indepth/1999/12/01/20503

Ray Kurzweil


"By the second half of the next century, there will be no clear distinction 
between human and machine intelligence.  Two things will allow this to happen.  
First, our biological brains will be enhanced by neural implants.  This has 
already begun.  Doctors use neural implants to counteract symptoms of 
Parkinson's disease, for instance, and neuroscientists from Emory University 
School of Medicine in

Atlanta recently placed an electrode into the brain of a paralyzed stroke victim
who now communicates using a PC."


> James I still don't have a clear understanding of what your "ideal future"
> looks like.


Read the above-mentioned article for a fairly close idea of it. Most people here 
understood it; perhaps you simply don't wish to? Is the idea of an overlord AI 
so appealing to you that it clouds your perceiving any other alternative at all?

> From what I can tell it consists of all people still living in
> biological bodies forever.


Certainly not forever, but certainly much, much longer than you and Eliezer would 
agree to.

> Not gonna happen. Once you can move your mind
> to a more robust substrate, death becomes less of a worry, and the ability
> to act without the possibility of retribution increases. In other words I
> can upload myself to an extremely small nanocomputer (perhaps 1cm^2 or smaller),
> make 10,000 copies of myself


I don't wish to exist solely bouncing around inside a box. And I'm going to find 
it amusing to watch your 10,000 selves fight over who is really you.

> and distribute them all throughout random rocks
> in the asteroid belt. Then I launch my war on Earth. And all this just gets
> worse as the intelligence of minds increase.


You've just said that even with your hoped-for future, extreme violence will be 
possible, in the box or not, eh?

> See you are completely missing the point (to be blunt). Our goal of having
> a Sysop is not to "enforce morals". It is to provide very basic (mostly
> physical) protection, the kind of protection that is required in a solar
> system where individuals have full power over such things as nanotechnology.
> 99.9999% of the people in the solar system may develop perfect morals in the
> future as you hope for, but it only takes one to turn the planet into a
> big blob of grey goo. Go read the science-fiction book Bloom to see what I
> mean.

You must have missed my small post about how to avoid the gray goo.

> How do you prevent disasters like this once these extreme technologies are
> available? When no one needs anyone else to survive?


Yeah, if you have your way, whoever minds the sysop can create their ideal world 
for themselves and do away with all the troublesome rest of humanity, since, as 
you've said above, "who needs 'em?"


In my vision of the future we won't lose any need for each other or for 
companionship. It will in fact increase.

> Secondarily, how do you get around the fact that AI is an inevitability in
> the near-future world with near-infinite computing power?


I thought I pointed that out. To recap, augmentation is more likely in the near 
term, and, as Mr. Kurzweil says, the AI will be part of us. Gee, that's what I 
said too. Go figure.

> In that kind of
> future, a teenager in his basement can evolve a superhuman evil AI overnight

Eliezer maybe?

> by mistake, and before anyone can do anything it's game over. The AI problem
> does not ever go away as long as we have computers capable of running one.
> This problem must be faced, and it must be gotten right the first time.

Yes my point exactly.

> Are you going to tell me that all technologies advanced beyond a certain
> point must be totally restricted ala Bill Joy?


Don't lump me with Bill Joy. I never once said outlaw it. Please read my posts 
more carefully.

> (if you say that, I don't
> think you can call yourself an Extropian)


I am Extropian by the definition that I believe in slowing Human Entropy.

> Even if you think that, how do
> you propose to make it real?


I don't personally propose to make anything happen. All I can do is promote 
discussion. I am not against AI. I am against trying to make it our god. I am 
against separating it from humanity.

> It would require a Big Brother-esque situation.
>
> If you say that you want a more anarchic future where everyone is totally
> "free" to do what they want, how do you propose to prevent the eventual
> disaster?

Perhaps Ray Kurzweil can help here.


"Can the pace of technological progress continue to speed up indefinitely?  Will
we not reach a point where humans are unable to think fast enough to keep up 
with it?  With regard to unenhanced humans, clearly so.  But what would a 
million scientists, each a thousand times more intelligent than human scientists
today, and each operating a thousand times faster than contemporary humans 
(because the

information processing in their nonbiological, Web-based brains is faster) 
accomplish?" --Ray Kurzweil


> From what we can tell there are two supergoals: 1) as much freedom as possible
> for everyone, and 2) total safety for everyone. The best way to balance
> these goals so that you provide as much as possible of both is a Sysop.


And just who has the ear of the sysop? Who can we place in the evil toady role 
whispering diabolical memes to the sysop god?

> This can be condensed down to one supergoal: Friendliness


I'll trust humanity, however painful the growing pains, to become wise enough to 
govern ourselves. It's time to grow up and grow out of our need for parental 
super-entities.

> And what about the individuals who refuse treatment?


What about individuals who refuse to obey the sysop? Please consider the 
economics of augmentation. People will take the treatment in order to gain 
advantages, or to not be left out, or for a myriad of other reasons people choose 
to improve themselves today. Consider the coming genetic enhancement that people 
will choose. More capable minds than mine have worked out how this will 
disseminate. I see it as not much different from intelligence augmentation. But 
again, if you won't take my word for it, just look around. I did, and found Ray's 
article just today.

> Are you going to force
> everyone to be treated just so you can feel safe? Remember, in the future it
> only takes one madman to cause BIG trouble.

Will all the refusers of the sysop need to be eliminated?

> It's not about bodies, it is about minds.

Back to bouncing around inside a box again, are we? Hope no one sits on it. Oops!

> Minds with warm fuzzy thinking that
> have evolved to think in certain ways that could be very dangerous at this
> point in our species history.


You can't imagine our thinking becoming powerful enough and wise enough to simply 
grow up?

> Humans almost wiped themselves out with nukes,
> do you want to tell me that we will do better with even more advanced and
> more _distributed_ technologies?
>

> > violent--Like I said before I believe reasons for violence will diminish if
> > not entirely disappear. If we are linked as I described we would be able to
> > instantly upload to the mutual network any image of violence being done to us
> > and the perpetrator's identities. Acts of violence would be difficult to hide
> > and too costly personally to commit.
>
> So when your body is infected with a microscopic grey goo nanobot, you can
> transmit images to all your friends of your hand getting eaten. Violence
> in the future is not via guns, fists, or other silly stuff.

Again some help from Ray.


"Of course, there will be great concern regarding who's controlling the 
nanobots, and over who the nanobots may be talking to.  Organizations such as 
governments or extremist groups or just clever individuals could put trillions 
of undetectable nanobots in the water or food supply.  These "spy" nanobots 
could then monitor, influence, and even control our thoughts and actions.  We 
won't be defenseless,

however.  Just as we have virus scanning software today, we will make use of 
patrol nanobots that search for (and destroy) unauthorized nanobots in our 
brains and bodies." --Ray Kurzweil

> James the majority of the people on this planet seem to believe in some kind
> of supreme being, even though there is no proof. People believe in stupid
> stuff, they form cults,

Believing in an AI god is cultish in my opinion.

> and otherwise act in confused ways. Are you going to
> force everyone in your ideal future to become unconfused?


You have me all wrong. The question can be turned back on you. And please read 
more carefully how we are already naturally progressing toward our augmented 
superHUMAN future.

> It is unlikely this will cease to be a concern in the next 10 to 50 years,
> which is the most critical period of all of human history. Get real

10 to 50? I don't recall giving so short a time to develop.

> Think of your favorite dictators

Only concerned with one presently.

> Do you still believe after reading this that there is zero chance of a
> human/posthuman causing an apocalypse in the near future if there are no
> protections? If so, you are living in a fantasy world. Why not go out and
> hand all your neighbors nuclear weapons right now then?

Again I'll defer to Ray Kurzweil. Please read the above-mentioned article.

> Well I am sorry that the future isn't what you wished for. Reality does seem
> to have a habit of intruding...

I think reality will favor my position but we'll just have to wait and see.

> Plan for the worst, and all that...
>
> Personally I want to live forever, and if I end up getting killed off by some
> nanotech accident or attack I would be extremely pissed off (right before I
> died).


Personally, if I found myself assimilated to the whims of the AI Sysop god, I'd 
be really pissed off and compelled to become a bug in its program.

> Oh give the parental metaphor a rest ok? The sysop scenario is more akin to
> creating an operating system for the Universe. You can do everything you want,
> except the short list of evil actions, which will literally be impossible.
> The sysop will not appear in front of you and chastise you for trying to
> shoot an Amish farmer on Old Earth, your gun simply will fail to fire when
> you pull the trigger.


And how else could this be achieved other than that the farmer and the gun and I 
will have to be simulants? No thanks, I won't be entering the box with you. You 
know that old poster named "Defiance", with the mouse flipping off the eagle about
to grab him? Got that picture? I'm that mouse, and that's how I really feel about 
controlling AIs.


_______________________________________________________________________________________________

> Message #15947
> Date: Sun, 25 Mar 2001 19:33:31 -0500 (CDT)
> From: Eivind Berge <>
> Subject: Re: Friendly AI
>
> So Eliezer Yudkowsky is now working on a totalitarian "Friendly AI."
> Obviously at some point he will have to fight off or subdue those of us,
> friendly or otherwise, who don't want to submit to such an entity. But
> before we need to band together for protection, hopefully he will be
> destroyed by the CIA, FBI, or the like. Perhaps they are watching him
> already;

Shhh, if you warn him he'll go underground. Hehe ;)

James
--
Some of our views are spacious
some are merely space--RUSH
