X-Message-Number: 14944
From: "Pat Clancy" <>
Date: Fri, 17 Nov 2000 11:42:39 -0800
Subject: Re: Simulating People and Animals

Lee Corbin wrote:

> >There's really no reason to think that the functioning
> >of the mind is an _algorithm_, which is what is required
> >to make it implementable as a computer program.
> 
> According to the technical definition of "algorithm", algorithms
> halt!

An algorithm is just a list of instructions that guides the actions of a 
machine; it may halt or not halt. I can write a one-line program that never 
halts; if you prefer not to call that an algorithm, the dispute is really 
just semantics. My point is that the mind is not an algorithm, of either the 
halting or the non-halting variety.
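
For example, here is such a one-liner in Python (purely illustrative, 
nothing more):

    while True: pass   # loops forever: a one-line program that never halts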

> 
> An operating system is a good example of a program that 
> technically isn't an algorithm (by this definition).  An
> OS offers you a response for each possible input you provide
> it, but it's capable of much more.  It has a long memory,
> so to speak. I'm sure that you can easily imagine an interactive
> robot which obeys commands and perhaps even gives some uppity
> back talk from time to time, just the way that operating systems
> seem to.  Yet it still seems to you that in the course of millions
> of years of development, computer programs can never imitate humans
> in any way whatsoever?


An operating system is a program; if you prefer that term to "algorithm", 
that's fine - I see no difference in the main argument. There are light-years 
of difference between an "uppity" robot that has a mind and an operating 
system that gives you cryptic but entirely pre-determined error messages. 
And yes, a time frame of millions of years would make no difference IMHO - I 
think that Turing machines will be abandoned as a basis for AI long before 
then; hopefully something more appropriate will take their place.
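
To make the contrast concrete: a program can be interactive and have a 
"long memory" and still be entirely pre-determined, in the sense that its 
every response is a fixed function of its input history. A toy sketch of my 
own, in Python (purely illustrative):

    # A toy interactive loop with memory: the state grows over time, but
    # every response is completely determined by the inputs received so far.
    history = []
    while True:
        cmd = input("> ")
        if cmd == "quit":
            break
        history.append(cmd)
        print(f"error {len(history)}: unknown command {cmd!r}")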

> 
> Please imagine a life-like robot that responds to all of the
> world's stimuli unpredictably. (Unpredictable, of course, unless
> you run a simulation of the same program.) The burden is still
> upon you to say why a tremendous amount of emergent behavior from
> an extremely complex set of programs cannot mimic animals or humans.

No, actually the burden is on you to show why any set of programs _should_ 
be able to do this. So far no one has shown it.

> You bring up Dreyfus' old book; believe me, many of us on the other
> side, e.g. Dennett, Hofstadter, and many many others do not find
> his arguments convincing.

Since you bring up Dennett, I will digress - I thought the title of his book, 
"Consciousness Explained", was about the most pretentious and misleading 
title I've ever seen; if there was anything that book _didn't_ do, it was 
"explain" consciousness. I really thought I should get my money back for the 
book, since the title was making false claims. It was one circular argument 
on top of another. BTW, I do not mean to imply this reflects on your own 
argument at all.

> And as for Penrose, as wonderful as his
> books are (I am perpetually re-reading them), his views about
> mysterious goings on of microtubules and what not, have been panned
> by numerous people, e.g., Ralph Merkle.  As Marvin Minsky said, to
> paraphrase, "Roger Penrose is so intelligent that there are only two
> things in the world that he does not in principle thoroughly
> understand. One is quantum mechanics, and the other is consciousness.
> So who can blame him for thinking that they must be somehow
> intimately related?" 

I don't see how name-calling from members of the AI community sheds light 
on anything. In fact it's somewhat amusing to witness the level of hysteria 
and defensiveness engendered amongst the strong-AI proponents, who have 
been losing increasing amounts of credibility over the years as their claims 
have turned into smoke. As for Penrose, I was not referring to his specific 
claims about microtubules etc., but rather to his critique of computer-based 
AI. As for his thoughts on consciousness, I think he's walking in the dark 
just like everyone else (although he is smarter than many others who take on 
this topic).

> 
> Can you guess what would be the give-away difference?  Again, after
> millions of years of development of programs by humans and other
> programs, what tell-tale clue could still be present?  Must it
> write a sonnet (to use one of Turing's examples)?  Why in principle
> can't a robot sing and dance?  Even after just fifty years of
> development, they're very good at some things.  Why is there some
> sort of vague barrier that forever prevents them from doing other
> things?

I think computers can do maybe the first 80% of the behaviour-simulation 
task, and then, as they try to conquer that last and hardest 20%, the 
difficulties grow _exponentially_ until they become, in effect, infinite 
(for a Turing machine). I think that when a correct substrate for an 
artificial mind is found, it may well be that last 20% - the part impossible 
for a Turing machine - that turns out to be the _easiest_ for the new 
whatever-it-is.

Pat Clancy
