Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread Richard Loosemore

Stathis Papaioannou wrote:

On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:
[snip]

But again, none of this touches upon Lanier's attempt to draw a bogus
conclusion from his thought experiment.



No external observer would ever be able to keep track of such a
fragmented computation and as far as the rest of the universe is
concerned there may as well be no computation.

This makes little sense, surely.  You mean that we would not be able to
interact with it?  Of course not:  the poor thing will have been
isolated from meaningful contact with the world because of the jumbled-up
implementation that you posit.  Again, though, I see no relevant
conclusion emerging from this.

I cannot make any sense of your statement that as far as the rest of
the universe is concerned there may as well be no computation.  If we
cannot communicate with it any more, that should not be surprising,
given your assumptions.


We can't communicate with it so it is useless as far as what we
normally think of as computation goes. A rainstorm contains patterns
isomorphic with an abacus adding 127 and 498 to give 625, but to
extract this meaning you have to already know the question and the
answer, using another computer such as your brain. However, in the
case of an inputless simulation with conscious inhabitants this
objection is irrelevant, since the meaning is created by observers
intrinsic to the computation.
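The point that the meaning lives entirely in the interpretation rather than in the rainstorm can be sketched in a few lines of Python. This is a hypothetical illustration of the mapping argument, not anything from the thread; the state values and variable names are invented:

```python
# Toy sketch of the "trivial mapping" argument: any sequence of distinct
# physical states can be mapped, after the fact, onto the successive
# states of a chosen computation.
import random

# "Rainstorm": an arbitrary sequence of distinct physical states.
random.seed(0)
rain_states = random.sample(range(10**6), 4)

# "Abacus": successive partial sums while adding 127 and 498.
abacus_states = [0, 127, 127 + 498]  # final state is 625

# The interpretation is just a lookup table built with the answer in hand.
interpretation = dict(zip(rain_states, abacus_states))

# Decoding the rainstorm "computes" 625 -- but only because the table
# already encodes the entire computation.
decoded = [interpretation[s] for s in rain_states[:len(abacus_states)]]
```

The lookup table is constructed with the answer already known, which is exactly why the "computation" in the rainstorm does no work for an external observer: all the information is in the mapping, none in the rain.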

Thus if there is any way a physical system could be interpreted as
implementing a conscious computation, it is implementing the conscious
computation, even if no-one else is around to keep track of it.



Sorry, but I do not think your conclusion even remotely follows from the 
premises.


But beyond that, the basic reason that this line of argument is 
nonsensical is that Lanier's thought experiment was rigged in such a way 
that a coincidence was engineered into existence.


Nothing whatever can be deduced from an argument in which you set things 
up so that a coincidence must happen!  It is just a meaningless 
coincidence that a computer can in theory be set up to be (a) conscious 
and (b) have a lower level of its architecture be isomorphic to a rainstorm.


It is as simple as that.



Richard Loosemore

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604id_secret=96140713-a54b2b
Powered by Listbox: http://www.listbox.com


Re: [singularity] Definitions

2008-02-18 Thread Richard Loosemore

John K Clark wrote:

Matt Mahoney [EMAIL PROTECTED]


It seems to me the problem is
defining consciousness, not testing for it.


And it seems to me that beliefs of this sort are exactly the reason 
philosophy is in such a muddle. A definition of consciousness is not
needed; in fact, unless you're a mathematician, for whom definitions can
be of some use, one can lead a full, rich, rewarding intellectual life
without having a good definition of anything. Compared with examples,
definitions are of trivial importance.


On the contrary, in this case I have argued that it is exactly the lack 
of a clear definition of what consciousness is supposed to be, that 
causes so much of the problem of trying to explain it.


Further, I have suggested that the C problem can be solved once we 
understand *why* we have so much trouble saying what it is.  I have 
given an explicit, complete explanation for what consciousness is, which 
starts out from a resolution of the definition-difficulty.


I note that Nick Humphrey has recently started to say something very 
similar.




Richard Loosemore



Re: [singularity] Definitions

2008-02-18 Thread John K Clark

Richard Loosemore [EMAIL PROTECTED]


it is exactly the lack of a clear definition
of what consciousness is supposed to be


And if we did have such a definition of consciousness, I don't see how it
would help in the slightest in making an AI. The definition would be made
of words, and every one of those words would have its own definition,
also made of words, and every one of those words would have its own
definition, also made of words, and [...]

You get the idea, round and round we go. The thing that gets language
out of this endless loop is examples; we can point to a word and
something in the real world and say this word means that.

And I have no difficulty explaining what I mean when my mouth makes 
the sound "consciousness"; producing consciousness is, in my opinion
and almost certainly yours, the most important thing I am doing at this
instant. I have no definition, but I know exactly what those words mean,
and I'll bet you do too. What more is needed for clear communication?

John K Clark






Re: [singularity] Definitions

2008-02-18 Thread John K Clark
And I will define consciousness just as soon as you define define. 

John K Clark 





Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread Stathis Papaioannou
On 19/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:

 Sorry, but I do not think your conclusion even remotely follows from the
 premises.

 But beyond that, the basic reason that this line of argument is
 nonsensical is that Lanier's thought experiment was rigged in such a way
 that a coincidence was engineered into existence.

 Nothing whatever can be deduced from an argument in which you set things
 up so that a coincidence must happen!  It is just a meaningless
 coincidence that a computer can in theory be set up to be (a) conscious
 and (b) have a lower level of its architecture be isomorphic to a rainstorm.

I don't see how the fact that something happens by coincidence is by itself
a problem. Evolution, for example, works by means of random genetic
mutations some of which just happen to result in a phenotype better
suited to its environment.

By the way, Lanier's idea is not original. Hilary Putnam, John Searle,
Tim Maudlin, Greg Egan, Hans Moravec and David Chalmers (see the paper
cited by Kaj Sotola in the original thread -
http://consc.net/papers/rock.html) have all considered variations on
the theme. At the very least, this should indicate that the idea
cannot be dismissed as just obviously ridiculous and unworthy of
careful thought.




-- 
Stathis Papaioannou



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread John Ku
On 2/18/08, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 By the way, Lanier's idea is not original. Hilary Putnam, John Searle,
 Tim Maudlin, Greg Egan, Hans Moravec, David Chalmers (see the paper
 cited by Kaj Sotola in the original thread -
 http://consc.net/papers/rock.html) have all considered variations on
 the theme. At the very least, this should indicate that the idea
 cannot be dismissed as just obviously ridiculous and unworthy of
 careful thought.

Yes, you've shown either that, or that even some occasionally
intelligent and competent philosophers sometimes take seriously ideas
that really can be dismissed as obviously ridiculous -- ideas which
would be unworthy of careful thought were it not for the fact that
pinpointing exactly why such ridiculous ideas are wrong is so often
fruitful (as in the Chalmers article).
