Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread John Ku
On 2/17/08, Stathis Papaioannou [EMAIL PROTECTED] wrote:

 If computation is multiply realizable, it could be seen as being
 implemented by an endless variety of physical systems, with the right
 mapping or interpretation, since anything at all could be arbitrarily
 chosen to represent a tape, a one, a zero, or whatever.

Sure, pretty much anything could be used as a symbol to represent
anything else, but the representing would consist in the network of
causal interactions that constitute the symbol manipulation, not in
the symbols themselves. (And certainly not in anyone having to be
around to understand the machinery of symbol manipulation going on.)
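
As a toy illustration of that point (my own sketch, purely for
concreteness): the same abstract computation, a 3-bit binary counter,
realized with two arbitrary choices of objects standing in for zero and
one. What the two runs share is the causal transition structure, not
the tokens.

  # Two "physical" realizations of one computation; the symbols are
  # arbitrary, the transition structure does the representing.
  def increment(bits, zero, one):
      """Increment a little-endian counter whose cells hold zero/one."""
      out, carry = [], True
      for b in bits:
          if carry:
              out.append(zero if b == one else one)
              carry = (b == one)
          else:
              out.append(b)
      return out

  for zero, one in [(0, 1), ("rain", "drop")]:  # two arbitrary alphabets
      state = [zero, zero, zero]
      for _ in range(3):
          state = increment(state, zero, one)
      print(state)  # the same abstract value (three) under either choice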



Re: Re: Re: Re: Re: [singularity] Quantum resonance btw DNA strands?

2008-02-17 Thread Bruno Frandemiche
hello everyone
for info
http://xxx.lanl.gov/ftp/arxiv/papers/0802/0802.1835.pdf

http://xxx.lanl.gov/PS_cache/arxiv/pdf/0711/0711.1366v1.pdf

cordially yours
bruno

----- Original Message -----
From: Ben Goertzel [EMAIL PROTECTED]
To: singularity@v2.listbox.com
Sent: Wednesday, 6 February 2008, 5:48:03
Subject: Re: Re: Re: Re: Re: [singularity] Quantum resonance btw DNA strands?

Hi Bruno,

 In effect, my commentary is very short, so excuse me (I drive my PC
 with my eyes because I have ALS with a tracheo and gastro, and I was a
 speaker, not a writer, so it's difficult)

Well, that is certainly a good reason for your commentaries being short!

 hello Ben
 ok, I stop, no problem
 I am thinking McFadden's theory is possibly right because of the wave
 structure of matter, not the particle structure of matter

Certainly the wave nature of matter is a necessary prerequisite for
McFadden's theory to be correct -- but that's already built into quantum
mechanics, right?

The question is whether proteins really function as macroscopic quantum
systems, in the way that McFadden suggests.  They may or may not, but I
don't think the answer is obvious from the wave nature of matter...

-- Ben






Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Richard Loosemore

Stathis Papaioannou wrote:

On 17/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:


The first problem arises from Lanier's trick of claiming that there is a
computer, in the universe of all possible computers, that has a machine
architecture and a machine state that is isomorphic to BOTH the neural
state of a brain at a given moment, and also isomorphic to the state of
a particular rainstorm at a particular moment.


In the universe of all possible computers and programs, yes.


This is starting to be rather silly because the rainstorm and computer
then diverge in their behavior in the next tick of the clock. Lanier
then tries to persuade us, with some casually well chosen words, that he
can find a computer that will match up with the rainstorm AND the brain
for a few seconds, or a few minutes ... or ... how long?  Well, if he
posits a large enough computer, maybe the whole lifetime of that brain?

The problem with this is that what his argument really tells us is that
he can imagine a quasi-infinitely large, hypothetical computer that just
happens to be structured to look like (a) the functional equivalent of a
particular human brain for an indefinitely long period of time (at least
the normal lifetime of that human brain), and, coincidentally, a
particular rainstorm, for just a few seconds or minutes of the life of
that rainstorm.

The key word is coincidentally.


There is no reason why it has to be *the same* computer from moment to
moment. If your mind were uploaded to a computer and your physical
brain died, you would experience continuity of consciousness (or if
you prefer, the illusion of continuity of consciousness, which is just
as good) despite the fact that there is a gross physical discontinuity
between your brain and the computer. You would experience continuity
of consciousness even if every moment were implemented on a completely
different machine, in a completely different part of the universe,
running in a completely jumbled up order.


Some of this I agree with, though it does not touch on the point that I 
was making, which was that Lanier's argument was valueless.


The last statement you make, though, is not quite correct:  with a 
jumbled-up sequence of episodes during which the various machines were 
running the brain code, the whole would lose its coherence, because input 
from the world would now be randomised.


If the computer were being fed input from a virtual reality simulation, 
that would be fine: it would merely sense a sudden change from the real 
world to the virtual world.


But again, none of this touches upon Lanier's attempt to draw a bogus 
conclusion from his thought experiment.




No external observer would ever be able to keep track of such a
fragmented computation and as far as the rest of the universe is
concerned there may as well be no computation.


This makes little sense, surely.  You mean that we would not be able to 
interact with it?  Of course not:  the poor thing will have been 
isolated from meaningful contact with the world because of the jumbled-up 
implementation that you posit.  Again, though, I see no relevant 
conclusion emerging from this.


I cannot make any sense of your statement that "as far as the rest of 
the universe is concerned there may as well be no computation."  If we 
cannot communicate with it anymore, that should not be surprising, 
given your assumptions.



But if the computation
involves conscious observers in a virtual reality, why should they be
any less conscious due to being unable to observe and interact with
the substrate of their implementation?


No reason at all!  They would be conscious.  Isaac Newton could not 
observe and interact with the substrate of his implementation without 
making a hole in his skull that would have killed his brain ... but that 
did not have any bearing on his consciousness.



In the final extrapolation of this idea it becomes clear that if any
computation can be mapped onto any physical system, the physical
system is superfluous and the computation resides in the mapping, an
abstract mathematical object.


This is functionalism, no?  I am not sure if you are disagreeing with 
functionalism or supporting it.  ;-)


Well, the computation is not the implementation, for sure, but is it 
appropriate to call it an abstract mathematical mapping?



This leads to the idea that all
computations are actually implemented in a Platonic reality, and the
universe we observe emerges from that Platonic reality, as per e.g. Max
Tegmark and in the article linked to by Matt Mahoney:


I don't see how this big jump follows.  I have a different 
interpretation that does not need Platonic realities, so it looks like 
a non sequitur to me.




http://www.mattmahoney.net/singularity.html


I find most of what Matt says in this article to be incoherent: 
assertions pulled out of thin air, and unjustifiable claims made by 
others cited as if they were god-sent truth.



Richard Loosemore


Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 When people like Lanier allow themselves the luxury of positing 
 infinitely large computers (who else do we know who does this?  Ah, yes, 
 the AIXI folks), they can make infinitely unlikely coincidences happen.

It is a commonly accepted practice to use Turing machines in proofs, even
though we can't actually build one.  Hutter is not proposing a universal
solution to AI.  He is proving that it is not computable.  Lanier is not
suggesting implementing consciousness as a rainstorm.  He is refuting its
existence.
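
For anyone who hasn't seen it, the equation defining AIXI (Hutter,
Universal Artificial Intelligence, 2005; my transcription, so treat the
notation as approximate) shows where the incomputability comes from.
With k the current cycle and m the horizon:

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The innermost sum ranges over every program q for a universal Turing
machine U that reproduces the interaction history, a set that cannot be
enumerated without solving the halting problem.  Hence AIXI is usable in
proofs but not buildable.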


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread Matt Mahoney

--- John Ku [EMAIL PROTECTED] wrote:

 On 2/16/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 
   I would prefer to leave behind these counterfactuals altogether and
   try to use information theory and control theory to achieve a precise
   understanding of what it is for something to be the standard(s) in
   terms of which we are able to deliberate. Since our normative concepts
   (e.g. should, reason, ought, etc) are fundamentally about guiding our
   attitudes through deliberation, I think they can then be analyzed in
   terms of what those deliberative standards prescribe.
 
  I agree.  I prefer the approach of predicting what we *will* do as
  opposed to what we *ought* to do.  It makes no sense to talk about a
  right or wrong approach when our concepts of right and wrong are
  programmable.
 
 I don't quite follow. I was arguing for a particular way of analyzing
 our talk of right and wrong, not abandoning such talk. Although our
 concepts are programmable, what matters is what follows from our
 current concepts as they are.
 
 There are two main ways in which my analysis would differ from simply
 predicting what we will do. First, we might make an error in applying
 our deliberative standards or tracking what actually follows from
 them. Second, even once we reach some conclusion about what is
 prescribed by our deliberative standards, we may not act in accordance
 with that conclusion out of weakness of will.

It is on the second part that my approach differs.  A decision to act in a
certain way implies right or wrong according to our views, not the views of a
posthuman intelligence.  Rather, I prefer to analyze the path that AI will
take, given human motivations, but without judgment.  For example, CEV favors
granting future wishes over present wishes (when it is possible to predict
future wishes reliably).  But human psychology suggests that we would prefer
machines that grant our immediate wishes, implying that we will not implement
CEV (even if we knew how).  Any suggestion that CEV should or should not be
implemented is just a distraction from an analysis of what will actually
happen.

As a second example, a singularity might result in the extinction of DNA based
life and its replacement with a much faster evolutionary process.  It makes no
sense to judge this outcome as good or bad.  The important question is the
likelihood of this occurring, and when.  In this context, it is more important
to analyze the motives of people who would try to accelerate or delay the
progression of technology.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:
When people like Lanier allow themselves the luxury of positing 
infinitely large computers (who else do we know who does this?  Ah, yes, 
the AIXI folks), they can make infinitely unlikely coincidences happen.


It is a commonly accepted practice to use Turing machines in proofs, even
though we can't actually build one.


So?  That was not the practice that I condemned.

My problem is with people like Hutter or Lanier using thought 
experiments in which the behavior of quasi-infinite computers is treated 
as if it were a meaningful thing in the real universe.


There is a world of difference between that and using Turing machines in 
proofs.




Hutter is not proposing a universal
solution to AI.  He is proving that it is not computable.


He is doing nothing of the sort.  As I stated in the quote above, he is 
drawing a meaningless conclusion by introducing a quasi-infinite 
computation into his proof:  when people try to make claims about the 
real world (i.e. claims about what artificial intelligence is) by 
postulating machines with quasi-infinite amounts of computation going on 
inside them, they can get anything to happen.



Lanier is not
suggesting implementing consciousness as a rainstorm.  He is refuting its
existence.


And you missed what I said about Lanier, apparently.

He refuted nothing.  He showed that with a quasi-infinite computer in 
his thought experiment, he can make a coincidence happen.


Big deal.



Richard Loosemore






Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread John Ku
On 2/17/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Nevertheless we can make similar reductions to absurdity with respect to
 qualia, that which distinguishes you from a philosophical zombie.  There is no
 experiment to distinguish whether you actually experience redness when you see
 a red object, or simply behave as if you do.  Nor is there any aspect of this
 behavior that could not (at least in theory) be simulated by a machine.

You are relying on a partial conceptual analysis of qualia or
consciousness by Chalmers that maintains that there could be an exact
physical duplicate of you that is not conscious (a philosophical
zombie). While he is in general a great philosopher, I suspect his
arguments here ultimately rely too much on moving from "I can create
a mental image of a physical duplicate and subtract my image of
consciousness from it" to "therefore, such things are possible."

At any rate, a functionalist would not accept that analysis. On a
functionalist account, consciousness would reduce to something like
certain representational activities which could be understood in
information processing terms. A physical duplicate of you would have
the same information processing properties, hence the same
consciousness properties. Once we understand the relevant properties
it would be possible to test whether something is conscious or not by
seeing what information it is or is not capable of processing. It is
hard to test right now because we have at the moment only very
incomplete conceptual analyses.



Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Stathis Papaioannou
On 18/02/2008, Richard Loosemore [EMAIL PROTECTED] wrote:

 The last statement you make, though, is not quite correct:  with a
 jumbled-up sequence of episodes during which the various machines were
 running the brain code, the whole would lose its coherence, because input
 from the world would now be randomised.

 If the computer were being fed input from a virtual reality simulation,
 that would be fine: it would merely sense a sudden change from the real
 world to the virtual world.

The argument that is the subject of this thread wouldn't work if the
brain simulation had to interact with the world at the level of the
substrate it is being simulated on. However, it does work if you
consider an inputless virtual environment with conscious inhabitants.
Suppose you are now living in such a simulation. From your point of
view, today is Monday and yesterday was Sunday. Do you have any
evidence to support the belief that Sunday was actually run
yesterday in the real world, or that it was run at all? The simulation
could have been started up one second ago, complete with false
memories of Sunday. Sunday may not actually be run until next year,
and the version of you then will have no idea that the future has
already happened.
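
A toy sketch of that point (mine; step() is an arbitrary stand-in for
one tick of a deterministic, inputless world): the moments can be
computed in any wall-clock order without changing the simulation's
internal history.

  import random

  def step(state):
      # Toy deterministic update rule; any pure function would do.
      return (state * 1103515245 + 12345) % 2**31

  def moment(t, initial=42):
      # Recompute the world's state at tick t from the initial state.
      s = initial
      for _ in range(t):
          s = step(s)
      return s

  order = list(range(10))
  random.shuffle(order)                   # run the ten moments out of order
  history = {t: moment(t) for t in order}

  print([history[t] for t in range(10)])  # identical to running in order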

 But again, none of this touches upon Lanier's attempt to draw a bogus
 conclusion from his thought experiment.


  No external observer would ever be able to keep track of such a
  fragmented computation and as far as the rest of the universe is
  concerned there may as well be no computation.

 This makes little sense, surely.  You mean that we would not be able to
 interact with it?  Of course not:  the poor thing will have been
 isolated from meaningful contact with the world because of the jumbled-up
 implementation that you posit.  Again, though, I see no relevant
 conclusion emerging from this.

 I cannot make any sense of your statement that "as far as the rest of
 the universe is concerned there may as well be no computation."  If we
 cannot communicate with it anymore, that should not be surprising,
 given your assumptions.

We can't communicate with it so it is useless as far as what we
normally think of as computation goes. A rainstorm contains patterns
isomorphic with an abacus adding 127 and 498 to give 625, but to
extract this meaning you have to already know the question and the
answer, using another computer such as your brain. However, in the
case of an inputless simulation with conscious inhabitants this
objection is irrelevant, since the meaning is created by observers
intrinsic to the computation.
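
To put it crudely in code (a sketch of my own, nothing more): given any
arbitrary states at all, one can construct after the fact a mapping
under which they "implement" the abacus computation, and all of the
computational content sits in the mapping.

  import random

  # Arbitrary "rainstorm" states, then a post-hoc interpretation pairing
  # each state with a step of the abacus computation 127 + 498 = 625.
  # Writing the table down requires already knowing the computation.
  abacus_trace = [("load", 127), ("add", 498), ("result", 127 + 498)]
  rainstorm = [random.random() for _ in abacus_trace]

  interpretation = dict(zip(rainstorm, abacus_trace))

  for state in rainstorm:
      print(state, "->", interpretation[state])  # ends with ('result', 625)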

Thus if there is any way a physical system could be interpreted as
implementing a conscious computation, it is implementing the conscious
computation, even if no-one else is around to keep track of it.



-- 
Stathis Papaioannou



Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread Matt Mahoney

--- John Ku [EMAIL PROTECTED] wrote:

 On 2/17/08, Matt Mahoney [EMAIL PROTECTED] wrote:
 
  Nevertheless we can make similar reductions to absurdity with respect
  to qualia, that which distinguishes you from a philosophical zombie.
  There is no experiment to distinguish whether you actually experience
  redness when you see a red object, or simply behave as if you do.  Nor
  is there any aspect of this behavior that could not (at least in
  theory) be simulated by a machine.
 
 You are relying on a partial conceptual analysis of qualia or
 consciousness by Chalmers that maintains that there could be an exact
 physical duplicate of you that is not conscious (a philosophical
 zombie). While he is in general a great philosopher, I suspect his
 arguments here ultimately rely too much on moving from "I can create
 a mental image of a physical duplicate and subtract my image of
 consciousness from it" to "therefore, such things are possible."

My interpretation of Chalmers is the opposite.  He seems to say that either
machine consciousness is possible or human consciousness is not.

 At any rate, a functionalist would not accept that analysis. On a
 functionalist account, consciousness would reduce to something like
 certain representational activities which could be understood in
 information processing terms. A physical duplicate of you would have
 the same information processing properties, hence the same
 consciousness properties. Once we understand the relevant properties
 it would be possible to test whether something is conscious or not by
 seeing what information it is or is not capable of processing. It is
 hard to test right now because we have at the moment only very
 incomplete conceptual analyses.

It seems to me the problem is defining consciousness, not testing for it. 
What computational property would you use?  For example, one might ascribe
consciousness to the presence of episodic memory.  (If you don't remember
something happening to you, then you must have been unconscious).  But in this
case, any machine that records a time sequence of events (for example, a chart
recorder) could be said to be conscious.  Or you might ascribe consciousness
to entities that learn, seek pleasure, and avoid pain.  But then I could write
a simple program like http://www.mattmahoney.net/autobliss.txt with these
properties.  It seems to me that any other testable property would have the
same problem.
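
Something in the spirit of that program (a sketch of mine; the actual
autobliss.txt may look quite different): a dozen lines that learn, seek
pleasure, and avoid pain, yet settle nothing about consciousness.

  import random

  values = {"a": 0.0, "b": 0.0}      # learned value estimate per action
  reward = {"a": +1.0, "b": -1.0}    # "pleasure" and "pain"

  for _ in range(100):
      if random.random() < 0.1:                 # occasional exploration
          action = random.choice(list(values))
      else:                                     # otherwise seek pleasure
          action = max(values, key=values.get)
      # Learn: nudge the estimate toward the received reward.
      values[action] += 0.1 * (reward[action] - values[action])

  print(values)  # ends up strongly preferring the "pleasurable" action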


-- Matt Mahoney, [EMAIL PROTECTED]
