Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stathis Papaioannou
On 21/02/2008, John Ku <[EMAIL PROTECTED]> wrote: > On 2/20/08, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > > On 21/02/2008, John Ku <[EMAIL PROTECTED]> wrote: > > > > > By the way, I think this whole tangent was actually started by Richard > > > misinterpreting Lanier's argument (though

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread John Ku
On 2/20/08, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 21/02/2008, John Ku <[EMAIL PROTECTED]> wrote: > > > By the way, I think this whole tangent was actually started by Richard > > misinterpreting Lanier's argument (though quite understandably given > > Lanier's vagueness and unclarit

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stathis Papaioannou
On 21/02/2008, John Ku <[EMAIL PROTECTED]> wrote: > By the way, I think this whole tangent was actually started by Richard > misinterpreting Lanier's argument (though quite understandably given > Lanier's vagueness and unclarity). Lanier was not imagining the > amazing coincidence of a genuine

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore
John Ku wrote: By the way, I think this whole tangent was actually started by Richard misinterpreting Lanier's argument (though quite understandably given Lanier's vagueness and unclarity). Lanier was not imagining the amazing coincidence of a genuine computer being implemented in a rainstorm, i

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread John Ku
On 2/20/08, Stan Nilsen <[EMAIL PROTECTED]> wrote: > > It seems that when philosophy is implemented it becomes like nuclear > physics e.g. break down all the things we essentially understand until > we come up with pieces, which we give names to, and then admit we don't > know what the names identi

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread gifting
Quoting Vladimir Nesov <[EMAIL PROTECTED]>: On Feb 20, 2008 6:13 AM, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: The possibility of mind uploading to computers strictly depends on functionalism being true; if it isn't then you may as well shoot yourself in the head as undergo a destructive

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore
Stathis Papaioannou wrote: On 20/02/2008, Eric B. Ramsay <[EMAIL PROTECTED]> wrote: During the late 70's when I was at McGill, I attended a public talk given by Feynman on quantum physics. After the talk, and in answer to a question posed from a member of the audience, Feynman said something

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore
Stathis Papaioannou wrote: On 20/02/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote: I am aware of some of those other sources for the idea: nevertheless, they are all nonsense for the same reason. I especially single out Searle: his writings on this subject are virtually worthless. I hav

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stan Nilsen
Vladimir Nesov wrote: On Feb 20, 2008 6:13 AM, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: The possibility of mind uploading to computers strictly depends on functionalism being true; if it isn't then you may as well shoot yourself in the head as undergo a destructive upload. Functionalism (i

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Vladimir Nesov
On Feb 20, 2008 6:13 AM, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > > The possibility of mind uploading to computers strictly depends on > functionalism being true; if it isn't then you may as well shoot > yourself in the head as undergo a destructive upload. Functionalism > (invented, and la

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Stathis Papaioannou
On 20/02/2008, Eric B. Ramsay <[EMAIL PROTECTED]> wrote: > During the late 70's when I was at McGill, I attended a public talk given by > Feynman on quantum physics. After the talk, and in answer to a question posed > from a member of the audience, Feynman said something along the lines of: "I

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Eric B. Ramsay
During the late 70's when I was at McGill, I attended a public talk given by Feynman on quantum physics. After the talk, and in answer to a question posed from a member of the audience, Feynman said something along the lines of: "I have here in my pocket a prescription from my doctor that forbi

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Stathis Papaioannou
On 20/02/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote: > I am aware of some of those other sources for the idea: nevertheless, > they are all nonsense for the same reason. I especially single out > Searle: his writings on this subject are virtually worthless. I have > argued with Searle t

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Richard Loosemore
Stathis Papaioannou wrote: On 19/02/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote: Sorry, but I do not think your conclusion even remotely follows from the premises. But beyond that, the basic reason that this line of argument is nonsensical is that Lanier's thought experiment was rigged i

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-19 Thread Stathis Papaioannou
On 19/02/2008, John Ku <[EMAIL PROTECTED]> wrote: > Yes, you've shown either that, or that even some occasionally > intelligent and competent philosophers sometimes take seriously ideas > that really can be dismissed as obviously ridiculous -- ideas which > really are unworthy of careful thought w

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread John Ku
On 2/18/08, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > By the way, Lanier's idea is not original. Hilary Putnam, John Searle, > Tim Maudlin, Greg Egan, Hans Moravec, David Chalmers (see the paper > cited by Kaj Sotola in the original thread - > http://consc.net/papers/rock.html) have all con

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread Stathis Papaioannou
On 19/02/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Sorry, but I do not think your conclusion even remotely follows from the > premises. > > But beyond that, the basic reason that this line of argument is > nonsensical is that Lanier's thought experiment was rigged in such a way > that a

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread Richard Loosemore
Stathis Papaioannou wrote: On 18/02/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote: [snip] But again, none of this touches upon Lanier's attempt to draw a bogus conclusion from his thought experiment. No external observer would ever be able to keep track of such a fragmented computation an

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Stathis Papaioannou
On 18/02/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote: > The last statement you make, though, is not quite correct: with a > jumbled up sequence of "episodes" during which the various machines were > running the brain code, the whole would lose its coherence, because input > from the world wo

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: When people like Lanier allow themselves the luxury of positing infinitely large computers (who else do we know who does this? Ah, yes, the AIXI folks), they can make infinitely unlikely coincidences happen. It is a commonl

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Richard Loosemore
Stathis Papaioannou wrote: On 17/02/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote: The first problem arises from Lanier's trick of claiming that there is a computer, in the universe of all possible computers, that has a machine architecture and a machine state that is isomorphic to BOTH the

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: > When people like Lanier allow themselves the luxury of positing > infinitely large computers (who else do we know who does this? Ah, yes, > the AIXI folks), they can make infinitely unlikely coincidences happen. It is a commonly accepted practi

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Stathis Papaioannou
On 18/02/2008, John Ku <[EMAIL PROTECTED]> wrote: > Sure, pretty much anything could be used as a symbol to represent > anything else, but the representing would consist in the network of > causal interactions that constitute the symbol manipulation, not in > the symbols themselves. (And certainly

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread John Ku
On 2/17/08, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > If computation is multiply realizable, it could be seen as being > implemented by an endless variety of physical systems, with the right > mapping or interpretation, since anything at all could be arbitrarily > chosen to represent a tape

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Stathis Papaioannou
On 17/02/2008, John Ku <[EMAIL PROTECTED]> wrote: > Can you clarify this? What do you mean by "any computation can be > mapped onto any physical system"? I take it to be uncontroversial that > computations are multiply realizable or can be implemented by > different physical substrates but I don't

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread John Ku
On 2/17/08, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > In the final extrapolation of this idea it becomes clear that if any > computation can be mapped onto any physical system, the physical > system is superfluous and the computation resides in the mapping, an > abstract mathematical object

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread Stathis Papaioannou
On 17/02/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote: > The first problem arises from Lanier's trick of claiming that there is a > computer, in the universe of all possible computers, that has a machine > architecture and a machine state that is isomorphic to BOTH the neural > state of a bra

Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-16 Thread Richard Loosemore
Stathis Papaioannou wrote: On 17/02/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote: Lanier's rainstorm argument is spurious nonsense. That's the response of most functionalists, but an explanation as to why it is spurious nonsense is needed. And some such as Hans Moravec have actually conc