Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stathis Papaioannou
On 21/02/2008, John Ku <[EMAIL PROTECTED]> wrote: > On 2/20/08, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > > On 21/02/2008, John Ku <[EMAIL PROTECTED]> wrote: > > > > > By the way, I think this whole tangent was actually started by Richard > > > misinterpreting Lanier's argument (though quite understandably given Lanier's vagueness and unclarity).

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread John Ku
On 2/20/08, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 21/02/2008, John Ku <[EMAIL PROTECTED]> wrote: > > > By the way, I think this whole tangent was actually started by Richard > > misinterpreting Lanier's argument (though quite understandably given > > Lanier's vagueness and unclarity).

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stathis Papaioannou
On 21/02/2008, John Ku <[EMAIL PROTECTED]> wrote: > By the way, I think this whole tangent was actually started by Richard > misinterpreting Lanier's argument (though quite understandably given > Lanier's vagueness and unclarity). Lanier was not imagining the > amazing coincidence of a genuine computer being implemented in a rainstorm

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore
John Ku wrote: By the way, I think this whole tangent was actually started by Richard misinterpreting Lanier's argument (though quite understandably given Lanier's vagueness and unclarity). Lanier was not imagining the amazing coincidence of a genuine computer being implemented in a rainstorm, i

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread John Ku
On 2/20/08, Stan Nilsen <[EMAIL PROTECTED]> wrote: > > It seems that when philosophy is implemented it becomes like nuclear > physics, e.g., break down all the things we essentially understand until > we come up with pieces, which we give names to, and then admit we don't > know what the names identify

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread gifting
Quoting Vladimir Nesov <[EMAIL PROTECTED]>: On Feb 20, 2008 6:13 AM, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: The possibility of mind uploading to computers strictly depends on functionalism being true; if it isn't then you may as well shoot yourself in the head as undergo a destructive upload.

Re: [singularity] Definitions

2008-02-20 Thread Richard Loosemore
Samantha Atkins wrote: Richard Loosemore wrote: John K Clark wrote: And I will define consciousness just as soon as you define "define". Ah, but that is exactly my approach. Thus, the subtitle I gave to my 2006 conference paper was "Explaining Consciousness by Explaining That You Cannot Explain It"

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore
Stathis Papaioannou wrote: On 20/02/2008, Eric B. Ramsay <[EMAIL PROTECTED]> wrote: During the late 70's when I was at McGill, I attended a public talk given by Feynman on quantum physics. After the talk, and in answer to a question posed from a member of the audience, Feynman said something

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Richard Loosemore
Stathis Papaioannou wrote: On 20/02/2008, Richard Loosemore <[EMAIL PROTECTED]> wrote: I am aware of some of those other sources for the idea: nevertheless, they are all nonsense for the same reason. I especially single out Searle: his writings on this subject are virtually worthless. I hav

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Stan Nilsen
Vladimir Nesov wrote: On Feb 20, 2008 6:13 AM, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: The possibility of mind uploading to computers strictly depends on functionalism being true; if it isn't then you may as well shoot yourself in the head as undergo a destructive upload. Functionalism (i

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread Vladimir Nesov
On Feb 20, 2008 6:13 AM, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > > The possibility of mind uploading to computers strictly depends on > functionalism being true; if it isn't then you may as well shoot > yourself in the head as undergo a destructive upload. Functionalism > (invented, and la