Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread John Ku
On 2/20/08, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > On 21/02/2008, John Ku <[EMAIL PROTECTED]> wrote: > > > By the way, I think this whole tangent was actually started by Richard > > misinterpreting Lanier's argument (though quite understandabl

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-20 Thread John Ku
On 2/20/08, Stan Nilsen <[EMAIL PROTECTED]> wrote: > > It seems that when philosophy is implemented it becomes like nuclear > physics e.g. break down all the things we essentially understand until > we come up with pieces, which we give names to, and then admit we don't > know what the names identi

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-18 Thread John Ku
On 2/18/08, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > By the way, Lanier's idea is not original. Hilary Putnam, John Searle, > Tim Maudlin, Greg Egan, Hans Moravec, David Chalmers (see the paper > cited by Kaj Sotola in the original thread - > http://consc.net/papers/rock.html) have all con

Re: [singularity] AI critique by Jaron Lanier

2008-02-17 Thread John Ku
On 2/17/08, Matt Mahoney <[EMAIL PROTECTED]> wrote: > Nevertheless we can make similar reductions to absurdity with respect to > qualia, that which distinguishes you from a philosophical zombie. There is no > experiment to distinguish whether you actually experience redness when you see > a red o

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread John Ku
On 2/17/08, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > If computation is multiply realizable, it could be seen as being > implemented by an endless variety of physical systems, with the right > mapping or interpretation, since anything at all could be arbitrarily > chosen to represent a tape
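The multiple-realizability point Papaioannou raises can be made concrete with a toy sketch (illustrative only, not anything from the thread): given any sequence of distinct "physical" states, one can always construct an interpretation that pairs them with the successive states of a computation, which suggests the structure lives in the mapping rather than in the system.

```python
# Toy illustration of the mapping argument: any sequence of distinct
# physical states can be interpreted as implementing a computation.

# The computation: a 3-step counter (purely illustrative).
computation_trace = [0, 1, 2, 3]  # machine state after each step

# An arbitrary "physical system": any distinct-state sequence will do.
physical_trace = ["rock-t0", "rock-t1", "rock-t2", "rock-t3"]

# The interpretation: pair each physical state with a computational state.
interpretation = dict(zip(physical_trace, computation_trace))

# Under this mapping, the rock's history "implements" the counter:
recovered = [interpretation[s] for s in physical_trace]
assert recovered == computation_trace
```

The mapping is trivially constructible for any system with enough distinct states, which is exactly why the argument concludes that the physical substrate does no explanatory work.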

Re: Infinitely Unlikely Coincidences [WAS Re: [singularity] AI critique by Jaron Lanier]

2008-02-17 Thread John Ku
On 2/17/08, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: > In the final extrapolation of this idea it becomes clear that if any > computation can be mapped onto any physical system, the physical > system is superfluous and the computation resides in the mapping, an > abstract mathematical object

Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread John Ku
On 2/16/08, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > I would prefer to leave behind these counterfactuals altogether and > > try to use information theory and control theory to achieve a precise > > understanding of what it is for something to be the standard(s) in > > terms of which we are abl

Re: [singularity] AI critique by Jaron Lanier

2008-02-16 Thread John Ku
On 2/16/08, Matt Mahoney <[EMAIL PROTECTED]> wrote: > I believe his target is the existence of consciousness. There are many proofs > showing that the assumption of consciousness leads to absurdities, which I > have summarized at http://www.mattmahoney.net/singularity.html > In mathematics, it sh

Re: [singularity] AI critique by Jaron Lanier

2008-02-15 Thread John Ku
On 2/15/08, Eric B. Ramsay <[EMAIL PROTECTED]> wrote: > > I don't know when Lanier wrote the following but I would be interested to > know what the AI folks here think about his critique (or direct me to a > thread where this was already discussed). Also would someone be able to > re-state his rain

Re: [singularity] "Friendly" question...

2007-05-27 Thread John Ku
ny topics. The fact that much disagreement persists because of these sorts of reasons shouldn't make us despair or doubt that we have good methods for getting at the truth. John Ku - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?member_id=4007604&user_secret=7d7fb4d8

Re: [singularity] "Friendly" question...

2007-05-27 Thread John Ku
try to overcome to whatever extent possible), then great, it sounds like we don't really have much of a disagreement, except perhaps on some details. John Ku On 5/27/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: It was not perhaps so simple as you are portraying it. There

Re: [singularity] "Friendly" question...

2007-05-26 Thread John Ku
lutely absurd! Yet it's frustrating how many people seem to make that sort of error. Our genes programmed us to have various direct concerns. A mother will for instance directly care about her offspring, not care about her offspring in order to promote the human species or her own genes. John Ku --

Re: [singularity] "Friendly" question...

2007-05-26 Thread John Ku
with conflicts in people's reasons. John Ku

Re: [singularity] "Friendly" question...

2007-05-26 Thread John Ku
paper that will hopefully be fairly accessible to non-philosophers: http://www.umich.edu/~jsku/reasons.html John Ku

[singularity] Multiverse and Alien Singularities

2007-03-27 Thread John Ku
I argued previously that inflationary cosmology and its successes give us good reason to think there is probably a multiverse that spawns 10^37 *more* universes every second. I think Kurzweil has argued that there probably have not been any other singularities elsewhere in the universe already and

[singularity] Philanthropy & Singularity

2007-03-15 Thread John Ku
Does anyone know what Bill Gates thinks about the singularity? (Or for that matter, other great philanthropists.) On Kurzweil's "The Singularity is Near" (paperback edition), he has a blurb saying Kurzweil is "The best person I know at predicting the future of artificial intelligence." He also has

Re: [singularity] Scenarios for a simulated universe

2007-03-07 Thread John Ku
Shane, It still seems decidedly odd to me to call AIXI intelligent. For similar reasons, I wouldn't call a program that generates all possible strings of characters, sometimes randomly producing a literary masterpiece, artistic or creative. While I'm sympathetic to a functional account of these c
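The exhaustive generator Ku uses as an analogy can be sketched in a few lines (a hypothetical illustration, not code from the thread): a program that enumerates every finite string, shortest first, will eventually emit any literary masterpiece without plausibly counting as creative.

```python
from itertools import count, product

def all_strings(alphabet="ab"):
    """Yield every finite string over `alphabet`, shortest first.

    Given enough time, this emits any text expressible in the
    alphabet -- masterpieces included -- by brute enumeration.
    """
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

gen = all_strings()
first_seven = [next(gen) for _ in range(7)]
# first_seven == ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

The generator's eventual output is exhaustive, yet nothing about the procedure tracks why one string is a masterpiece and another is noise, which is the intuition behind denying it creativity.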

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread John Ku
Shane, Thanks for the thoughtful response. If something like infinite computation were feasible, I would agree with you that we should aim more at that than intelligence. I personally do have a very theoretical bent, but that does have its limits. :) However, it appears that infinite computation

Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread John Ku
On 3/5/07, Stathis Papaioannou <[EMAIL PROTECTED]> wrote: You seem to be equating intelligence with consciousness. Ned Block also seems to do this in his original paper. I would prefer to reserve "intelligence" for third person observable behaviour, which would make the Blockhead intelligent, a

Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread John Ku
On 3/4/07, Ben Goertzel <[EMAIL PROTECTED]> wrote: Richard, I long ago proposed a working definition of intelligence as "Achieving complex goals in complex environments." I then went through a bunch of trouble to precisely define all the component terms of that definition; you can consult the

Re: [singularity] Why We are Almost Certainly not in a Simulation

2007-03-03 Thread John Ku
On 3/3/07, Charles D Hixson <[EMAIL PROTECTED]> wrote: Yes, I see no valid argument asserting that this is not a simulation fiction that some other entity is experiencing. And there's no guarantee that sometime soon he won't "put down the book". But this assumption yields no valid guide as to

Re: [singularity] No non-circular argument for deduction?

2007-03-02 Thread John Ku
be a he.) Ku http://www.umich.edu/~jsku On 3/2/07, gts <[EMAIL PROTECTED]> wrote: On Fri, 02 Mar 2007 03:25:58 -0500, John Ku <[EMAIL PROTECTED]> wrote: > Skeptics are fond of pointing out that no non-circular argument can be > given to support inductive reasoning. That is tr

Re: [singularity] Why We are Almost Certainly not in a Simulation

2007-03-02 Thread John Ku
ke to confuse a basic *rule of inference* like deduction or induction with another *premise* that needs to be justified. -Ku http://www.umich.edu/~jsku On 3/2/07, Mitchell Porter <[EMAIL PROTECTED]> wrote: >From: "John Ku" <[EMAIL PROTECTED]> >I actually think there

[singularity] Re: Why We are Almost Certainly not in a Simulation

2007-03-01 Thread John Ku
le simulating other stuff we don't observe at a higher level of description? I guess there's lots of tricky issues here. I'd better stop here and see if anyone else cares to try and make some headway. -Ku On 3/1/07, John Ku <[EMAIL PROTECTED]> wrote: Hi everyone! I just joi

[singularity] Why We are Almost Certainly not in a Simulation

2007-03-01 Thread John Ku
observers who share our evidence set about our history, evolution, etc., there will be many more universes in which we were the first civilization to evolve than in which we came significantly after some other civilization. John Ku Philosophy Graduate Student University of Michigan http://www.umich.edu/~