On Tue, Jun 14, 2005 at 09:26:54AM -0700, Brian Holtz wrote:
> Hi everyone (in this world and all relevantly similar ones :-),
>
> I like the solution to the Induction / Dragon / Exploding Cow problem that I
> see in work by Malcolm, Standish, Tegmark, and Schmidhuber. So I forwarded
> references
John Mikes wrote:
> ... The posts that were accessible (for me) were those that started with a
> statement by the writer, not a lot of copies with reply-lines
> interjected. I know (and like to use) the practice of copying the phrases
> one replies to, but even in a 2-week archive it turns sour. After the first 30-40 posts
I appreciate your difficulty - I have the same problem whenever someone
sends a pure HTML email - I have to navigate down several layers of
menus, and the result is a barely human-readable message.
I also understand Microsoft Outlook has trouble understanding
RFC-compliant signed emails, hence I h
On Tue, Jun 14, 2005 at 04:39:57PM +0200, Bruno Marchal wrote:
>
>
> OK, but it can be misleading (especially in advanced stuff!). Neither a
> program, nor a machine, nor a body, nor a brain can think. A person can
> think, and manifest eself (I follow Patrick for the pronouns) through a
> program
Dear Russell and list:
this is a personal problem due to my extremely feeble computing skills.
I had (optimistically in the past tense) problems with my internet e-mail
connection and could not get/send e-mail since the date of this post. Then
twice I was lucky and got hundreds of emails at a time
Hi everyone (in this world and all
relevantly similar ones :-),
I like the solution to the Induction /
Dragon / Exploding Cow problem that I see in work by Malcolm, Standish, Tegmark,
and Schmidhuber. So I forwarded references to Alexander Pruss, whose
dissertation raises th
Hal wrote:
>I actually think this is a philosophically defensible position. Why should
>one OM care about another, merely because they happen to be linked by
>a body? There's no a priori reason why an OM should sacrifice; it doesn't
>get any benefit by doing so.
>But I'll tell you why we don't work
Tom wrote:
> Now if continuous consciousness is not necessarily required for immortality,
> then why are you waiting around for copying? Won't cloning come far sooner?
> What is it about copying that is better than cloning?
Stathis wrote:
> Why do you say that continuous consciousness is not necessary
> Saibal Mitra writes:
>
> >Because no such thing as free will exists, one has to consider three
> >different universes in which the three different choices are made. The
> >three universes will have comparable measures. The anthropic factor of
> >10^100 will then dominate and will cause the observer
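The weighting Saibal invokes is easy to make concrete. A minimal sketch in
Python, assuming made-up base measures and observer counts (only the 10^100
factor comes from his post):

    # How an anthropic factor of ~10^100 swamps universes of otherwise
    # comparable measure. All concrete numbers are illustrative placeholders.
    base_measure = {"A": 1.0, "B": 1.2, "C": 0.8}   # "comparable measures"
    observers = {"A": 1, "B": 1, "C": 10**100}      # anthropic factor

    weights = {u: base_measure[u] * observers[u] for u in base_measure}
    total = sum(weights.values())
    for u, w in weights.items():
        print(f"P(finding yourself in {u}) = {w / total:.3e}")
    # C gets probability ~1; A and B are suppressed by a factor of ~10^-100.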
On 14 June 2005, at 03:15, Russell Standish wrote:
On Mon, Jun 13, 2005 at 11:45:52AM +0200, Bruno Marchal wrote:
To Russell: I don't understand what you mean by a "conscious
description". Even the expression "conscious machine" can be misleading
at some point in the reasoning.
A description
On 13 June 2005, at 21:06, Jesse Mazer wrote:
Hal Finney wrote:
Jesse Mazer writes:
> If you impose the condition I discussed earlier that absolute probabilities
> don't change over time, or in terms of my analogy, that the water levels in
> each tank don't change because the total inflow rate to each tank matches
> the total outflow rate
Hal Finney writes:
Let us consider these flavors of altruism in the case of Stathis' puzzle:
> You are one of 10 copies who are being tortured. The copies are all being
> run in lockstep with each other, as would occur if 10 identical computers
> were running 10 identical sentient programs. A
On 14 June 2005, at 00:35, George Levy wrote:
Bruno Marchal wrote:
Gödel's theorem:
~Bf -> ~B(~Bf),
which is equivalent to B(Bf -> f) -> Bf.
Just a little aside à la Descartes + Gödel: (assume that "think" and
"believe" are synonymous
----- Original Message -----
From: "Stathis Papaioannou" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>;
Sent: Tuesday, June 14, 2005 08:06 AM
Subject: Re: more torture
> Saibal Mitra writes:
>
> >Because no such thing as free will exists, one has to consider three
> >different universes in which