--- Charles D Hixson <[EMAIL PROTECTED]> wrote:
> Tom McCabe wrote:
> > -...
> > To quote:
> >
> > "I am not sure you are capable of following an
> > argument"
> >
> > If I'm not capable of even following an argument, it's
> > a pretty clear implication that I don't understand the
> > argument
Tom McCabe wrote:
-...
To quote:
"I am not sure you are capable of following an
argument"
If I'm not capable of even following an argument, it's
a pretty clear implication that I don't understand the
argument.
You have thus far made no attempt that I have been able to detect to
justify the
--- Jef Allbright <[EMAIL PROTECTED]> wrote:
> On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
> > "
> > I am not sure you are capable of following an argument
> > in a manner that makes it worth my while to continue.
> >
> > - s"
> >
> > So, you're saying that I have no idea what I'm talking
On 7/2/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
"
I am not sure you are capable of following an argument
in a manner that makes it worth my while to continue.
- s"
So, you're saying that I have no idea what I'm talking
about, so therefore you're not going to bother arguing
with me anymore. This
"
I am not sure you are capable of following an argument
in a manner that makes it worth my while to continue.
- s"
So, you're saying that I have no idea what I'm talking
about, so therefore you're not going to bother arguing
with me anymore. This is a classic example of an ad
hominem argument. T
Tom McCabe wrote:
--- Samantha Atkins <[EMAIL PROTECTED]> wrote:
Tom McCabe wrote:
--- Samantha Atkins <[EMAIL PROTECTED]> wrote:
Out of the bazillions of possible ways to configure
matter only a
ridiculously tiny fraction are more intelligent than
--- Samantha Atkins <[EMAIL PROTECTED]> wrote:
> Tom McCabe wrote:
> > --- Samantha Atkins <[EMAIL PROTECTED]> wrote:
> >
> >
> >>
> >> Out of the bazillions of possible ways to configure
> >> matter only a
> >> ridiculously tiny fraction are more intelligent than
> >> a cockroach. Yet
Tom McCabe wrote:
--- Samantha Atkins <[EMAIL PROTECTED]> wrote:
Out of the bazillions of possible ways to configure
matter only a
ridiculously tiny fraction are more intelligent than
a cockroach. Yet
it did not take any grand design effort upfront to
arrive at a world
overrun when
On Tue, Jun 26, 2007 at 07:14:25PM +, Niels-Jeroen Vandamme wrote:
> Until the planet is overcrowded with your cyberclones.
Planet, shmanet. There's GLYrs of real estate right up there.
--
Eugen* Leitl http://leitl.org
Until the planet is overcrowded with your cyberclones.
From: "Nathan Cook" <[EMAIL PROTECTED]>
Reply-To: singularity@v2.listbox.com
To: singularity@v2.listbox.com
Subject: Re: [singularity] critiques of Eliezer's views on AI (was: Re:
Personal attacks)
Date: Tue, 26 Jun
On Tue, Jun 26, 2007 at 10:14:04AM -0700, Tom McCabe wrote:
> > How about 20-30 sec of stopped blood flow. Instant
> > flat EEG. Or, hypothermia. Or, anaesthesia (barbies
> > are nice)
>
> This is human life, remember, so we had better be darn
> sure that all neuronal activity whatsoever has
> stopped
--- Eugen Leitl <[EMAIL PROTECTED]> wrote:
> On Mon, Jun 25, 2007 at 11:53:09PM -0700, Tom McCabe wrote:
>
> > Not so much "anesthetic" as "liquid helium", I think,
>
> How about 20-30 sec of stopped blood flow. Instant
> flat EEG. Or, hypothermia. Or, anaesthesia (barbies
> are nice)
This
On 25/06/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
--- Nathan Cook <[EMAIL PROTECTED]> wrote:
> I don't wish to retread old arguments, but there are a few theoretical outs.
> One could be uploaded bit by bit, one neuron at a time if necessary. One
> could be rendered unconscious, frozen, and scanned
On Mon, Jun 25, 2007 at 11:53:09PM -0700, Tom McCabe wrote:
> Not so much "anesthetic" as "liquid helium", I think,
How about 20-30 sec of stopped blood flow. Instant
flat EEG. Or, hypothermia. Or, anaesthesia (barbies
are nice)
> to be quadruply sure that all brain activity has
> stopped and the
On 5/31/07, Jey Kottalam <[EMAIL PROTECTED]> wrote:
on Google, but this returned 1,350 results. Are there any other
critiques I should be aware of? The only other one that I know of is
Bill Hibbard's at http://www.ssec.wisc.edu/~billh/g/mi.html . I
personally have not found much that I disagree
On Mon, Jun 25, 2007 at 06:20:51PM -0400, Colin Tate-Majcher wrote:
>
>When you talk about "uploading" are you referring to creating a copy
>of your consciousness? If that's the case then what do you do after
You die. The process is destructive.
>uploading, continue on with a mediocre
Ants I'm not sure about, but many species are still
here only because we, as humans, are not simple
optimization processes that turn everything they see
into paperclips. Even so, we regularly do the exact
same thing that people say AIs won't do: we bulldoze
into some area, set up developments, and
Not so much "anesthetic" as "liquid helium", I think,
to be quadruply sure that all brain activity has
stopped and the physical self and virtual self don't
diverge. People do have brain activity even while
unconscious.
- Tom
--- Jey Kottalam <[EMAIL PROTECTED]> wrote:
> On 6/25/07, Papiewski, John
Matt Mahoney wrote:
--- Tom McCabe <[EMAIL PROTECTED]> wrote:
These questions, although important, have little to do
with the feasibility of FAI.
These questions are important because AGI is coming, friendly or not. Will
our AGIs cooperate or compete? Do we upload ourselves?
...
-
Kaj Sotala wrote:
On 6/22/07, Charles D Hixson <[EMAIL PROTECTED]> wrote:
Dividing things into us vs. them, and calling those that side with us
friendly seems to be instinctually human, but I don't think that it's a
universal. Even then, we are likely to ignore birds, ants that are
outside, and
On 6/22/07, Charles D Hixson <[EMAIL PROTECTED]> wrote:
And *my* best guess is that most super-humanly intelligent AIs will just
choose to go elsewhere, and leave us alone.
My personal opinion is that intelligence explosions, whether artificial or not,
lead to great diversity and varied personalit
On 6/25/07, Papiewski, John <[EMAIL PROTECTED]> wrote:
The only way to do it is to gradually replace your brain cells with an
artificial substitute.
We could instead anesthetize the crap out of you, upload you, turn on
your upload, and then make soylent green out of your original.
-Jey Kottalam
From: [EMAIL PROTECTED]
Sent: Monday, June 25, 2007 5:21 PM
To: singularity@v2.listbox.com
Subject: Re: [singularity] critiques of Eliezer's views on AI (was: Re:
Personal attacks)
When you talk about "uploading" are you referring to creating a copy of
your consciousness? If that
When you talk about "uploading" are you referring to creating a copy of your
consciousness? If that's the case then what do you do after uploading,
continue on with a mediocre existence while your cyber-duplicate shoots past
you? Sure, it would have all of those wonderful abilities you mention,
--- Nathan Cook <[EMAIL PROTECTED]> wrote:
> I don't wish to retread old arguments, but there are a few theoretical outs.
> One could be uploaded bit by bit, one neuron at a time if necessary. One
> could be rendered unconscious, frozen, and scanned. I would find this
> frightening, but preferable
On 24/06/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
Do we upload? Consider the copy paradox. If there was an exact copy of you,
atom for atom, and you had to choose between killing the copy or yourself, I
think you would choose to kill the copy (and the copy would choose to kill
you). Does
--- Tom McCabe <[EMAIL PROTECTED]> wrote:
> These questions, although important, have little to do
> with the feasibility of FAI.
These questions are important because AGI is coming, friendly or not. Will
our AGIs cooperate or compete? Do we upload ourselves?
Consider the scenario of competing
These questions, although important, have little to do
with the feasibility of FAI. I think we can all agree
that the space of possible universe configurations
without sentient life of *any kind* is vastly larger
than the space of possible configurations with
sentient life, and designing an AGI to
--- Samantha Atkins <[EMAIL PROTECTED]> wrote:
>
> On Jun 21, 2007, at 8:14 AM, Tom McCabe wrote:
>
> >
> > We can't "know it" in the sense of a mathematical
> > proof, but it is a trivial observation that out of the
> > bazillions of possible ways to configure matter, only
> > a ridiculously
I think I am missing something on this discussion of friendliness. We seem to
tacitly assume we know what it means to be friendly. For example, we assume
that an AGI that does not destroy the human race is more friendly than one
that does. We also want an AGI to obey our commands, cure disease,
On 6/22/07, Charles D Hixson <[EMAIL PROTECTED]> wrote:
Dividing things into us vs. them, and calling those that side with us
friendly seems to be instinctually human, but I don't think that it's a
universal. Even then, we are likely to ignore birds, ants that are
outside, and other things that
On Jun 21, 2007, at 8:14 AM, Tom McCabe wrote:
We can't "know it" in the sense of a mathematical
proof, but it is a trivial observation that out of the
bazillions of possible ways to configure matter, only
a ridiculously tiny fraction are Friendly, and so it
is highly unlikely that a selected
And *my* best guess is that most super-humanly intelligent AIs will just
choose to go elsewhere, and leave us alone. (Well, most of those that
have any interest in personal survival; if you posit genetic AI as the
route to success, that will be most to all of them, but I'm much less
certain
Hi,
So, er, do you have an alternative proposal? Even if
the probability of A or B is low, if there are no
alternatives other than doom by old
age/nanowar/asteroid strike/virus/whatever, it is
still worthwhile to pursue them. Note that I don't
know how we could go about calculating what the
probability
--- Panu Horsmalahti <[EMAIL PROTECTED]> wrote:
> An AGI is not selected at random from all possible
> "minds"; it is designed by humans, therefore you can't
> apply the probability from the assumption that most
> AIs are unfriendly.
True; there is likely some bias towards Friendliness
in AIs
An AGI is not selected at random from all possible "minds"; it is designed
by humans, therefore you can't apply the probability from the assumption
that most AIs are unfriendly. There are many elements in the design of an
AGI that most researchers are likely to choose. I think it is safe to say
t
--- Benjamin Goertzel <[EMAIL PROTECTED]> wrote:
> > (Echoing Joshua Fox's request:) Ben, could you also tell us where you
> > disagree with Eliezer?
>
> Eliezer and I disagree on very many points, and also agree on very
> many points, but I'll mention a few key points here.
>
> (I also note
(Echoing Joshua Fox's request:) Ben, could you also tell us where you
disagree with Eliezer?
Eliezer and I disagree on very many points, and also agree on very
many points, but I'll mention a few key points here.
(I also note that Eliezer's opinions tend to be a moving target, so I
can't say for
On 5/30/07, Jey Kottalam <[EMAIL PROTECTED]> wrote:
Could you provide some links to your critiques of Eliezer's ideas on
AI? I tried a search for "Russell Wallace site:http://sl4.org/archive"
on Google, but this returned 1,350 results.
Here's one where I try to sum things up:
http://sl4.org/
On 5/30/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
(For what it's worth, I think Eliezer's a brilliantly eloquent philosopher,
particularly in his writing about Bayesian epistemology. I also think his
ideas on AI are castles in the air with no connection whatsoever to reality,
and I've said a