--- Charles D Hixson <[EMAIL PROTECTED]> wrote:
> Tom McCabe wrote:
> > --- Eugen Leitl <[EMAIL PROTECTED]> wrote:
> > > On Tue, Jun 05, 2007 at 01:24:04PM -0700, Tom McCabe wrote:
> > > > Unless, of course, that human turns out to be evil and
Tom McCabe wrote:
> --- Eugen Leitl <[EMAIL PROTECTED]> wrote:
> > On Tue, Jun 05, 2007 at 01:24:04PM -0700, Tom McCabe wrote:
> > > Unless, of course, that human turns out to be evil and
> >
> > That's why you need to screen them, and build a group with
> > checks and balances.
>
> If our psychology is so advanced that
--- Eugen Leitl <[EMAIL PROTECTED]> wrote:
> On Tue, Jun 05, 2007 at 01:24:04PM -0700, Tom McCabe wrote:
>
> > Unless, of course, that human turns out to be evil and
>
> That's why you need to screen them, and build a group with
> checks and balances.
If our psychology is so advanced that
On Tue, Jun 05, 2007 at 01:24:04PM -0700, Tom McCabe wrote:
> Unless, of course, that human turns out to be evil and
That's why you need to screen them, and build a group with
checks and balances.
> proceeds to use his power to create The Holocaust Part
> II. Seriously- out of all the people in po
Unless, of course, that human turns out to be evil and
proceeds to use his power to create The Holocaust Part
II. Seriously- out of all the people in positions of
power, a very large number are nasty jerks who abuse
that power. I can't think of a single great world
power that has not committed atro
On 31/05/2007, at 2:37 PM, Benjamin Goertzel wrote:
Eliezer considers my approach too risky in terms of the odds of
accidentally creating a nasty AI; I consider his approach to have
an overly high risk of delaying the advent of Friendly AI so long
that some other nasty danger wrecks humanity
Hi Joshua,
Eliezer and I aired our disagreements in a long series of
emails on the SL4 list several years ago. As each of us understands the
other's views fairly well now, I don't think either of us has interest in
taking time to recapitulate the debate!
It's true that those old
I have disagreements with [Eliezer] on many essential points
Ben,
I'd be fascinated to see your disagreements elucidated. (I recall
reading about a difference of opinion on whether guaranteed
Friendly AI is feasible. I'd like to see that explained in detail,
together with other points of di
Seems like SL4 has some whimsical booting policy, so if you don't curtsey to
the moderator you get booted. That speaks for itself.
From: Russell Wallace [mailto:[EMAIL PROTECTED]
The long and the short of it is that Richard and Eliezer got into a slagging
match about who had the longest dic
The long and the short of it is that Richard and Eliezer got into a slagging
match about who had the longest dick and Eliezer lost his temper and booted
Richard. When it comes down to it, we've all done stupid crap like that;
it's part of human nature; we get on with our lives.
(For what it's wor
On May 29, 2007, at 11:25 AM, Richard Loosemore wrote:
Samantha Atkins wrote:
While I have my own doubts about Eliezer's approach and likelihood
of success and about the extent of his biases and limitations, I
don't consider it fruitful to continue to bash Eliezer on various
lists once yo
Richard,
am I reading you right when I'm interpreting you to be saying that
there are several distinguished cogsci professionals who have read
Yudkowsky's writings and have serious disagreements with his major
points?
Well, you could take pretty much anything ever written about the theory of
On 5/29/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
I know of people from outside these lists who have taken a look at some
of Eliezer's writings. These people would go much further than I would:
they think he is an insane, ill-informed megalomaniac who is able to
distract people from his
On May 29, 2007, at 2:25 PM, Richard Loosemore wrote:
Forget the nasty details of the SL4 episode, Samantha. Drop it.
Just look at the wider picture. Look at the critiques against
Yudkowsky for their content, and then try to imagine that any
rational, academic researcher in his right m
Samantha Atkins wrote:
While I have my own doubts about Eliezer's approach and likelihood of
success and about the extent of his biases and limitations, I don't
consider it fruitful to continue to bash Eliezer on various lists once
you feel seriously slighted by him or convinced that he is hope
While I have my own doubts about Eliezer's approach and likelihood of
success and about the extent of his biases and limitations, I don't
consider it fruitful to continue to bash Eliezer on various lists
once you feel seriously slighted by him or convinced that he is
hopelessly mired or wha
Aleksei Riikonen wrote:
On 5/28/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
That said, your statement does probably "summarize Yudkowsky's writings"
quite well. But why are you even trying to summarize the writings of a
raving narcissist who does not have any qualifications in the AI field
Of the people who have a history of disagreeing with Yudkowsky, can
you find anyone with some respectability who would find your
descriptions "raving narcissist" and "explodes into titanic outbursts
of uncontrollable, embarrassing rage when someone with real knowledge
of this area dares to disag
On 5/28/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
That said, your statement does probably "summarize Yudkowsky's writings"
quite well. But why are you even trying to summarize the writings of a
raving narcissist who does not have any qualifications in the AI field?
Someone who explodes