And again, *thank you* for a great pointer!
- Original Message -
From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, May 27, 2008 8:04 AM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity
Outcomes...]
On Monday 26 May 2008 09:55:14 am, Mark Waser wrote:
> Josh,
>
> Thank you very much for the pointers (and replying so rapidly).
You're welcome -- but also lucky; I read/reply to this list a bit sporadically
in general.
>
> > You're very right that people misinterpret and over-extrapolate
When you try to use "logical" methods on an inductive (open and non-monotonic)
data space, the logical or rational methods will act more like heuristics at
best.
Yes, and they are entirely appropriate there as long as you realize the
shortcomings as well as the advantages and you document your
Mark Waser said:
Rationality and irrationality are interesting subjects . . . .
Many people who endlessly tout "rationality" use it as an exact synonym for
logical correctness and then argue not only that irrational means
"logically incorrect" and therefore wrong but that anything that can
Josh,
Thank you very much for the pointers (and replying so rapidly).
You're very right that people misinterpret and over-extrapolate econ and
game
theory, but when properly understood and applied, they are a valuable tool
for analyzing the forces shaping the further evolution of AGIs and i
On Sunday 25 May 2008 10:06:11 am, Mark Waser wrote:
> Read the appendix, p37ff. He's not making arguments -- he's explaining,
> with a
> few pointers into the literature, some parts of completely standard and
> accepted economics and game theory. It's all very basic stuff.
The problem with "acc
On Monday 26 May 2008 06:55:48 am, Mark Waser wrote:
> >> The problem with "accepted economics and game theory" is that in a proper
> >> scientific sense, they actually prove very little and certainly far, FAR
> >> less than people extrapolate them to mean (or worse yet, "prove").
> >
> > Abusus no
J Storrs Hall, PhD wrote:
On Sunday 25 May 2008 07:51:59 pm, Richard Loosemore wrote:
This is NOT the paper that is under discussion.
WRONG.
This is the paper I'm discussing, and is therefore the paper under discussion.
Josh, are you sure you're old enough to be using a computer without
In the context of Steve's paper, however, "rational" simply means an agent who
does not have a preference circularity.
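That "no preference circularity" condition can be made concrete with a small sketch (my own illustration, not anything from Steve's paper): model strict preferences as a directed graph, with an edge from each option to every option it is strictly preferred to, and check for a cycle such as A > B > C > A.

```python
# Illustrative sketch only: "rational" here means the agent's strict
# preferences contain no cycle (no A > B > C > A). We check for a
# cycle with a standard depth-first search using three node colors.

def has_preference_cycle(prefers):
    """prefers: dict mapping option -> set of options it is strictly
    preferred to. Returns True if the relation contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {node: WHITE for node in prefers}

    def visit(node):
        color[node] = GRAY
        for worse in prefers.get(node, ()):
            if color.get(worse, WHITE) == GRAY:
                return True               # back edge: preference cycle
            if color.get(worse, WHITE) == WHITE and worse in prefers and visit(worse):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in prefers)

# A > B > C is consistent; adding C > A creates a circularity.
consistent = {"A": {"B"}, "B": {"C"}, "C": set()}
circular   = {"A": {"B"}, "B": {"C"}, "C": {"A"}}
print(has_preference_cycle(consistent))  # False
print(has_preference_cycle(circular))    # True
```

An agent with the circular relation can be money-pumped (it will pay to trade A for C, C for B, B for A, forever), which is the usual reason circularity is taken as the mark of irrationality in this literature.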
On Sunday 25 May 2008 10:19:35 am, Mark Waser wrote:
> Rationality and irrationality are interesting subjects . . . .
>
> Many people who endlessly tout "rationality" use it as a
On Sunday 25 May 2008 07:51:59 pm, Richard Loosemore wrote:
> This is NOT the paper that is under discussion.
WRONG.
This is the paper I'm discussing, and is therefore the paper under discussion.
---
agi
Archives: http://www.listbox.com/member/archive/303
On Sunday 25 May 2008 10:06:11 am, Mark Waser wrote:
> > Read the appendix, p37ff. He's not making arguments -- he's explaining,
> > with a
> > few pointers into the literature, some parts of completely standard and
> > accepted economics and game theory. It's all very basic stuff.
>
> The proble
J Storrs Hall, PhD wrote:
The paper can be found at
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
Read the appendix, p37ff. He's not making arguments -- he's explaining, with a
few pointers into the literature, some parts of completely standard and
accept
Jim Bromer wrote:
- Original Message
From: Richard Loosemore <[EMAIL PROTECTED]>
Richard Loosemore said:
If you look at his paper carefully, you will see that at every step of
the way he introduces assumptions as if they were obvious facts ... and
in all the cases I have bothered to t
architecture. I am even tempted to argue that Richard is
so enamored with/ensnared in his MES vision that he may well be violating his
own concerns about building complex systems.
- Original Message -
From: Jim Bromer
To: agi@v2.listbox.com
Sent: Sunday, May 25, 2008 2:22 PM
S
- Original Message
From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]>
The paper can be found at
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
Read the appendix, p37ff. He's not making arguments -- he's explaining, with a
few pointers into the litera
deductive proofs -- which *require* closed systems).
- Original Message -
From: "Richard Loosemore" <[EMAIL PROTECTED]>
To:
Sent: Saturday, May 24, 2008 10:18 PM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity
Outcomes...]
[EMAIL PROTECTED]
4, 2008 10:03 PM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity
Outcomes...]
I was sitting in the room when they were talking about it and I didn't
feel like speaking up at the time (why break my streak?) but I felt he
was just wrong. It seemed like you could boil t
>
To:
Sent: Sunday, May 25, 2008 8:14 AM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity
Outcomes...]
The paper can be found at
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
Read the appendix, p37ff. He's not making argum
Original Message -
From: Jim Bromer
To: agi@v2.listbox.com
Sent: Sunday, May 25, 2008 6:26 AM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity
Outcomes...]
- Original Message
From: Richard Loosemore <[EMAIL PROTECTED]>
Richard Loos
The paper can be found at
http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
Read the appendix, p37ff. He's not making arguments -- he's explaining, with a
few pointers into the literature, some parts of completely standard and
accepted economics and game theory
- Original Message
From: Richard Loosemore <[EMAIL PROTECTED]>
Richard Loosemore said:
If you look at his paper carefully, you will see that at every step of
the way he introduces assumptions as if they were obvious facts ... and
in all the cases I have bothered to think through, these
[EMAIL PROTECTED] wrote:
I was sitting in the room when they were talking about it and I didn't
feel like speaking up at the time (why break my streak?) but I felt he
was just wrong. It seemed like you could boil the claim down to this:
If you are sufficiently advanced, and you have a goal an
I was sitting in the room when they were talking about it and I didn't
feel like speaking up at the time (why break my streak?) but I felt he
was just wrong. It seemed like you could boil the claim down to this:
If you are sufficiently advanced, and you have a goal and some
ability to mee
J Storrs Hall, PhD wrote:
On Saturday 24 May 2008 06:55:24 pm, Mark Waser wrote:
...Omohundro's claim...
YES! But his argument is that to fulfill *any* motivation, there are
generic submotivations (protect myself, accumulate power, don't let my
motivation get perverted) that will further th
Mark Waser wrote:
So if Omohundro's claim rests on the fact that "being self improving"
is part of the AGI's makeup, and that this will cause the AGI to do
certain things, develop certain subgoals etc. I say that he has
quietly inserted a *motivation* (or rather assumed it: does he ever
say
On Saturday 24 May 2008 06:55:24 pm, Mark Waser wrote:
> ...Omohundro's claim...
> YES! But his argument is that to fulfill *any* motivation, there are
> generic submotivations (protect myself, accumulate power, don't let my
> motivation get perverted) that will further the search to fulfill your
> motivation.
= = = = =
As a relevant aside, you never answered my question regarding how you
believed an MES system was different from a system with a *large* number of
goal stacks.
- Original Message -
From: "Richard Loosemore" <[EMAIL PROTECTED]>
To:
Sent: Friday,
Mark Waser wrote:
he makes a direct reference to goal driven systems, but even more
important he declares that these bad behaviors will *not* be the result
of us programming the behaviors in at the start but in an MES
system nothing at all will happen unless the designer makes an explicit
de
To:
Sent: Friday, May 23, 2008 2:13 PM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity
Outcomes...]
Kaj Sotala wrote:
Richard,
again, I must sincerely apologize for responding to this so horrendously
late. It's a dreadfully bad habit of mine: I get an e-mail
Kaj Sotala wrote:
Richard,
again, I must sincerely apologize for responding to this so
horrendously late. It's a dreadfully bad habit of mine: I get an e-mail
(or blog comment, or forum message, or whatever) that requires some
thought before I respond, so I don't answer it right away... and the
Steve,
I suspect I'll regret asking, but...
Does this rational belief make a difference to intelligence? (For the
moment confining the idea of intelligence to making good choices.)
If the AGI rationalized the existence of a higher power, what ultimate
bad choice do you see as a result? (I'v
Vladimir,
On 5/7/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>
> See http://www.overcomingbias.com/2008/01/newcombs-proble.html
This is a PERFECT talking point for the central point that I have been
trying to make. Belief in the Omega discussed early in that article is
essentially a religious
Steve Richfield wrote:
...
have played tournament chess. However, when faced with a REALLY GREAT
chess player (e.g. national champion), as I have had the pleasure of
on a couple of occasions, they at first appear to play as novices,
making unusual and apparently stupid moves that I can't quite
On Wed, May 7, 2008 at 11:14 AM, Steve Richfield
<[EMAIL PROTECTED]> wrote:
>
> On 5/6/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >
> > As your example illustrates, a higher intelligence will appear to be
> > irrational, but you cannot conclude from this that irrationality
> > implies intelligen
On 5/7/08, Kaj Sotala <[EMAIL PROTECTED]> wrote:
> Certainly a rational AGI may find it useful to appear irrational, but
> that doesn't change the conclusion that it'll want to think rationally
> at the bottom, does it?
Oh - and see also http://www.saunalahti.fi/~tspro1/reasons.html ,
especially
Kaj,
On 5/6/08, Kaj Sotala <[EMAIL PROTECTED]> wrote:
>
> Certainly a rational AGI may find it useful to appear irrational, but
> that doesn't change the conclusion that it'll want to think rationally
> at the bottom, does it?
The concept of rationality contains a large social component. For exa
Matt,
On 5/6/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> --- Steve Richfield <[EMAIL PROTECTED]> wrote:
>
> > I have played tournament chess. However, when faced with a REALLY
> GREAT
> > chess player (e.g. national champion), as I have had the pleasure of
> > on a
> > couple of occasions, the
On 5/7/08, Steve Richfield <[EMAIL PROTECTED]> wrote:
> Story: I recently attended an SGI Buddhist meeting with a friend who was a
> member there. After listening to their discussions, I asked if there was
> anyone there (from ~30 people) who had ever found themselves in a position of
> having t
--- Steve Richfield <[EMAIL PROTECTED]> wrote:
> I have played tournament chess. However, when faced with a REALLY
GREAT
> chess player (e.g. national champion), as I have had the pleasure of
> on a
> couple of occasions, they at first appear to play as novices, making
> unusual
> and apparently s
Kaj, Richard, et al,
On 5/5/08, Kaj Sotala <[EMAIL PROTECTED]> wrote:
>
> > > Drive 2: AIs will want to be rational
> > > This is basically just a special case of drive #1: rational agents
> > > accomplish their goals better than irrational ones, and attempts at
> > > self-improvement can be outri
Richard,
again, I must sincerely apologize for responding to this so
horrendously late. It's a dreadfully bad habit of mine: I get an e-mail
(or blog comment, or forum message, or whatever) that requires some
thought before I respond, so I don't answer it right away... and then
something related to
Charles D Hixson wrote:
Richard Loosemore wrote:
Kaj Sotala wrote:
On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
...
goals.
But now I ask: what exactly does this mean?
In the context of a Goal Stack system, this would be represented by a
top level goal that was stated in the know
Richard Loosemore wrote:
Kaj Sotala wrote:
On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
...
goals.
But now I ask: what exactly does this mean?
In the context of a Goal Stack system, this would be represented by a
top level goal that was stated in the knowledge representation
lan
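The "Goal Stack" representation being contrasted here can be sketched in a few lines (my own hypothetical illustration; names and structure are not from any system under discussion): the top-level goal sits at the bottom of a stack, subgoals are pushed on top of it, and the system always works on whatever goal is currently topmost.

```python
# Hypothetical sketch of a Goal Stack: the top-level goal is the
# bottom element; decomposition pushes subgoals; achieving a goal
# pops it, returning attention to its parent goal.

class GoalStackAgent:
    def __init__(self, top_level_goal):
        self.stack = [top_level_goal]        # bottom = top-level goal

    def current_goal(self):
        # The goal the system is actively working on (top of stack).
        return self.stack[-1] if self.stack else None

    def push_subgoal(self, subgoal):
        # Decompose the current goal into a more immediate subgoal.
        self.stack.append(subgoal)

    def achieve(self):
        # Mark the current goal satisfied and return to its parent.
        return self.stack.pop()

agent = GoalStackAgent("keep the lab tidy")
agent.push_subgoal("clear the bench")
agent.push_subgoal("find a storage box")
print(agent.current_goal())   # find a storage box
agent.achieve()
print(agent.current_goal())   # clear the bench
```

Richard's objection in this thread, as I read it, is precisely that a "newborn" AGI lacks the concepts needed to state that bottom element in its knowledge representation language at all.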
>> Drive 1: AIs will want to self-improve
>> This one seems fairly straightforward: indeed, for humans
>> self-improvement seems to be an essential part in achieving pretty
much *any* goal you are not immediately capable of achieving. If you
>> don't know how to do something needed to achieve you
Kaj Sotala wrote:
On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Kaj Sotala wrote:
> Alright. But previously, you said that Omohundro's paper, which to me
> seemed to be a general analysis of the behavior of *any* minds with
> (more or less) explicit goals, looked like it was based on
my paper is anywhere close to final :-)
- Original Message -
From: "Kaj Sotala" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, March 11, 2008 10:07 AM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity
Outcomes...]
On 3/3/08, Richard Loosemore <[EMA
On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Kaj Sotala wrote:
> > Alright. But previously, you said that Omohundro's paper, which to me
> > seemed to be a general analysis of the behavior of *any* minds with
> > (more or less) explicit goals, looked like it was based on a
> > 'goal
Kaj Sotala wrote:
On 2/16/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Kaj Sotala wrote:
> Well, the basic gist was this: you say that AGIs can't be constructed
> with built-in goals, because a "newborn" AGI doesn't yet have built up
> the concepts needed to represent the goal. Yet humans
"Kaj Sotala" <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity
> Outcomes...]
> Date: Sun, 2 Mar 2008 19:58:28 +0200
>
>
> On 2/16/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > Kaj
On 2/16/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Kaj Sotala wrote:
> > Well, the basic gist was this: you say that AGIs can't be constructed
> > with built-in goals, because a "newborn" AGI doesn't yet have built up
> > the concepts needed to represent the goal. Yet humans seem tend to
Kaj Sotala wrote:
Gah, sorry for the awfully late response. Studies aren't leaving me
the energy to respond to e-mails more often than once in a blue
moon...
On Feb 4, 2008 8:49 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
They would not operate at the "proposition level", so whatever
diffi
Gah, sorry for the awfully late response. Studies aren't leaving me
the energy to respond to e-mails more often than once in a blue
moon...
On Feb 4, 2008 8:49 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> They would not operate at the "proposition level", so whatever
> difficulties they have
Kaj Sotala wrote:
Richard,
[Where's your blog? Oh, and this is a very useful discussion, as it's
given me material for a possible essay of my own as well. :-)]
It is in the process of being set up: I am currently wrestling with the
process of getting to know the newest version (just released
On 1/30/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Kaj,
>
> [This is just a preliminary answer: I am composing a full essay now,
> which will appear in my blog. This is such a complex debate that it
> needs to be unpacked in a lot more detail than is possible here. Richard].
Richard,
[
Kaj Sotala wrote:
On Jan 29, 2008 6:52 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Okay, sorry to hit you with incomprehensible technical detail, but maybe
there is a chance that my garbled version of the real picture will
strike a chord.
The message to take home from all of this is that:
On Jan 29, 2008 6:52 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Okay, sorry to hit you with incomprehensible technical detail, but maybe
> there is a chance that my garbled version of the real picture will
> strike a chord.
>
> The message to take home from all of this is that:
>
> 1) There
Kaj Sotala wrote:
On 1/29/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Summary of the difference:
1) I am not even convinced that an AI driven by a GS will ever actually
become generally intelligent, because of the self-contradictions built
into the idea of a goal stack. I am fairly sure th
On 1/29/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Summary of the difference:
>
> 1) I am not even convinced that an AI driven by a GS will ever actually
> become generally intelligent, because of the self-contradictions built
> into the idea of a goal stack. I am fairly sure that whenever
Kaj Sotala wrote:
On 1/24/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Theoretically yes, but behind my comment was a deeper analysis (which I
have posted before, I think) according to which it will actually be very
difficult for a negative-outcome singularity to occur.
I was really trying
60 matches