Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-27 Thread Mark Waser
And again, *thank you* for a great pointer! - Original Message - From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]> To: Sent: Tuesday, May 27, 2008 8:04 AM Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...] On Monday 26 May 2008 0

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-27 Thread J Storrs Hall, PhD
On Monday 26 May 2008 09:55:14 am, Mark Waser wrote: > Josh, > > Thank you very much for the pointers (and replying so rapidly). You're welcome -- but also lucky; I read/reply to this list a bit sporadically in general. > > > You're very right that people misinterpret and over-extrapolate

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-26 Thread Mark Waser
When you try to use "logical" methods on an inductive (open and non-monotonic) data space, the logical or rational methods will act more like heuristics at best. Yes, and they are entirely appropriate there as long as you realize the shortcomings as well as the advantages and you document your

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-26 Thread Jim Bromer
Mark Waser said: Rationality and irrationality are interesting subjects . . . . Many people who endlessly tout "rationally" use it as an exact synonym for logical correctness and then argue not only that irrational then means "logically incorrect" and therefore wrong but that anything that can

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-26 Thread Mark Waser
Josh, Thank you very much for the pointers (and replying so rapidly). You're very right that people misinterpret and over-extrapolate econ and game theory, but when properly understood and applied, they are a valuable tool for analyzing the forces shaping the further evolution of AGIs and i

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-26 Thread Mark Waser
On Sunday 25 May 2008 10:06:11 am, Mark Waser wrote: > Read the appendix, p37ff. He's not making arguments -- he's explaining, > with a > few pointers into the literature, some parts of completely standard and > accepted economics and game theory. It's all very basic stuff. The problem with "acc

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-26 Thread J Storrs Hall, PhD
On Monday 26 May 2008 06:55:48 am, Mark Waser wrote: > >> The problem with "accepted economics and game theory" is that in a proper > >> scientific sense, they actually prove very little and certainly far, FAR > >> less than people extrapolate them to mean (or worse yet, "prove"). > > > > Abusus no

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Richard Loosemore
J Storrs Hall, PhD wrote: On Sunday 25 May 2008 07:51:59 pm, Richard Loosemore wrote: This is NOT the paper that is under discussion. WRONG. This is the paper I'm discussing, and is therefore the paper under discussion. Josh, are you sure you're old enough to be using a computer without

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
In the context of Steve's paper, however, "rational" simply means an agent who does not have a preference circularity. On Sunday 25 May 2008 10:19:35 am, Mark Waser wrote: > Rationality and irrationality are interesting subjects . . . . > > Many people who endlessly tout "rationally" use it as a
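For readers unfamiliar with the term, here is a minimal sketch of what "no preference circularity" means operationally, assuming the standard money-pump reading: pairwise strict preferences are treated as a directed graph, and any cycle (A over B over C over A) counts as a circularity. The example data and function name below are illustrative assumptions only, not taken from Steve's paper or this thread.

```python
# Toy illustration: "rational" in this narrow sense requires, among other
# things, that strict preferences contain no cycle (A > B > C > A), since a
# cyclic agent can be money-pumped. Hypothetical data, not from the paper.

def has_preference_cycle(prefers):
    """prefers: iterable of (better, worse) pairs. Returns True if the
    induced directed graph contains a cycle, i.e. a preference circularity."""
    graph = {}
    for better, worse in prefers:
        graph.setdefault(better, set()).add(worse)
        graph.setdefault(worse, set())

    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on DFS stack / finished
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GREY
        for nxt in graph[node]:
            if color[nxt] == GREY:        # back edge -> cycle found
                return True
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# Circular (money-pumpable) preferences: apple > banana > cherry > apple
print(has_preference_cycle([("apple", "banana"),
                            ("banana", "cherry"),
                            ("cherry", "apple")]))   # True
print(has_preference_cycle([("apple", "banana"),
                            ("banana", "cherry")]))  # False
```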

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
On Sunday 25 May 2008 07:51:59 pm, Richard Loosemore wrote: > This is NOT the paper that is under discussion. WRONG. This is the paper I'm discussing, and is therefore the paper under discussion.

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
On Sunday 25 May 2008 10:06:11 am, Mark Waser wrote: > > Read the appendix, p37ff. He's not making arguments -- he's explaining, > > with a > > few pointers into the literature, some parts of completely standard and > > accepted economics and game theory. It's all very basic stuff. > > The proble

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Richard Loosemore
J Storrs Hall, PhD wrote: The paper can be found at http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf Read the appendix, p37ff. He's not making arguments -- he's explaining, with a few pointers into the literature, some parts of completely standard and accept

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Richard Loosemore
Jim Bromer wrote: - Original Message From: Richard Loosemore <[EMAIL PROTECTED]> Richard Loosemore said: If you look at his paper carefully, you will see that at every step of the way he introduces assumptions as if they were obvious facts ... and in all the cases I have bothered to t

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Mark Waser
architecture. I am even tempted to argue that Richard is so enamored with/ensnared in his MES vision that he may well be violating his own concerns about building complex systems. - Original Message - From: Jim Bromer To: agi@v2.listbox.com Sent: Sunday, May 25, 2008 2:22 PM S

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Jim Bromer
- Original Message From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]> The paper can be found at http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf Read the appendix, p37ff. He's not making arguments -- he's explaining, with a few pointers into the litera

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Mark Waser
deductive proofs -- which *require* closed systems). - Original Message - From: "Richard Loosemore" <[EMAIL PROTECTED]> To: Sent: Saturday, May 24, 2008 10:18 PM Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...] [EMAIL PROTECTED]

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Mark Waser
4, 2008 10:03 PM Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...] I was sitting in the room when they were talking about it and I didn't feel like speaking up at the time (why break my streak?) but I felt he was just wrong. It seemed like you could boil t

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Mark Waser
> To: Sent: Sunday, May 25, 2008 8:14 AM Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...] The paper can be found at http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf Read the appendix, p37ff. He's not making argum

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Mark Waser
Original Message - From: Jim Bromer To: agi@v2.listbox.com Sent: Sunday, May 25, 2008 6:26 AM Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...] - Original Message From: Richard Loosemore <[EMAIL PROTECTED]> Richard Loos

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
The paper can be found at http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf Read the appendix, p37ff. He's not making arguments -- he's explaining, with a few pointers into the literature, some parts of completely standard and accepted economics and game theory

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread Jim Bromer
- Original Message From: Richard Loosemore <[EMAIL PROTECTED]> Richard Loosemore said: If you look at his paper carefully, you will see that at every step of the way he introduces assumptions as if they were obvious facts ... and in all the cases I have bothered to think through, these

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread Richard Loosemore
[EMAIL PROTECTED] wrote: I was sitting in the room when they were talking about it and I didn't feel like speaking up at the time (why break my streak?) but I felt he was just wrong. It seemed like you could boil the claim down to this: If you are sufficiently advanced, and you have a goal an

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread wannabe
I was sitting in the room when they were talking about it and I didn't feel like speaking up at the time (why break my streak?) but I felt he was just wrong. It seemed like you could boil the claim down to this: If you are sufficiently advanced, and you have a goal and some ability to mee

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread Richard Loosemore
J Storrs Hall, PhD wrote: On Saturday 24 May 2008 06:55:24 pm, Mark Waser wrote: ...Omohundro's claim... YES! But his argument is that to fulfill *any* motivation, there are generic submotivations (protect myself, accumulate power, don't let my motivation get perverted) that will further th

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread Richard Loosemore
Mark Waser wrote: So if Omohundro's claim rests on the fact that "being self improving" is part of the AGI's makeup, and that this will cause the AGI to do certain things, develop certain subgoals etc. I say that he has quietly inserted a *motivation* (or rather assumed it: does he ever say

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread J Storrs Hall, PhD
On Saturday 24 May 2008 06:55:24 pm, Mark Waser wrote: > ...Omohundro's claim... > YES! But his argument is that to fulfill *any* motivation, there are > generic submotivations (protect myself, accumulate power, don't let my > motivation get perverted) that will further the search to fulfill yo
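A toy sketch of the structure of the claim being debated here (the subgoal names and the trivially simple goal-stack expansion are illustrative assumptions of mine, not anything from Omohundro's paper): whatever terminal goal you hand an agent of this shape, the same generic instrumental subgoals get pushed, without anyone having programmed them in as ends in themselves.

```python
# Toy illustration of the structural claim under discussion. The subgoal
# wording and the planner are hypothetical and deliberately oversimplified.

GENERIC_SUBGOALS = [
    "preserve my own operation",       # "protect myself"
    "acquire resources/capability",    # "accumulate power"
    "protect my goal representation",  # "don't let my motivation get perverted"
]

def expand(terminal_goal):
    """Return a goal stack for any terminal goal (top of stack last)."""
    stack = [terminal_goal]
    # The contested step: these subgoals are added because they raise the
    # expected achievement of *any* terminal goal, not because "accumulate
    # power" was programmed in as an end in itself.
    stack.extend(GENERIC_SUBGOALS)
    return stack

for goal in ("prove the Riemann hypothesis", "make paperclips"):
    print(goal, "->", expand(goal))
```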

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread Mark Waser
your motivation. = = = = = As a relevant aside, you never answered my question regarding how you believed an MES system was different from a system with a *large* number of goal stacks. - Original Message - From: "Richard Loosemore" <[EMAIL PROTECTED]> To: Sent: Friday,

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-23 Thread Richard Loosemore
Mark Waser wrote: he makes a direct reference to goal driven systems, but even more important he declares that these bad behaviors will *not* be the result of us programming the behaviors in at the start but in an MES system nothing at all will happen unless the designer makes an explicit de

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-23 Thread Mark Waser
To: Sent: Friday, May 23, 2008 2:13 PM Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...] Kaj Sotala wrote: Richard, again, I must sincerely apologize for responding to this so horrendously late. It's a dreadful bad habit of mine: I get an e-mail

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-23 Thread Richard Loosemore
Kaj Sotala wrote: Richard, again, I must sincerely apologize for responding to this so horrendously late. It's a dreadful bad habit of mine: I get an e-mail (or blog comment, or forum message, or whatever) that requires some thought before I respond, so I don't answer it right away... and the

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-08 Thread Stan Nilsen
Steve, I suspect I'll regret asking, but... Does this rational belief make a difference to intelligence? (For the moment confining the idea of intelligence to making good choices.) If the AGI rationalized the existence of a higher power, what ultimate bad choice do you see as a result? (I'v

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Steve Richfield
Vladimir, On 5/7/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote: > > See http://www.overcomingbias.com/2008/01/newcombs-proble.html This is a PERFECT talking point for the central point that I have been trying to make. Belief in the Omega discussed early in that article is essentially a religious
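For anyone following the Overcoming Bias link, a quick expected-value sketch of the Newcomb's problem discussed there, using the textbook payoffs ($1,000 visible box, $1,000,000 opaque box) and an assumed predictor accuracy p; the numbers are the standard ones from the problem statement, not figures taken from this thread.

```python
# Worked expected-value sketch for the Newcomb's problem linked above.
# Assumed setup: box A always holds $1,000; Omega fills box B with $1,000,000
# only if it predicted you would take box B alone. p = predictor accuracy.

def expected_payoffs(p):
    one_box = p * 1_000_000                      # B filled iff one-boxing was predicted
    two_box = p * 1_000 + (1 - p) * 1_001_000    # B empty if two-boxing was predicted
    return one_box, two_box

for p in (0.5, 0.51, 0.9, 0.99):
    ob, tb = expected_payoffs(p)
    print(f"p={p:.2f}  one-box EV=${ob:,.0f}  two-box EV=${tb:,.0f}")

# One-boxing has the higher expected value once p exceeds about 0.5005, which
# is why how much credence you give Omega's accuracy (premise vs. "religious"
# belief) changes which choice comes out as the "rational" one.
```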

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Charles D Hixson
Steve Richfield wrote: ... have played tournament chess. However, when faced with a REALLY GREAT chess player (e.g. national champion), as I have had the pleasure of on a couple of occasions, they at first appear to play as novices, making unusual and apparently stupid moves that I can't quite

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Vladimir Nesov
On Wed, May 7, 2008 at 11:14 AM, Steve Richfield <[EMAIL PROTECTED]> wrote: > > On 5/6/08, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > > > As your example illustrates, a higher intelligence will appear to be > > irrational, but you cannot conclude from this that irrationality > > implies intelligen

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Kaj Sotala
On 5/7/08, Kaj Sotala <[EMAIL PROTECTED]> wrote: > Certainly a rational AGI may find it useful to appear irrational, but > that doesn't change the conclusion that it'll want to think rationally > at the bottom, does it? Oh - and see also http://www.saunalahti.fi/~tspro1/reasons.html , especially

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Steve Richfield
Kaj, On 5/6/08, Kaj Sotala <[EMAIL PROTECTED]> wrote: > > Certainly a rational AGI may find it useful to appear irrational, but > that doesn't change the conclusion that it'll want to think rationally > at the bottom, does it? The concept of rationality contains a large social component. For exa

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-07 Thread Steve Richfield
Matt, On 5/6/08, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > --- Steve Richfield <[EMAIL PROTECTED]> wrote: > > > I have played tournament chess. However, when faced with a REALLY > GREAT > > chess player (e.g. national champion), as I have had the pleasure of > > on a > > couple of occasions, the

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-06 Thread Kaj Sotala
On 5/7/08, Steve Richfield <[EMAIL PROTECTED]> wrote: > Story: I recently attended an SGI Buddhist meeting with a friend who was a > member there. After listening to their discussions, I asked if there was > anyone there (from ~30 people) who had ever found themselves in a position of > having t

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-06 Thread Matt Mahoney
--- Steve Richfield <[EMAIL PROTECTED]> wrote: > I have played tournament chess. However, when faced with a REALLY GREAT > chess player (e.g. national champion), as I have had the pleasure of > on a > couple of occasions, they at first appear to play as novices, making > unusual > and apparently s

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-06 Thread Steve Richfield
Kaj, Richard, et al, On 5/5/08, Kaj Sotala <[EMAIL PROTECTED]> wrote: > > > > Drive 2: AIs will want to be rational > > > This is basically just a special case of drive #1: rational agents > > > accomplish their goals better than irrational ones, and attempts at > > > self-improvement can be outri

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-05 Thread Kaj Sotala
Richard, again, I must sincerely apologize for responding to this so horrendously late. It's a dreadful bad habit of mine: I get an e-mail (or blog comment, or forum message, or whatever) that requires some thought before I respond, so I don't answer it right away... and then something related to

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-12 Thread Richard Loosemore
Charles D Hixson wrote: Richard Loosemore wrote: Kaj Sotala wrote: On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: ... goals. But now I ask: what exactly does this mean? In the context of a Goal Stack system, this would be represented by a top level goal that was stated in the know

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-11 Thread Charles D Hixson
Richard Loosemore wrote: Kaj Sotala wrote: On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: ... goals. But now I ask: what exactly does this mean? In the context of a Goal Stack system, this would be represented by a top level goal that was stated in the knowledge representation lan

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-11 Thread Mark Waser
>> Drive 1: AIs will want to self-improve >> This one seems fairly straightforward: indeed, for humans >> self-improvement seems to be an essential part in achieving pretty >> much *any* goal you are not immediately capable of achieving. If you >> don't know how to do something needed to achieve you

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-11 Thread Richard Loosemore
Kaj Sotala wrote: On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: Kaj Sotala wrote: > Alright. But previously, you said that Omohundro's paper, which to me > seemed to be a general analysis of the behavior of *any* minds with > (more or less) explicit goals, looked like it was based on

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-11 Thread Mark Waser
my paper is anywhere close to final :-) - Original Message - From: "Kaj Sotala" <[EMAIL PROTECTED]> To: Sent: Tuesday, March 11, 2008 10:07 AM Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...] On 3/3/08, Richard Loosemore <[EMA

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-11 Thread Kaj Sotala
On 3/3/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Kaj Sotala wrote: > > Alright. But previously, you said that Omohundro's paper, which to me > > seemed to be a general analysis of the behavior of *any* minds with > > (more or less) explicit goals, looked like it was based on a > > 'goal

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-03 Thread Richard Loosemore
Kaj Sotala wrote: On 2/16/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: Kaj Sotala wrote: > Well, the basic gist was this: you say that AGIs can't be constructed > with built-in goals, because a "newborn" AGI doesn't yet have built up > the concepts needed to represent the goal. Yet humans

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-02 Thread eldras
la" <[EMAIL PROTECTED]> > To: agi@v2.listbox.com > Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity > Outcomes...] > Date: Sun, 2 Mar 2008 19:58:28 +0200 > > > On 2/16/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: > > Kaj

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-03-02 Thread Kaj Sotala
On 2/16/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Kaj Sotala wrote: > > Well, the basic gist was this: you say that AGIs can't be constructed > > with built-in goals, because a "newborn" AGI doesn't yet have built up > > the concepts needed to represent the goal. Yet humans seem to tend to

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-02-16 Thread Richard Loosemore
Kaj Sotala wrote: Gah, sorry for the awfully late response. Studies aren't leaving me the energy to respond to e-mails more often than once in a blue moon... On Feb 4, 2008 8:49 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote: They would not operate at the "proposition level", so whatever diffi

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-02-15 Thread Kaj Sotala
Gah, sorry for the awfully late response. Studies aren't leaving me the energy to respond to e-mails more often than once in a blue moon... On Feb 4, 2008 8:49 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote: > They would not operate at the "proposition level", so whatever > difficulties they have

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-02-04 Thread Richard Loosemore
Kaj Sotala wrote: Richard, [Where's your blog? Oh, and this is a very useful discussion, as it's given me material for a possible essay of my own as well. :-)] It is in the process of being set up: I am currently wrestling with the process of getting to know the newest version (just released

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-02-03 Thread Kaj Sotala
On 1/30/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Kaj, > > [This is just a preliminary answer: I am composing a full essay now, > which will appear in my blog. This is such a complex debate that it > needs to be unpacked in a lot more detail than is possible here. Richard]. Richard, [

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-30 Thread Richard Loosemore
Kaj Sotala wrote: On Jan 29, 2008 6:52 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote: Okay, sorry to hit you with incomprehensible technical detail, but maybe there is a chance that my garbled version of the real picture will strike a chord. The message to take home from all of this is that:

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-30 Thread Stan Nilsen
Kaj Sotala wrote: On Jan 29, 2008 6:52 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote: Okay, sorry to hit you with incomprehensible technical detail, but maybe there is a chance that my garbled version of the real picture will strike a chord. The message to take home from all of this is that:

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-30 Thread Kaj Sotala
On Jan 29, 2008 6:52 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Okay, sorry to hit you with incomprehensible technical detail, but maybe > there is a chance that my garbled version of the real picture will > strike a chord. > > The message to take home from all of this is that: > > 1) There

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-29 Thread Richard Loosemore
Kaj Sotala wrote: On 1/29/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: Summary of the difference: 1) I am not even convinced that an AI driven by a GS will ever actually become generally intelligent, because of the self-contradictions built into the idea of a goal stack. I am fairly sure th

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-29 Thread Kaj Sotala
On 1/29/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Summary of the difference: > > 1) I am not even convinced that an AI driven by a GS will ever actually > become generally intelligent, because of the self-contradictions built > into the idea of a goal stack. I am fairly sure that whenever

[agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-28 Thread Richard Loosemore
Kaj Sotala wrote: On 1/24/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: Theoretically yes, but behind my comment was a deeper analysis (which I have posted before, I think) according to which it will actually be very difficult for a negative-outcome singularity to occur. I was really trying