Re: [agi] Preservation of goals in strongly self-modifying AI systems

2008-08-31 Thread Matt Mahoney
In response to Ben's paper on goal preservation, I think that identifying attractors or fixed points requires that we identify the sources of goal drift. Here are some:
- Information loss
- Software errors
- Deliberate modification
- Modification through learning
- Evolution
- Noise
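A minimal toy sketch of the fixed-point idea (hypothetical code, not from Ben's paper or anything in this thread): a goal vector takes a random perturbation each step, standing in for the drift sources above, while a correction term pulls it back toward the original goal, which then acts as an attractor of the dynamics.

import random

def drift_step(goal, original, noise=0.05, correction=0.2):
    # One update: Gaussian perturbation (a stand-in for the drift sources)
    # plus a pull back toward the original goal (self-correction).
    return [g + random.gauss(0, noise) + correction * (o - g)
            for g, o in zip(goal, original)]

original = [1.0, 0.0, 0.5]   # hypothetical numeric encoding of a top-level goal
goal = list(original)
for _ in range(1000):
    goal = drift_step(goal, original)

print("residual drift:", sum(abs(g - o) for g, o in zip(goal, original)))

With correction = 0 the goal performs an unbounded random walk away from the original; any positive correction keeps the expected drift bounded, which is the sense in which the original goal is a fixed point of these toy dynamics.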

Re: [agi] Preservation of goals in strongly self-modifying AI systems

2008-08-31 Thread Ben Goertzel
Pei,

> The concept of "top-level goals" (or "super goals") in this discussion
> is often ambiguous. It can mean (1) the initial (given or built-in)
> goal(s) from which all the other goals are derived, or (2) the
> dominating goal when conflicts happen among goals. Many people
> implicitly assume
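A small illustration of why the two senses come apart (hypothetical code, not NARS or any of Ben's systems): the root of the derivation hierarchy (sense 1) need not be the goal that dominates when goals conflict (sense 2).

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Goal:
    name: str
    priority: float                     # sense (2): weight used when goals conflict
    parent: Optional["Goal"] = None     # sense (1): the goal this one was derived from
    subgoals: List["Goal"] = field(default_factory=list)

    def derive(self, name, priority):
        child = Goal(name, priority, parent=self)
        self.subgoals.append(child)
        return child

root = Goal("built-in supergoal", priority=0.5)      # top-level in sense (1)
a = root.derive("learned subgoal A", priority=0.9)
b = root.derive("learned subgoal B", priority=0.4)

dominant = max([root, a, b], key=lambda g: g.priority)
print("derivation root:", root.name, "| dominant in conflicts:", dominant.name)
# A derived subgoal dominates: the "top-level" goal in sense (2)
# is not the "top-level" goal in sense (1).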

Re: [agi] Preservation of goals in strongly self-modifying AI systems

2008-08-31 Thread Pei Wang
On Sat, Aug 30, 2008 at 11:15 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>> (1) Whether "goal drift" (I call "task alienation" in
>> http://www.springer.com/west/home/computer/artificial?SGWID=4-147-22-173659733-0)
>> is always undesired --- your paper treats it as obviously bad.
>
> It's not always

Re: [agi] Preservation of goals in strongly self-modifying AI systems

2008-08-30 Thread Ben Goertzel
On Sat, Aug 30, 2008 at 10:43 PM, Pei Wang <[EMAIL PROTECTED]> wrote:
> Ben,
>
> Since your paper is on "preservation of AI goal systems under repeated
> self-modification", I wonder whether you should address the following
> related issues:
>
> (1) Whether "goal drift" (I call "task alienation" in

Re: [agi] Preservation of goals in strongly self-modifying AI systems

2008-08-30 Thread Pei Wang
Ben,

Since your paper is on "preservation of AI goal systems under repeated self-modification", I wonder whether you should address the following related issues:

(1) Whether "goal drift" (I call "task alienation" in http://www.springer.com/west/home/computer/artificial?SGWID=4-147-22-173659733-0)

Re: [agi] Preservation of goals in strongly self-modifying AI systems

2008-08-30 Thread Eric Burton
This is a good paper. Would read it again.

On 8/30/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Hi
>
> All who are interested in such topics and are willing to endure some raw
> speculative trains of thought
> may be interested in an essay I recently posted on goal-preservation in
> strongly self-mod

[agi] Preservation of goals in strongly self-modifying AI systems

2008-08-30 Thread Ben Goertzel
Hi,

All who are interested in such topics and are willing to endure some raw speculative trains of thought may be interested in an essay I recently posted on goal-preservation in strongly self-modifying systems, which is linked to from this blog post: http://multiverseaccordingtoben.blogspot.com/2008/