Derek, Tim,

There is no oversight: self-improvement doesn't necessarily refer to the actual instance of the self that is to be improved, but to the AGI's design. The next thing must be better than the previous one for runaway progress to happen, and one way of achieving that is for the next thing to be a refinement of the previous thing. Self-improvement 'in place' may, depending on the nature of the improvement, be preferable if it provides a way to efficiently transfer acquired knowledge from the previous version to the next one (possibly even without any modification).
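The distinction above (refining the *design* rather than patching the running instance, while carrying acquired knowledge forward unchanged) can be sketched as a toy loop. Everything here — the names, the numeric "capability" score, the accept-if-better rule — is an invented illustration of the argument, not anything from this thread:

```python
# Toy sketch of design-refinement self-improvement: each generation
# proposes a refined successor design, adopts it only if it scores
# better than the current one (the "next thing must be better"
# condition), and transfers accumulated knowledge to the successor
# as-is. All names and numbers are illustrative assumptions.

def refine(design):
    """Propose a successor design; here just a numeric capability bump."""
    return {"capability": design["capability"] + 1}

def score(design):
    return design["capability"]

def improvement_run(steps=5):
    design = {"capability": 0}
    knowledge = []  # acquired knowledge, handed over without modification
    for generation in range(steps):
        candidate = refine(design)
        if score(candidate) > score(design):  # successor must be better
            design = candidate
            knowledge.append(f"lesson-{generation}")
    return design, knowledge

final_design, knowledge = improvement_run()
```

Note the loop never mutates a running instance; it only replaces one design with a better successor, which is the sense of "self-improvement" argued for above.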
On 10/12/07, Derek Zahn <[EMAIL PROTECTED]> wrote:
>
> Tim Freeman:
>
> > No value is added by introducing considerations about self-reference
> > into conversations about the consequences of AI engineering.
> >
> > Junior geeks do find it impressive, though.
>
> The point of that conversation was to illustrate that if people are
> worried about Seed AI exploding, then one option is to not build Seed AI
> (since that is only one approach to developing AGI, and in fact I do not
> know of any actual project that includes it at present). Quoting
> Yudkowsky:
>
> > The task is not to build an AI with some astronomical level
> > of intelligence; the task is building an AI which is capable
> > of improving itself, of understanding and rewriting its own
> > source code.
>
> Perhaps only "junior geeks" like him find the concept relevant. You seem
> to think that self-reference buys you nothing at all since it is a simple
> matter for the first AGI projects to reinvent their own equivalent from
> scratch, but I'm not sure that's true.
>
> ________________________________
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&

-- 
Vladimir Nesov
mailto:[EMAIL PROTECTED]