On 23 Oct 2006 at 12:59, Ben Goertzel wrote:
>>> Ditto with just about anything else that's at all innovative -- e.g. was
>>> Einstein's General Relativity a fundamental new breakthrough, or just a
>>> tweak on prior insights by Riemann and Hilbert?
>>     
>> I wonder if this is a sublime form of irony for a horribly naïve and
>> arrogant analogy to GR I drew on SL4 some years back :) 
> 
> Yes, I do remember that entertaining comment of yours, way back when... ;) 
> ... I assume you have backpedaled on that particular assertion by now, 
> though... 

Well, I still believe that there is a theoretical framework to AGI design
that will prove both incredibly useful in building AGIs in general and
pretty much essential to designing stable Friendly goal systems (and the
rational, causally clean AGIs to implement them). In fact I'm more sure of
that now than I was then. What was horribly wrong and naïve of me was
the implication that Eliezer had actually found/developed this framework,
as of early 2004. He'd heavily inspired my own progress up to that point,
and we had a somewhat-shared initial peek into what seemed like a new
realm of exciting possibilities for building verified, rational seed AIs,
and there was a huge clash of egos going on on SL4 at the time. What
can I say, I got carried away and started spouting SIAI-glorifying 
hyperbole, which I soon regretted. Though I have remained (often publicly)
opposed to emergence and 'fuzzy' design ever since first realising the true
consequences of the heavily enhanced-GA-based system I was working
on at the time, as far as I know I haven't made that particular
mistake again.

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]