On Mon, Nov 24, 2008 at 4:20 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I submitted my paper "A Model for Recursively Self Improving Programs" to
> JAGI and it is ready for open review. For those who have already read it,
> it is essentially the same paper except that I have expanded the abstract.
> The paper describes a mathematical model of RSI in closed environments
> (e.g. boxed AI) and shows that such programs exist in a certain sense. It
> can be found here:
>
> http://journal.agi-network.org/Submissions/tabid/99/ctrl/ViewOneU/ID/9/Default.aspx
*Thud.*

This was an interesting attempt to define RSI, and I really thought you were going to prove something interesting from it. And then, at the last minute, on the last page - *thud*.

Shane Legg, I don't mean to be harsh, but your attempt to link Kolmogorov complexity to intelligence is causing brain damage among impressionable youths. (Link debunked here: http://www.overcomingbias.com/2008/11/complexity-and.html )

-- 
Eliezer Yudkowsky
Research Fellow, Singularity Institute for Artificial Intelligence

-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now