On Fri, May 9, 2008 at 2:13 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> A rational agent only has to know that there are some things it cannot
> compute. In particular, it cannot understand its own algorithm.
>
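(One standard reading of the claim, though not necessarily the formalism Matt has in mind, is the diagonal argument behind the halting problem: no program can contain a correct, total predictor of its own behavior. A minimal Python sketch, where `predicts` is a hypothetical stand-in for any such total predictor:)

```python
def predicts(f):
    # Hypothetical predictor: supposed to report, for any zero-argument
    # function f, whether f() returns True. Being total, it must commit
    # to some answer; this placeholder always answers True, but the
    # contradiction below goes through for any concrete choice.
    return True

def contrarian():
    # Diagonal construction: do the opposite of whatever the predictor
    # says about this very function.
    return not predicts(contrarian)

# Whatever `predicts` claims about `contrarian`, the actual run refutes it:
assert contrarian() != predicts(contrarian)
```

(The predictor here is deliberately trivial; the point is that no replacement for it, however sophisticated, can be right about `contrarian`, since `contrarian` consults it and inverts its verdict.)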
Matt,

(I don't really expect you to give an answer to this question, as you didn't on a number of occasions before.)

Can you describe mathematically what you mean by "understanding its own algorithm", and sketch a proof of why it's impossible?

--
Vladimir Nesov
[EMAIL PROTECTED]

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com