----- Original Message ----
From: Matt Mahoney <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, May 8, 2008 8:29:02 PM
Subject: Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

--- Vladimir Nesov <[EMAIL PROTECTED]> wrote: 
> Matt,
> 
> (I don't really expect you to give an answer to this question, as you
> didn't on a number of occasions before.) Can you describe
> mathematically what you mean by "understanding its own algorithm", and
> sketch a proof of why it's impossible?


Informally I mean there are circumstances (at least one) where you can't
predict what you are going to think without thinking it.
----------------
I don't want to get into a quibble fest, but understanding is not necessarily
constrained to prediction: one can understand how an algorithm works without
being able to predict every output it will produce.
Jim Bromer
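
P.S. For concreteness: the impossibility Matt is gesturing at is usually argued
by diagonalization, in the same family as the halting problem. Below is a
minimal Python sketch of that argument (my own illustration with made-up names,
not Matt's proof and not anyone's implementation): a "contrarian" agent consults
a predictor about its own next choice and then does the opposite, so a predictor
that works by simulation can only answer by running the very computation it is
trying to foresee.

    # A minimal sketch (illustration only) of the diagonalization behind
    # "you can't predict what you are going to think without thinking it":
    # any predictor an agent can consult about its own next choice can be
    # defeated by an agent that does the opposite of the prediction.

    import sys

    def make_contrarian(predictor):
        """Build an agent that asks `predictor` for its own next choice,
        then deliberately picks the other option."""
        def agent():
            guess = predictor(agent)        # ask "what will I choose?"
            return "B" if guess == "A" else "A"
        return agent

    def simulating_predictor(agent):
        """A predictor that works by simulating the agent it is asked about.
        On the contrarian agent this recurses without end, i.e. it cannot
        predict the thought without thinking it."""
        return agent()

    agent = make_contrarian(simulating_predictor)

    sys.setrecursionlimit(200)              # keep the demonstration quick
    try:
        print("predicted-and-then-inverted choice:", agent())
    except RecursionError:
        print("prediction never terminates: simulating the agent just re-enters it")

Running it hits the RecursionError branch, which is the point: the only way this
predictor can answer is by performing the agent's own deliberation.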


