Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-13 Thread Jim Bromer
- Original Message Stan Nilsen wrote: After thinking a bit more, I see that there are other ways to understand that do not deal with process. I can think of two cases of a different kind of understanding (and there are others, I'm sure). Case 1 example: Do you understand wood?

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-11 Thread Vladimir Nesov
On Sun, May 11, 2008 at 4:06 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Yes, but in this case the input to P is not (P,y), it is a self-reference to whatever program P is running plus y. It's irrelevant, because the description of P (or Q) could've been contained in the prefix that said simulate

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-11 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Sun, May 11, 2008 at 4:06 AM, Matt Mahoney [EMAIL PROTECTED] wrote: Yes, but in this case the input to P is not (P,y), it is a self-reference to whatever program P is running plus y. It's irrelevant, because the description of P (or Q)

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-11 Thread Vladimir Nesov
On Sun, May 11, 2008 at 8:57 PM, Matt Mahoney [EMAIL PROTECTED] wrote: If a machine P could simulate two other machines Q and R (each with n bits of memory), then P needs n+1 bits, n to reproduce all the states of Q or R, and 1 to remember which machine it is simulating. You described a
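
For what it's worth, the counting argument is easy to check numerically. The sketch below (mine, not from the thread) just counts the states a simulator of two n-bit machines has to distinguish:

import math

def simulator_state_count(n):
    # Which machine is being simulated (2 choices) times the 2**n
    # internal states of that machine.
    return 2 * 2 ** n

for n in (1, 4, 8):
    bits = math.ceil(math.log2(simulator_state_count(n)))
    print(f"n={n}: the simulator needs at least {bits} bits")  # always n+1

Distinguishing 2 * 2^n states takes ceil(log2(2 * 2^n)) = n+1 bits, which is the figure Matt gives.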

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-11 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Sun, May 11, 2008 at 8:57 PM, Matt Mahoney [EMAIL PROTECTED] wrote: If a machine P could simulate two other machines Q and R (each with n bits of memory), then P needs n+1 bits, n to reproduce all the states of Q or R, and 1 to remember

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Russell Wallace
On Sat, May 10, 2008 at 1:14 AM, Stan Nilsen [EMAIL PROTECTED] wrote: A test of understanding is whether one can give a correct *explanation* for any and all of the possible outputs that it (the thing to understand) produces. Unfortunately, explanation is just as ambiguous a word as understanding,

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Steve Richfield
Matt, On 5/9/08, Matt Mahoney [EMAIL PROTECTED] wrote: After many postings on this subject, I still assert that ANY rational AGI would be religious. Not necessarily. You execute a program P that inputs the conditions of the game and outputs 1 box or 2 boxes. Omega executes a

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Stan Nilsen
I don't understand why an *explanation* would be ambiguous. If I have a process / function that consistently transforms x into y, then doesn't the process serve as a non-ambiguous explanation of how y came into being? (presuming this is the thing to be explained.) If I offer a theory and
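
A trivial instance of Stan's point, as a sketch (the particular function is mine, chosen only for illustration): the rule below is a complete, unambiguous account of how each y came from its x.

def process(x):
    # y is explained entirely by this rule; nothing else is needed
    return 3 * x + 1

ys = [process(x) for x in range(5)]   # [1, 4, 7, 10, 13]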

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Stan Nilsen
Jim Bromer wrote: --- I think it is important to note that understanding a subject does not mean that you understand everything about the subject. This is not a reasonable proposal. I think Stan is saying that understanding an algorithm is giving an

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Matt Mahoney
--- Steve Richfield [EMAIL PROTECTED] wrote: Matt, On 5/9/08, Matt Mahoney [EMAIL PROTECTED] wrote: After many postings on this subject, I still assert that ANY rational AGI would be religious. Not necessarily. You execute a program P that inputs the conditions of

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Sat, May 10, 2008 at 5:01 AM, Matt Mahoney [EMAIL PROTECTED] wrote: OK, let me make more clear the distinction between running a program and simulating it. Say that a program P simulates a program Q if for all y, P((Q,y)) = the output
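
To make the definition concrete, here is a toy rendering in Python (encoding a "program description" as a plain callable is my simplification, not Matt's): P simulates Q when P, given the pair (Q, y), reproduces Q's output on y.

def P(pair):
    # A universal-style simulator: unpack the description of Q and its
    # input y, then reproduce Q's output.
    Q, y = pair
    return Q(y)

def Q(y):
    return y * 2

assert P((Q, 5)) == Q(5)   # the defining property, checked for one y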

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Vladimir Nesov
On Fri, May 9, 2008 at 4:29 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I claim there is no P such that P(P,y) = P(y) for all y. (I assume you mean something like P((P,y))=P(y)). If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for all y. -- Vladimir Nesov [EMAIL PROTECTED]
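
Nesov's counterexample is easy to verify mechanically; a minimal check in Python (my sketch) of the constant program P(s) = 0:

def P(s):
    return 0              # one answer to all questions

for y in range(10):
    # The constant program trivially satisfies P((P, y)) == P(y),
    # so the claim "no such P exists" fails for this degenerate case.
    assert P((P, y)) == P(y) == 0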

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Jim Bromer
- Original Message From: Matt Mahoney [EMAIL PROTECTED] --- Jim Bromer [EMAIL PROTECTED] wrote: I don't want to get into a quibble fest, but understanding is not necessarily constrained to prediction. What would be a good test for understanding an algorithm? -- Matt Mahoney,

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Stephen Reed
- Original Message From: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, May 8, 2008 11:02:33 PM Subject: Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers) --- Jim Bromer [EMAIL PROTECTED] wrote: I don't want to get into a quibble fest

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Steve Richfield
Matt, On 5/9/08, Stephen Reed [EMAIL PROTECTED] wrote: Skill: Trimming the whitespace off both ends of a character string. One of the many annoyances of writing real-world AI programs is having to write this function to replace the broken system functions that are supposed to do this, but
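
For reference, the skill in question is a one-liner in most modern languages; a Python sketch (standard library only, nothing assumed beyond str.strip):

def trim(s: str) -> str:
    # Remove spaces, tabs, and newlines from both ends of the string.
    return s.strip()

assert trim("  hello world \t\n") == "hello world"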

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Jim Bromer
On 5/9/08, Stephen Reed [EMAIL PROTECTED] wrote: Skill: Trimming the whitespace off both ends of a character string. One of the many annoyances of writing real-world AI programs is having to write this function to replace the broken system functions that are supposed to do this, but which

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Matt Mahoney
--- Steve Richfield [EMAIL PROTECTED] wrote: Matt, On 5/8/08, Matt Mahoney [EMAIL PROTECTED] wrote: --- Steve Richfield [EMAIL PROTECTED] wrote: On 5/7/08, Vladimir Nesov [EMAIL PROTECTED] wrote: See http://www.overcomingbias.com/2008/01/newcombs-proble.html After

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Fri, May 9, 2008 at 4:29 AM, Matt Mahoney [EMAIL PROTECTED] wrote: I claim there is no P such that P(P,y) = P(y) for all y. (I assume you mean something like P((P,y))=P(y)). If P(s)=0 (one answer to all questions), then P((P,y))=0 and

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Vladimir Nesov
On Sat, May 10, 2008 at 2:09 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Vladimir Nesov [EMAIL PROTECTED] wrote: (I assume you mean something like P((P,y))=P(y)). If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for all y. You're right. But we wouldn't say that the

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Stan Nilsen
Matt, You asked What would be a good test for understanding an algorithm? Thanks for posing this question. It has been a good exercise. Assuming that the key word here is understanding rather than algorithm, I submit: A test of understanding is whether one can give a correct *explanation* for

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-09 Thread Matt Mahoney
--- Vladimir Nesov [EMAIL PROTECTED] wrote: On Sat, May 10, 2008 at 2:09 AM, Matt Mahoney [EMAIL PROTECTED] wrote: --- Vladimir Nesov [EMAIL PROTECTED] wrote: (I assume you mean something like P((P,y))=P(y)). If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for

Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Matt Mahoney
--- Steve Richfield [EMAIL PROTECTED] wrote: On 5/7/08, Vladimir Nesov [EMAIL PROTECTED] wrote: See http://www.overcomingbias.com/2008/01/newcombs-proble.html After many postings on this subject, I still assert that ANY rational AGI would be religious. Not necessarily. You execute a
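
For readers new to the thread, a toy rendering of the setup in Python (the $1,000,000 / $1,000 payoffs are the standard Newcomb values; casting the player and Omega's prediction as calls to the same program P follows Matt's framing, the rest is my simplification):

def one_boxer(conditions):
    return 1

def two_boxer(conditions):
    return 2

def play_newcomb(P):
    prediction = P("conditions of the game")   # Omega simulates P first
    opaque = 1_000_000 if prediction == 1 else 0
    choice = P("conditions of the game")       # then P actually chooses
    return opaque if choice == 1 else opaque + 1_000

print(play_newcomb(one_boxer))   # 1000000
print(play_newcomb(two_boxer))   # 1000

Because Omega's prediction here is just another run of the same program, the predictor is perfect by construction, which is what makes one-boxing come out ahead in this toy version.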

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Vladimir Nesov
On Fri, May 9, 2008 at 2:13 AM, Matt Mahoney [EMAIL PROTECTED] wrote: A rational agent only has to know that there are some things it cannot compute. In particular, it cannot understand its own algorithm. Matt, (I don't really expect you to give an answer to this question, as you didn't on

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Jim Bromer
- Original Message From: Matt Mahoney [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, May 8, 2008 8:29:02 PM Subject: Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers) --- Vladimir Nesov [EMAIL PROTECTED] wrote: Matt, (I don't really expect you

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Matt Mahoney
--- Jim Bromer [EMAIL PROTECTED] wrote: I don't want to get into a quibble fest, but understanding is not necessarily constrained to prediction. What would be a good test for understanding an algorithm? -- Matt Mahoney, [EMAIL PROTECTED]

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Steve Richfield
Matt, On 5/8/08, Matt Mahoney [EMAIL PROTECTED] wrote: --- Steve Richfield [EMAIL PROTECTED] wrote: On 5/7/08, Vladimir Nesov [EMAIL PROTECTED] wrote: See http://www.overcomingbias.com/2008/01/newcombs-proble.html After many postings on this subject, I still assert that ANY

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-08 Thread Russell Wallace
On Fri, May 9, 2008 at 1:51 AM, Jim Bromer [EMAIL PROTECTED] wrote: I don't want to get into a quibble fest, but understanding is not necessarily constrained to prediction. Indeed, understanding is a fuzzy word that means lots of different things in different contexts. In the context of