Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Russell Wallace
On Sat, May 10, 2008 at 1:14 AM, Stan Nilsen [EMAIL PROTECTED] wrote: A test of understanding is whether one can give a correct *explanation* for any and all of the possible outputs that it (the thing to be understood) produces. Unfortunately, explanation is just as ambiguous a word as understanding,

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Steve Richfield
Matt, On 5/9/08, Matt Mahoney [EMAIL PROTECTED] wrote: After many postings on this subject, I still assert that ANY rational AGI would be religious. Not necessarily. You execute a program P that inputs the conditions of the game and outputs 1 box or 2 boxes. Omega executes a
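
A minimal sketch of the setup described above, assuming (as the quoted text does not state) that Omega predicts by running the player's own decision program P; the payoff amounts are the conventional ones from the paradox:

```python
# Minimal sketch of the Newcomb setup quoted above. Assumption (not
# stated in the thread): Omega predicts by running the player's own
# decision program P before filling the boxes.

def one_box(conditions):
    """A decision program P that always takes only the opaque box."""
    return 1

def two_box(conditions):
    """A decision program P that always takes both boxes."""
    return 2

def play(P):
    prediction = P({})                        # Omega simulates P
    opaque = 1000000 if prediction == 1 else 0
    transparent = 1000
    choice = P({})                            # the actual game
    return opaque if choice == 1 else opaque + transparent

print(play(one_box))   # 1000000
print(play(two_box))   # 1000
```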

Re: [agi] Self-maintaining Architecture first for AI

2008-05-10 Thread William Pearson
2008/5/10 Richard Loosemore [EMAIL PROTECTED]: This is still quite ambiguous on a number of levels, so would it be possible for you to give us a road map of where the argument is going? At the moment I am not sure what the theme is. That is because I am still unsure as to what the later

Re: [agi] Self-maintaining Architecture first for AI

2008-05-10 Thread Russell Wallace
On Sat, May 10, 2008 at 8:38 AM, William Pearson [EMAIL PROTECTED] wrote: 2) A system similar to automatic programming that takes descriptions in a formal language from outside (and potentially malicious) sources and generates a program from them. The language would be sufficient to
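
One rough way such a generator might look, sketched in Python; the tiny arithmetic grammar, the helper names, and the whitelisting approach are all invented here, since the message does not specify the formal language:

```python
import ast
import operator

# Whitelisted operators: anything outside this table is rejected,
# so a malicious source cannot smuggle in arbitrary code.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

def generate(description):
    """Compile a description like '(x + 1) * 2' into a function of x,
    refusing any construct outside the tiny whitelisted grammar."""
    tree = ast.parse(description, mode="eval").body

    def build(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            op = OPS[type(node.op)]
            left, right = build(node.left), build(node.right)
            return lambda x: op(left(x), right(x))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return lambda x, v=node.value: v
        if isinstance(node, ast.Name) and node.id == "x":
            return lambda x: x
        raise ValueError("description outside the allowed language")

    return build(tree)

f = generate("(x + 1) * 2")
print(f(3))                           # 8
try:
    generate("__import__('os')")      # a malicious description
except ValueError as err:
    print(err)                        # rejected, no code generated
```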

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Stan Nilsen
I don't understand why an *explanation* would be ambiguous. If I have a process / function that consistently transforms x into y, then doesn't the process serve as an unambiguous explanation of how y came into being? (presuming this is the thing to be explained.) If I offer a theory and
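
A toy rendering of that point, with an invented function standing in for "the process":

```python
# If this deterministic process is what produced y from x, then its
# body *is* the unambiguous explanation of how y came into being.

def process(x):
    return x * x + 1   # square the input, then add one

x = 3
y = process(x)         # y == 10, and process() explains why
print(y)
```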

Re: [agi] organising parallel processes, try2

2008-05-10 Thread rooftop8000
Do you think a hierarchical structure could be too restrictive? What if low-level processes need to make a snap decision to turn off high-level ones? How are new processes put into the hierarchy? What if a high-level process is faulty and should be deactivated? I think the 'scheduling' should be a
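
One possible arrangement answering the snap-decision concern, sketched with invented names: authority to deactivate comes from priority at decision time, not from a fixed position in a hierarchy:

```python
class Process:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority
        self.active = True

    def deactivate(self, other):
        # Any active process may switch another off; authority comes
        # from priority at decision time, not from a fixed hierarchy.
        other.active = False

def schedule(processes):
    """Run order: active processes, highest priority first."""
    return sorted((p for p in processes if p.active),
                  key=lambda p: p.priority, reverse=True)

reflex = Process("collision-reflex", priority=10)
planner = Process("long-term-planner", priority=1)
reflex.deactivate(planner)            # snap decision from a low-level process
print([p.name for p in schedule([reflex, planner])])   # ['collision-reflex']
```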

Re: [agi] organising parallel processes, try2

2008-05-10 Thread Stephen Reed
rooftop8000 [EMAIL PROTECTED] wrote: Do you think a hierarchical structure could be too restrictive? No, I have not yet found a use case that would

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Stan Nilsen
Jim Bromer wrote: I think it is important to note that understanding a subject does not mean that you understand everything about the subject. This is not a reasonable proposal. I think Stan is saying that understanding an algorithm is giving an

Re: [agi] Self-maintaining Architecture first for AI

2008-05-10 Thread Russell Wallace
On Sat, May 10, 2008 at 10:10 PM, William Pearson [EMAIL PROTECTED] wrote: It depends on the system you are designing on. I think you can easily create as many types of sandbox as you want in the programming language E (1), for example. If the principle of least authority (2) is embedded in the
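
A loose illustration of the principle of least authority in Python (which, unlike E, does not enforce it at the language level); all names here are hypothetical:

```python
# Loose illustration of the principle of least authority (POLA).
# Python shows the pattern by convention only; E enforces it.

class ReadOnlyFile:
    """A capability object granting read access and nothing else."""
    def __init__(self, path):
        self._path = path

    def read(self):
        with open(self._path) as f:
            return f.read()

def untrusted_component(file_cap):
    # The component holds only a ReadOnlyFile capability, so it has
    # no reference through which to write or delete the file.
    return len(file_cap.read())

# print(untrusted_component(ReadOnlyFile("some_file.txt")))
```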

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Matt Mahoney
Steve Richfield [EMAIL PROTECTED] wrote: Matt, On 5/9/08, Matt Mahoney [EMAIL PROTECTED] wrote: After many postings on this subject, I still assert that ANY rational AGI would be religious. Not necessarily. You execute a program P that inputs the conditions of

Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)

2008-05-10 Thread Matt Mahoney
Vladimir Nesov [EMAIL PROTECTED] wrote: On Sat, May 10, 2008 at 5:01 AM, Matt Mahoney [EMAIL PROTECTED] wrote: OK, let me make clearer the distinction between running a program and simulating it. Say that a program P simulates a program Q if for all y, P((Q,y)) = the output
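
The simulation relation quoted above can be written down directly; encoding programs as Python callables is an illustrative choice, not one from the thread:

```python
def P(pair):
    """A trivial simulator: decode (Q, y) and return Q's output on y."""
    Q, y = pair
    return Q(y)

def Q(y):
    return y + 1

assert P((Q, 5)) == Q(5)   # P((Q, y)) equals Q's output on y, as required
print(P((Q, 5)))           # 6
```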

[agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-10 Thread Matt Mahoney
Stan Nilsen [EMAIL PROTECTED] wrote: I don't understand why an *explanation* would be ambiguous. If I have a process / function that consistently transforms x into y, then doesn't the process serve as an unambiguous explanation of how y came into being? (presuming this is the