Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-16 Thread Jim Bromer
- Original Message - From: Matt Mahoney: I don't claim that compression is simple. It is not. Text compression is AI-complete. The general problem is not even computable. ...I claim that compression can be used to measure intelligence. I explain in more detail at
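
A minimal sketch of the compression-as-measurement idea, assuming any off-the-shelf general-purpose compressor can stand in for a predictive model (this is an illustration, not Mahoney's benchmark code; the corpus filename is hypothetical):

    import bz2, zlib

    text = open("corpus.txt", "rb").read()  # hypothetical sample corpus
    for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress)]:
        size = len(compress(text))
        # A model that predicts the text better encodes it in fewer bytes.
        print(f"{name}: {size} bytes, ratio {size / len(text):.3f}")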

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-16 Thread Matt Mahoney
--- Jim Bromer wrote: - Original Message - From: Matt Mahoney: I don't claim that compression is simple. It is not. Text compression is AI-complete. The general problem is not even computable. ...I claim that compression can be used to

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-16 Thread Jim Bromer
I had said: But this means that you are advancing a purely speculative theory without any evidence to support it. Matt said: The evidence is described in my paper, which you haven't read yet. I did glance at the paper and I don't think I will be

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-15 Thread Jim Bromer
- Original Message - From: Matt Mahoney: Your question-answering machine is algorithmically complex. A smaller program could describe a procedure for answering the questions, and in that case it could answer questions not in the original set of 1. Here is another
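
A toy contrast of the two descriptions (a hypothetical example, not one from the thread): a lookup table answers only the questions it stores, while a shorter procedure that captures the underlying rule also answers questions outside the original set.

    # Lookup table: algorithmically complex, answers only what it stores.
    table = {"2+2": "4", "3+5": "8"}

    # Shorter procedure: encodes the rule behind the table.
    def answer(question: str) -> str:
        a, b = question.split("+")
        return str(int(a) + int(b))

    print(table.get("7+9"))  # None -- not in the original set
    print(answer("7+9"))     # 16 -- the procedure generalizes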

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-15 Thread Matt Mahoney
--- Jim Bromer wrote: You can try to find the fundamentals of intelligence, that is, of algorithmic intelligence, but that does not mean that you will be able to produce intelligence before you find a theory that is complex enough to explain how artificial intelligence can be

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread William Pearson
Matt Mahoney: I am not sure what you mean by AGI. I consider a measure of intelligence to be the degree to which goals are satisfied in a range of environments. It does not matter what the goals are. They may seem irrational to you. The goal of a smart bomb is to blow itself up at a

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread Matt Mahoney
--- William Pearson wrote: Matt Mahoney: I propose prediction as a general test of understanding. For example, do you understand the sequence 0101010101010101? If I asked you to predict the next bit and you did so correctly, then I would say you understand it. What
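
A minimal sketch of prediction on Mahoney's example sequence, assuming "understanding" here means finding the shortest periodic rule consistent with the observed bits and extrapolating it:

    def predict_next(bits: str) -> str:
        # The shortest period that reproduces the data is the simplest model;
        # the loop always terminates, since p == len(bits) trivially fits.
        for p in range(1, len(bits) + 1):
            if all(bits[i] == bits[i % p] for i in range(len(bits))):
                return bits[len(bits) % p]

    print(predict_next("0101010101010101"))  # -> '0'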

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread Jim Bromer
- Original Message - Matt Mahoney said: Remember that the goal is to test for understanding in intelligent agents that are not necessarily human. What does it mean for a machine to understand something? What does it mean to understand a string of bits? I propose prediction as a general

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-14 Thread Matt Mahoney
--- Jim Bromer wrote: But, understanding = compression. That is really pretty far out there. This conclusion is based on an argument like: one would be able to predict everything if one were able to understand everything (or at least everything predictable). This argument,

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-13 Thread Stan Nilsen
Matt Mahoney wrote: Remember that the goal is to test for understanding in intelligent agents that are not necessarily human. What does it mean for a machine to understand something? What does it mean to understand a string of bits? Have you considered testing intelligent agents by simply

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-13 Thread Matt Mahoney
--- Stan Nilsen wrote: Matt Mahoney wrote: Remember that the goal is to test for understanding in intelligent agents that are not necessarily human. What does it mean for a machine to understand something? What does it mean to understand a string of bits?

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-13 Thread Stan Nilsen
Matt Mahoney wrote: --- Stan Nilsen wrote: Matt Mahoney wrote: Remember that the goal is to test for understanding in intelligent agents that are not necessarily human. What does it mean for a machine to understand something? What does it mean to understand a string of

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-13 Thread Matt Mahoney
--- Stan Nilsen wrote: Matt Mahoney wrote: --- Stan Nilsen wrote: Matt Mahoney wrote: Remember that the goal is to test for understanding in intelligent agents that are not necessarily human. What does it mean for a machine to understand

Re: [agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-12 Thread Matt Mahoney
--- Jim Bromer wrote: Matt Mahoney said, A formal explanation of a program P would be an equivalent program Q, such that P(x) = Q(x) for all x. Although it is not possible to prove equivalence in general, it is sometimes possible to prove nonequivalence by finding x such
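
A short sketch of the nonequivalence point, with hypothetical stand-ins for P and Q: equivalence of two programs is undecidable in general, but a single input x with P(x) != Q(x) is a proof that they differ.

    import random

    def P(x: int) -> int:
        return x * x

    def Q(x: int) -> int:
        return x * abs(x)  # agrees with P only when x >= 0

    def find_counterexample(trials: int = 10_000):
        # Random testing cannot prove P == Q, but any mismatch proves P != Q.
        for _ in range(trials):
            x = random.randint(-1000, 1000)
            if P(x) != Q(x):
                return x  # witness of nonequivalence
        return None  # inconclusive: no proof either way

    print(find_counterexample())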

[agi] Defining understanding (was Re: Newcomb's Paradox)

2008-05-10 Thread Matt Mahoney
--- Stan Nilsen wrote: I'm not understanding why an *explanation* would be ambiguous? If I have a process/function that consistently transforms x into y, then doesn't the process serve as a non-ambiguous explanation of how y came into being? (presuming this is the