- Original Message
Stan Nilsen wrote:
After thinking a bit more, I see that there are other ways to understand that do not deal with process. I can think of two cases of a different kind of understanding (and there are others, I'm sure).
Case 1 example: Do you understand wood?
On Sun, May 11, 2008 at 4:06 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Yes, but in this case the input to P is not (P,y); it is a self-reference to whatever program P is running, plus y.
It's irrelevant, because the description of P (or Q) could've been contained in the prefix that said "simulate".
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Sun, May 11, 2008 at 4:06 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Yes, but in this case the input to P is not (P,y); it is a self-reference to whatever program P is running, plus y.
It's irrelevant, because the description of P (or Q) could've been contained in the prefix that said "simulate".
On Sun, May 11, 2008 at 8:57 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
If a machine P could simulate two other machines Q and R (each with n bits of memory), then P needs n+1 bits: n to reproduce all the states of Q or R, and 1 to remember which machine it is simulating. You described a
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Sun, May 11, 2008 at 8:57 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
If a machine P could simulate two other machines Q and R (each with n bits of memory), then P needs n+1 bits: n to reproduce all the states of Q or R, and 1 to remember which machine it is simulating.
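[Editor's note: a minimal Python sketch of the counting argument above, with toy machines and assumed dynamics; nothing below is from the thread. Q and R each carry n bits of state, and the simulator P keeps those n bits plus one extra bit recording which machine it is reproducing.]

N = 8  # n: bits of memory in each of Q and R

def step_q(state: int) -> int:
    """Toy n-bit machine Q: increment its state modulo 2**N."""
    return (state + 1) % (2 ** N)

def step_r(state: int) -> int:
    """Toy n-bit machine R: double its state modulo 2**N."""
    return (state * 2) % (2 ** N)

class SimulatorP:
    """Holds n bits of state plus the 1 selector bit: n+1 bits in all."""

    def __init__(self, which: int, state: int = 0):
        self.which = which  # the extra bit: 0 = simulating Q, 1 = simulating R
        self.state = state  # the n bits reproducing Q's or R's state

    def step(self) -> int:
        advance = step_r if self.which else step_q
        self.state = advance(self.state)
        return self.state

p = SimulatorP(which=0)       # P configured to simulate Q
assert p.step() == step_q(0)  # P reproduces Q's next state exactly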
On Sat, May 10, 2008 at 1:14 AM, Stan Nilsen [EMAIL PROTECTED] wrote:
A test of understanding is if one can give a correct *explanation* for any
and all of the possible outputs that it (the thing to understand) produces.
Unfortunately, "explanation" is just as ambiguous a word as "understanding",
Matt,
On 5/9/08, Matt Mahoney [EMAIL PROTECTED] wrote:
After many postings on this subject, I still assert that
ANY rational AGI would be religious.
Not necessarily. You execute a program P that inputs the conditions of the game and outputs 1 box or 2 boxes. Omega executes a
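[Editor's note: a toy Python rendering of the setup Matt sketches; not from the thread. The payoff amounts are the standard $1,000 and $1,000,000 from Newcomb's problem, and this Omega predicts by literally executing the player's program, both assumptions for illustration.]

def player_program(conditions: dict) -> int:
    """P: inputs the conditions of the game, outputs 1 box or 2 boxes."""
    return 1  # this P is a one-boxer

def omega_predicts_one_box(program, conditions: dict) -> bool:
    """Omega executes the player's own program to predict its choice."""
    return program(conditions) == 1

def play(program) -> int:
    conditions = {"boxes": ("opaque", "visible")}
    opaque = 1_000_000 if omega_predicts_one_box(program, conditions) else 0
    choice = program(conditions)
    return opaque if choice == 1 else opaque + 1_000

print(play(player_program))  # 1000000: one-boxing against this predictor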
I'm not understanding why an *explanation* would be ambiguous. If I have a process/function that consistently transforms x into y, then doesn't the process serve as an unambiguous explanation of how y came into being (presuming this is the thing to be explained)?
If I offer a theory and
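[Editor's note: a toy reading of the point above, illustrative only. When a deterministic process produces y from x, the process itself can be offered as the unambiguous explanation.]

def process(x: int) -> int:
    return 3 * x + 1  # the entire "explanation" of every output

y = process(7)  # y == 22, and the line above says exactly why
assert y == 22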
Jim Bromer wrote:
I think it is important to note that understanding a subject does not
mean that you understand everything about the subject. This is not a
reasonable proposal. I think Stan is saying that understanding an
algorithm is giving an
--- Steve Richfield [EMAIL PROTECTED] wrote:
Matt,
On 5/9/08, Matt Mahoney [EMAIL PROTECTED] wrote:
After many postings on this subject, I still assert that
ANY rational AGI would be religious.
Not necessarily. You execute a program P that inputs the conditions of the game and outputs 1 box or 2 boxes.
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Sat, May 10, 2008 at 5:01 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
OK, let me make clearer the distinction between running a program and simulating it. Say that a program P simulates a program Q if for all y, P((Q,y)) = the output of Q(y).
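[Editor's note: a sketch of the definition just quoted, with assumed encodings that are not from the thread. Here "programs" are plain Python functions and the simulator's input is the pair (Q, y); P simulates Q if P((Q, y)) equals Q's output on y.]

def P(pair):
    Q, y = pair
    return Q(y)  # run Q on y and report Q's output

def Q(y):
    return y * y

assert all(P((Q, y)) == Q(y) for y in range(100))  # P simulates Q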
On Fri, May 9, 2008 at 4:29 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
I claim there is no P such that P(P,y) = P(y) for all y.
(I assume you mean something like P((P,y))=P(y)).
If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for all y.
--
Vladimir Nesov
[EMAIL PROTECTED]
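[Editor's note: Vladimir's counterexample checks out mechanically; a minimal sketch in which the tuple encoding of (P, y) is an assumption. The constant program gives one answer to all questions, so feeding it a description of itself changes nothing.]

def P(s):
    return 0  # one answer to all questions

for y in range(10):
    assert P((P, y)) == P(y) == 0  # so P((P,y)) = P(y) for these y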
- Original Message
From: Matt Mahoney [EMAIL PROTECTED]
--- Jim Bromer [EMAIL PROTECTED] wrote:
I don't want to get into a quibble fest, but understanding is not
necessarily constrained to prediction.
What would be a good test for understanding an algorithm?
-- Matt Mahoney,
- Original Message
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, May 8, 2008 11:02:33 PM
Subject: Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)
--- Jim Bromer [EMAIL PROTECTED] wrote:
I don't want to get into a quibble fest
Matt,
On 5/9/08, Stephen Reed [EMAIL PROTECTED] wrote:
Skill: Trimming the whitespace off both ends of a character string.
One of the many annoyances of writing real-world AI programs is having to write this function to replace the broken system functions that are supposed to do this, but
On 5/9/08, Stephen Reed [EMAIL PROTECTED] wrote:
Skill: Trimming the whitespace off both ends of a character string.
One of the many annoyances of writing real-world AI programs is having to write this function to replace the broken system functions that are supposed to do this, but which
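[Editor's note: a from-scratch Python version of the skill Stephen names, standing in for the "broken system functions"; which characters count as whitespace is an assumption here.]

WHITESPACE = " \t\r\n\f\v"

def trim(s: str) -> str:
    """Remove whitespace from both ends of s, without using str.strip()."""
    start, end = 0, len(s)
    while start < end and s[start] in WHITESPACE:
        start += 1
    while end > start and s[end - 1] in WHITESPACE:
        end -= 1
    return s[start:end]

assert trim(" \t hello world \n") == "hello world"
assert trim("   ") == ""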
--- Steve Richfield [EMAIL PROTECTED] wrote:
Matt,
On 5/8/08, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Steve Richfield [EMAIL PROTECTED] wrote:
On 5/7/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
See http://www.overcomingbias.com/2008/01/newcombs-proble.html
After
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Fri, May 9, 2008 at 4:29 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
I claim there is no P such that P(P,y) = P(y) for all y.
(I assume you mean something like P((P,y))=P(y)).
If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for all y.
On Sat, May 10, 2008 at 2:09 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
(I assume you mean something like P((P,y))=P(y)).
If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for all y.
You're right. But we wouldn't say that the
Matt,
You asked, "What would be a good test for understanding an algorithm?"
Thanks for posing this question. It has been a good exercise. Assuming that the key word here is "understanding" rather than "algorithm", I submit:
A test of understanding is if one can give a correct *explanation* for
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
On Sat, May 10, 2008 at 2:09 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
(I assume you mean something like P((P,y))=P(y)).
If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for all y.
--- Steve Richfield [EMAIL PROTECTED] wrote:
On 5/7/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
See http://www.overcomingbias.com/2008/01/newcombs-proble.html
After many postings on this subject, I still assert that
ANY rational AGI would be religious.
Not necessarily. You execute a
On Fri, May 9, 2008 at 2:13 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
A rational agent only has to know that there are some things it cannot
compute. In particular, it cannot understand its own algorithm.
Matt,
(I don't really expect you to give an answer to this question, as you didn't on
- Original Message
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, May 8, 2008 8:29:02 PM
Subject: Re: Newcomb's Paradox (was Re: [agi] Goal Driven Systems and AI Dangers)
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
Matt,
(I don't really expect you
--- Jim Bromer [EMAIL PROTECTED] wrote:
I don't want to get into a quibble fest, but understanding is not
necessarily constrained to prediction.
What would be a good test for understanding an algorithm?
-- Matt Mahoney, [EMAIL PROTECTED]
Matt,
On 5/8/08, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Steve Richfield [EMAIL PROTECTED] wrote:
On 5/7/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
See http://www.overcomingbias.com/2008/01/newcombs-proble.html
After many postings on this subject, I still assert that
ANY
On Fri, May 9, 2008 at 1:51 AM, Jim Bromer [EMAIL PROTECTED] wrote:
I don't want to get into a quibble fest, but understanding is not necessarily constrained to prediction.
Indeed, "understanding" is a fuzzy word that means lots of different things in different contexts. In the context of