Richard Loosemore <[EMAIL PROTECTED]> wrote:
> 5) I have looked at your paper and my feelings are exactly the same as
> Mark's: theorems developed on erroneous assumptions are worthless.
Which assumptions are erroneous?
-- Matt Mahoney, [EMAIL PROTECTED]
1. The fact that AIXI^tl is intractable is not relevant to the proof that
compression = intelligence, any more than the fact that AIXI is not computable
is. In fact it is supportive, because it says that both are hard problems, in
agreement with observation.
2. Do not confuse the two compressions.
3. If translating natural language to a structured representation is not
hard, then do it. People have been working on this for 50 years without
success. Doing logical inference is the easy part.
Actually, a more accurate statement would be "Doing individual logical
inference steps is the easy part."
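The point that individual inference steps are mechanically easy can be
sketched with a toy forward chainer. All rule and fact names below are
invented for illustration; this is not anyone's actual system:

```python
# Toy forward chaining over propositional Horn rules.
# A rule is (list_of_premises, conclusion); fire rules until fixpoint.

def forward_chain(facts, rules):
    """Return the closure of `facts` under `rules`."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rule base, purely for illustration.
rules = [
    (["parses(s)", "grounded(s)"], "understood(s)"),
    (["understood(s)"], "answerable(s)"),
]
print(forward_chain(["parses(s)", "grounded(s)"], rules))
```

Each step is a trivial set-membership check; the hard part, as the thread
notes, is getting from natural language to the structured facts in the
first place.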
Mark Waser wrote:
>Are you conceding that you can predict the results of a Google
search?
OK, you are right. You can type the same query twice. Or if you live long
enough you can do it the hard way. But you won't.
>Are you now conceding that it is not true that "Models that are simple
>enough to debug are too simple to scale"?
Matt Mahoney wrote:
Richard, what is your definition of "understanding"? How would you test
whether a person understands art?
Turing offered a behavioral test for intelligence. My understanding of
"understanding" is that it is something that requires intelligence. The
connection between intelligence and compression is not obvious.
> The connection between intelligence and compression is not obvious.
The connection between intelligence and compression *is* obvious -- but
compression, particularly lossless compression, is clearly *NOT*
intelligence.
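The claimed link runs through prediction: any probability model assigns an
ideal code length of -log2 p(x) bits to each symbol, so a better predictor
yields a shorter encoding of the same text. A minimal sketch, with sample
text and model choices made up for illustration:

```python
import math
from collections import Counter, defaultdict

def bits_order0(text):
    # Ideal code length under an order-0 character model fit to the text.
    counts = Counter(text)
    n = len(text)
    return sum(-math.log2(counts[c] / n) for c in text)

def bits_order1(text):
    # Ideal code length under an order-1 (bigram) model; the first
    # character is coded with the order-0 probabilities.
    counts = Counter(text)
    pair = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        pair[a][b] += 1
    bits = -math.log2(counts[text[0]] / len(text))
    for a, b in zip(text, text[1:]):
        bits += -math.log2(pair[a][b] / sum(pair[a].values()))
    return bits

text = "the cat sat on the mat and the cat sat again " * 4
print(bits_order0(text), bits_order1(text))  # order-1 is much smaller
```

Whether achieving short code lengths *is* intelligence, or merely requires
it, is exactly what the thread disputes; the sketch only shows the
prediction-compression equivalence itself.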
Intelligence compresses knowledge to ever simpler rules because that is a
It keeps a copy of the searchable part of the Internet in RAM
Sometimes I wonder why I argue with you when you throw around statements
like this that are so massively incorrect. Would you care to retract
this?
You could, in principle, model the Google server in a more powerful
machine an
You're drifting off topic . . . . Let me remind you of the flow of the
conversation.
You said:
Models that are simple enough to debug are too simple to scale.
The contents of a knowledge base for AGI will be beyond our ability to
comprehend.
I said:
>>> Given sufficient time, anything should be able to be understood and
>>> debugged.
Matt Mahoney wrote:
Richard Loosemore <[EMAIL PROTECTED]> wrote:
"Understanding" 10^9 bits of information is not the same as storing 10^9
bits of information.
That is true. "Understanding" n bits is the same as compressing some larger
training set that has an algorithmic complexity of n bits
Sorry if I did not make clear the distinction between knowing the learning
algorithm for AGI (which we can do) and knowing what was learned (which we
can't).
My point about Google is to illustrate that distinction. The Google database
is about 10^14 bits. (It keeps a copy of the searchable part of the
Internet in RAM.)
1. No can do. The algorithmic complexity of parsing natural language as well
as an average adult human is around 10^9 bits. There is no "small" grammar for
English.
2. You need semantics to parse natural language. This is part of what makes it
hard. Or do you want a parser that gives you wr
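The point that parsing needs semantics can be seen in PP-attachment
ambiguity: syntax alone licenses multiple parses and only world knowledge
picks one. A toy CYK chart that counts derivations under a small invented
CNF grammar (grammar and sentence chosen purely for illustration):

```python
from collections import defaultdict

# Toy CNF grammar with the classic ambiguity: did I use the telescope,
# or did the man have it?
unary = {          # terminal -> nonterminals
    "I": {"NP"}, "saw": {"V"}, "the": {"Det"},
    "man": {"N"}, "telescope": {"N"}, "with": {"P"},
}
binary = [         # A -> B C
    ("S", "NP", "VP"), ("VP", "V", "NP"), ("VP", "VP", "PP"),
    ("NP", "NP", "PP"), ("NP", "Det", "N"), ("PP", "P", "NP"),
]

def count_parses(words):
    """CYK chart that counts distinct derivations of S for the sentence."""
    n = len(words)
    chart = defaultdict(int)  # (i, j, A) -> number of derivations
    for i, w in enumerate(words):
        for a in unary[w]:
            chart[(i, i + 1, a)] += 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for a, b, c in binary:
                    chart[(i, j, a)] += chart[(i, k, b)] * chart[(k, j, c)]
    return chart[(0, n, "S")]

print(count_parses("I saw the man with the telescope".split()))  # 2 parses
```

A grammar-only parser returns both derivations; deciding which one the
speaker meant is the semantic part, and that, not the chart mechanics, is
what has resisted 50 years of work.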
Matt,
I would also note that you continue not to understand the difference between
knowledge and data, and I contend that your 10^9 number is both entirely
spurious and incorrect besides. I've read many times 1,000 books. I retain
the vast majority of the *knowledge* in those books. I can't re
Mark Waser wrote:
Given sufficient time, anything should be able to be understood and
debugged.
Give me *one* counter-example to the above . . . .
Matt Mahoney replied:
Google. You cannot predict the results of a search. It does not help
that you have full access to the Internet. It
Matt Mahoney wrote:
I will try to answer several posts here. I said that the knowledge
base of an AGI must be opaque because it has 10^9 bits of information,
which is more than a person can comprehend. By opaque, I mean that you
can't do any better by examining or modifying the internal
representation.
Several things:
1. Someone suggested these parsers to me:
Eugene Charniak's
http://www.cog.brown.edu/Research/nlp/resources.html
Dan Bikel's
http://www.cis.upenn.edu/~dbikel/software.html
Demos for both are at:
http://lfg-demo.computing.dcu.ie/lfgparser.html
It seems that they are similar in