Matt Mahoney wrote:
I will try to answer several posts here. I said that the knowledge
base of an AGI must be opaque because it has 10^9 bits of information,
which is more than a person can comprehend. By opaque, I mean that you
can't do any better by examining or modifying the internal
representation than you could by examining or modifying the training
data. For a text-based AI with natural language ability, the 10^9 bits
of training data would be about a gigabyte of text, about 1000 books. Of
course you can sample it, add to it, edit it, search it, run various
tests on it, and so on. What you can't do is read, write, or know all of
it. There is no internal representation that you could convert it to
that would allow you to do these things, because you still have 10^9
bits of information. It is a limitation of the human brain that it can't
store more information than this.
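A quick back-of-envelope check of those figures, as a rough Python sketch; the ~1 bit of information per character of English text and the ~1 MB of plain text per book are assumed round numbers, not stated in the post itself:

# Back-of-envelope check of the quoted figures.
# Assumptions (round numbers, not from the post): English text carries
# roughly 1 bit of information per character, and a typical book is
# roughly 10**6 characters, i.e. about 1 MB of plain text.

bits_of_information = 10**9
bits_per_char = 1            # Shannon-style entropy estimate for English
chars_per_book = 10**6       # ~1 MB of plain text per book

chars = bits_of_information // bits_per_char   # ~10^9 characters, ~1 GB of text
books = chars // chars_per_book                # ~1000 books

print(f"~{chars:.0e} characters of text, roughly {books} books")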

"Understanding" 10^9 bits of information is not the same as storing 10^9 bits of information.

A typical painting in the Louvre might be 1 meter on a side. At roughly 16 pixels per millimeter and a perceivable color depth of about 20 bits per pixel, that is 16,000 x 16,000 pixels x 20 bits, or roughly 5 x 10^9 bits. If an art specialist knew all about, say, 1000 paintings in the Louvre, that specialist would "understand" a total of roughly 5 x 10^12 bits.
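For concreteness, the same estimate as a quick Python sketch (the 1 m side, 16 pixels per millimeter, and 20-bit color depth are the assumed figures from the paragraph above):

# Rough information content of a large painting, using the figures above.
side_mm = 1000                 # painting ~1 m on a side
pixels_per_mm = 16             # assumed resolution of close inspection
bits_per_pixel = 20            # assumed perceivable color depth

pixels = (side_mm * pixels_per_mm) ** 2        # 16,000 x 16,000, ~2.6e8 pixels
bits_per_painting = pixels * bits_per_pixel    # ~5e9 bits

paintings = 1000
total_bits = bits_per_painting * paintings     # ~5e12 bits

print(f"~{bits_per_painting:.1e} bits per painting, ~{total_bits:.1e} bits for {paintings} paintings")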

You might be inclined to say that not all of those bits count, that many of them are redundant as far as "understanding" is concerned.

Exactly.

By that measure, people can easily "comprehend" 10^9 bits. It makes no sense to argue about degrees of comprehension by quoting raw bit counts.


Richard Loosemore
