Matt,

I would also note that you continue not to understand the difference between knowledge and data, and I contend that your 10^9 number is both entirely spurious and incorrect besides. I've read many multiples of 1,000 books. I retain the vast majority of the *knowledge* in those books. I can't reproduce those books word for word from memory, but that's not what intelligence is about AT ALL.

You wrote:
> It doesn't matter if you agree with the number 10^9 or not. Whatever the number, either the AGI stores less information than the brain, in which case it is not AGI, or it stores more, in which case you can't know everything it does.

Information storage also has absolutely nothing to do with AGI (other than the fact that there is probably a minimum below which an AGI can't fit). I know that my brain has far more information than is necessary for AGI (so the first part of your last statement is wrong). Further, I don't need to store everything that you know -- particularly if I have access to outside resources. My brain doesn't store all of the information in a phone book, yet, effectively, I have full use of all of that information. Similarly, an AGI doesn't need to store 100% of the information that it uses. It simply needs to know where to find that information when it's needed and how to use it.
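To make that concrete, here is a minimal sketch of the lookup-on-demand idea -- purely illustrative, with made-up names, not a proposal for how an AGI would actually be built:

# Minimal sketch (hypothetical names throughout): an agent that stores
# *where* knowledge lives rather than the knowledge itself, and fetches
# the content only when a question actually needs it.

class ExternalSource:
    """Stands in for a phone book, the web, a database, etc."""
    def __init__(self, records):
        self._records = records            # the bulk of the data lives out here

    def lookup(self, key):
        return self._records.get(key)

class Agent:
    def __init__(self):
        self.index = {}                    # topic -> source; a tiny fraction of the data

    def learn_where(self, topic, source):
        self.index[topic] = source         # remember only the pointer

    def answer(self, topic, key):
        source = self.index.get(topic)
        if source is None:
            return None                    # doesn't even know where to look
        return source.lookup(key)          # retrieved on demand, never stored

phone_book = ExternalSource({"Alice": "555-0100", "Bob": "555-0199"})
agent = Agent()
agent.learn_where("phone numbers", phone_book)
print(agent.answer("phone numbers", "Alice"))   # -> 555-0100, never held in the agent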

----- Original Message ----- From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Tuesday, November 14, 2006 10:34 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


I will try to answer several posts here. I said that the knowledge base of an AGI must be opaque because it has 10^9 bits of information, which is more than a person can comprehend. By opaque, I mean that you can't do any better by examining or modifying the internal representation than you could by examining or modifying the training data. For a text-based AI with natural language ability, the 10^9 bits of training data would be about a gigabyte of text, about 1000 books. Of course you can sample it, add to it, edit it, search it, run various tests on it, and so on. What you can't do is read, write, or know all of it. There is no internal representation that you could convert it to that would allow you to do these things, because you would still have 10^9 bits of information. It is a limitation of the human brain that it can't store more information than this.
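As a rough check of those figures (the ~1 bit of information per character of English text and the ~1 MB of plain text per book used below are assumptions added for the arithmetic, not claims from the paragraph above):

# Back-of-envelope check of the 10^9-bit figure.
bits          = 10**9
bits_per_char = 1.0                    # rough Shannon-style estimate for English (assumption)
chars         = bits / bits_per_char   # ~1e9 characters
megabytes     = chars / 1e6            # 1 character ~ 1 byte of raw text
books         = megabytes / 1.0        # ~1 MB of text per book (assumption)
print(megabytes)                       # ~1000 MB, i.e. about a gigabyte of text
print(books)                           # ~1000 books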

It doesn't matter if you agree with the number 10^9 or not. Whatever the number, either the AGI stores less information than the brain, in which case it is not AGI, or it stores more, in which case you can't know everything it does.


Mark Waser wrote:

I certainly don't buy the "mystical" approach that says that sufficiently large neural nets will come up with sufficiently complex discoveries that we can't understand.



James Ratcliff wrote:

Having looked at the neural network type AI algorithms, I don't see any fathomable way that that type of architecture could create a full AGI by itself.



Nobody has created an AGI yet. Currently the only working model of intelligence we have is based on neural networks. Just because we can't understand it doesn't mean it is wrong.

James Ratcliff wrote:

Also it is a critical task for expert systems to explain why they are doing what they are doing, and for business applications, I for one am not going to blindly trust what the AI says without a little background.

I expect this ability to be part of a natural language model. However, any explanation will be based on the language model, not the internal workings of the knowledge representation. That remains opaque. For example:

Q: Why did you turn left here?
A: Because I need gas.

There is no need to explain that there is an opening in the traffic, that you can see a place where you can turn left without going off the road, that the gas gauge reads "E", and that you learned that turning the steering wheel counterclockwise makes the car turn left, even though all of this is part of the thought process. The language model is responsible for knowing that you already know this. There is no need either (or even the ability) to explain the sequence of neuron firings from your eyes to your arm muscles.
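A toy sketch of that separation, with entirely made-up names and state (this is an illustration of the idea, not anyone's actual architecture): the explanation comes from a model of what the listener needs to hear, while the low-level state that actually produced the action stays hidden.

internal_state = {
    "gas_gauge": "E",
    "gap_in_traffic": True,
    "road_has_left_turn": True,
    "wheel_ccw_turns_left": True,
    "goal": "gas",
}

# Things the language model assumes the listener already knows
# and so never bothers to mention.
shared_background = {"gap_in_traffic", "road_has_left_turn",
                     "wheel_ccw_turns_left"}

def explain(state, background):
    """Report only the goal-level reason, not the internal machinery."""
    novel = {k: v for k, v in state.items() if k not in background}
    return "Because I need %s." % novel["goal"]

print(explain(internal_state, shared_background))   # Because I need gas.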

and this is one of the requirements for the Project Halo contest (whose systems took and passed the AP chemistry exam)
http://www.projecthalo.com/halotempl.asp?cid=30

This is a perfect example of why a transparent KR does not scale. The expert system described was coded from 70 pages of a chemistry textbook in 28 person-months. Assuming 1K bits per page, this is a rate of 4 minutes per bit, or 2500 times slower than transmitting the same knowledge as natural language.
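A rough check of that arithmetic (the ~160 working hours per person-month and the ~10 bits per second for absorbing natural language are assumptions added here, not figures from the paragraph above):

# Back-of-envelope check of the coding-rate figures.
total_bits = 70 * 1000                 # 70 pages * 1K bits per page = 70,000 bits
minutes    = 28 * 160 * 60             # 28 person-months at ~160 hours/month (assumption)
print(minutes / float(total_bits))     # ~3.8 minutes per bit

coded_secs_per_bit   = minutes * 60 / float(total_bits)   # ~230 seconds per bit
reading_secs_per_bit = 1 / 10.0        # ~10 bits/second for natural language (assumption)
print(coded_secs_per_bit / reading_secs_per_bit)          # ~2300, on the order of the 2500x above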

Mark Waser wrote:
Given sufficient time, anything should be able to be understood and debugged.
...
    Give me *one* counter-example to the above . . . .


Google. You cannot predict the results of a search. It does not help that you have full access to the Internet. It would not help even if Google gave you full access to their server.

When we build AGI, we will understand it the way we understand Google. We know how a search engine works. We will understand how learning works. But we will not be able to predict or control what we build, even if we poke inside.

-- Matt Mahoney, [EMAIL PROTECTED]





-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

