On 11/2/06, Eliezer S. Yudkowsky <[EMAIL PROTECTED]> wrote:
> Pei Wang wrote:
>> On 11/2/06, Eric Baum <[EMAIL PROTECTED]> wrote:
>>
>>> Moreover, I argue that language is built on top of a heavy inductive
>>> bias to develop a certain conceptual structure, which then renders the
>>> names of concepts highly salient so that they can be readily
>>> learned. (This explains how we can learn 10 words a day, which
>>> children routinely do.) An AGI might in principle be built on top of
>>> some other conceptual structure, and have great difficulty
>>> comprehending human words -- mapping them onto its concepts, much
>>> less learning them.
>>
>> I think any AGI will need the ability to (1) use mental entities
>> (concepts) to summarize percepts and actions, and (2) use concepts
>> to extend past experience to new situations (reasoning). In this
>> sense, the categorization/learning/reasoning (thinking) mechanisms of
>> different AGIs may be very similar to each other, while the contents
>> of their conceptual structures are very different, due to the
>> differences in their sensors and effectors, as well as environments.

> Pei, I suspect that what Baum is talking about is, metaphorically
> speaking, the problem of an AI that runs on SVD talking to an AI that
> runs on SVM (Singular Value Decomposition vs. Support Vector Machines),
> or the ability of an AI that runs on latent-structure Bayes nets to
> exchange concepts with an AI that runs on decision trees. Different AIs
> may carve up reality along different lines, so that even if they label
> their concepts, it may take considerable extra computing power for one
> of them to learn the other's concepts; the other's concepts may not be
> "natural" to it. They may not be working in the same space of easily
> learnable concepts. Of course these examples are strictly metaphorical,
> but the point is that human concepts may not correspond to anything
> that an AI can *natively* learn and *natively* process.
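
To make the metaphor a bit more concrete, here is a toy sketch (purely
illustrative: the data, the concept, and all the names below are invented
for the example, using scikit-learn's PCA and DecisionTreeClassifier).
Two learners are shown the same 2-D "world"; a concept that is a single
threshold for the SVD-style learner can only be approximated by the
tree-style learner with a staircase of axis-aligned splits:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # A 2-D "world": a Gaussian cloud elongated along a direction oblique
    # to both coordinate axes (rotated 45 degrees).
    rot = np.array([[np.cos(np.pi / 4), -np.sin(np.pi / 4)],
                    [np.sin(np.pi / 4),  np.cos(np.pi / 4)]])
    latent = rng.normal(size=(2000, 2)) * np.array([3.0, 1.0])
    X = latent @ rot.T

    # A concept "natural" to the SVD-style learner: which side of the
    # cloud's long axis a point falls on.
    direction = rot[:, 0]
    concept = (X @ direction > 0).astype(int)

    # Learner A (SVD/PCA-style): one threshold on its first component.
    component = PCA(n_components=1).fit_transform(X)[:, 0]
    acc_a = max(((component > 0).astype(int) == concept).mean(),
                ((component <= 0).astype(int) == concept).mean())
    print(f"SVD-style learner, one threshold: accuracy {acc_a:.3f}")

    # Learner B (tree-style): the same oblique boundary must be
    # approximated by axis-aligned splits; matching A costs extra
    # structure (deeper trees, more leaves).
    for depth in (1, 2, 4, 8):
        tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
        tree.fit(X, concept)
        print(f"tree-style learner, depth {depth}: "
              f"accuracy {tree.score(X, concept):.3f}, "
              f"leaves {tree.get_n_leaves()}")

Both learners can attach a label to "the" concept, but what is one
primitive cut in A's representation is an ever-finer staircase in B's;
that is the sense in which learning the other's concepts takes extra
computing power.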

That is why I tried to distinguish "content" from "mechanism": a robot
with sonar as its only sensor and wheels as its only effectors surely
won't categorize the environment using our concepts. However, I tend to
believe that the relations among such a robot's concepts are more or
less what I call "inheritance", "similarity", and so on, and that its
reasoning rules are not that different from the ones we use.

Can we understand such a language? I'd say "yes, to a certain extent,
though not fully", insofar as there are ways for our experience to be
related to that of the robot.
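
To make this concrete, here is a minimal sketch (illustrative only: the
concept names are invented, and this is not the actual NARS machinery or
its truth-value calculus). Whatever a sonar-and-wheels robot's concepts
are *about*, they can still stand in inheritance and similarity relations
to each other, and the same syllogistic rules operate on them:

    from itertools import product

    # Hypothetical concepts a sonar-and-wheels robot might form; the
    # labels are ours, only the relational structure matters.
    inheritance = {                 # (specific, general): "S is a kind of G"
        ("narrow-gap", "gap"),
        ("gap", "passable-region"),
        ("soft-echo", "absorbing-surface"),
    }
    similarity = {                  # symmetric: roughly co-extensive concepts
        ("absorbing-surface", "quiet-surface"),
    }

    def deduce(inh):
        """Transitive closure of inheritance: from A->B and B->C derive A->C."""
        derived = set(inh)
        changed = True
        while changed:
            changed = False
            for (a, b), (c, d) in product(derived, repeat=2):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
        return derived

    def analogize(inh, sim):
        """Substitute similar terms into inheritance statements (a crude
        stand-in for an analogy rule)."""
        sym = set(sim) | {(b, a) for a, b in sim}
        derived = set()
        for s, p in sym:
            for a, b in inh:
                if a == s:
                    derived.add((p, b))
                if b == s:
                    derived.add((a, p))
        return derived - set(inh)

    print(deduce(inheritance) - inheritance)   # ('narrow-gap', 'passable-region')
    print(analogize(inheritance, similarity))  # ('soft-echo', 'quiet-surface')

The content (which concepts exist, and what they are grounded in) is
nothing like ours; the mechanism (the relations, and the rules defined
over them) need not be different at all.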

> A superintelligence, or a sufficiently self-modifying AI, should not be
> balked by English. A superintelligence should carve up reality into
> sufficiently fine grains that it can learn any concept computable by
> our much smaller minds, unless P != NP and the concepts are genuinely
> encrypted. And a self-modifying AI should be able to natively run
> whatever it likes. Baum, however, may not agree with this point.
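
Here is, I take it, the picture behind the "sufficiently fine grains"
claim (a toy sketch only; the agents, the 64x64 and 8x8 views, and the
function names are invented for illustration): if one agent's percepts
are a computable coarse-graining of another's, then every concept the
coarser agent can compute is automatically available to the finer one,
simply by composition.

    import numpy as np

    def coarse_grain(fine_percept, factor=8):
        """The refinement map: C's 8x8 view is a block average of F's
        64x64 view."""
        n = fine_percept.shape[0]
        return fine_percept.reshape(n // factor, factor,
                                    n // factor, factor).mean(axis=(1, 3))

    def coarse_concept(coarse_percept):
        """Some concept the coarser-grained agent C computes from its view."""
        return coarse_percept.mean() > 0.5    # e.g. "mostly bright scene"

    def fine_agent_concept(fine_percept):
        """Agent F gets C's concept for free by composing with the
        coarse-graining map."""
        return coarse_concept(coarse_grain(fine_percept))

    rng = np.random.default_rng(0)
    scene = rng.random((64, 64))              # F's fine-grained percept
    print(coarse_concept(coarse_grain(scene)), fine_agent_concept(scene))

Everything here hinges on the premise that the two perceptual spaces
stand in such a refinement relation in the first place.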

I'm afraid there are no "sufficiently fine grains" that can serve as
common "atoms" for different sensorimotor systems. Such systems may
categorize the same environment in incompatible ways that cannot be
reduced to a common language of "more detailed" concepts.

Pei

> --
> Eliezer S. Yudkowsky                          http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]
