-- Assume there will be persistent objects in the 3D space

This is not innate.  Babies don't recognize that an object still exists
when it is hidden from view.

I'm extremely familiar with the literature on object permanence, and the
truth seems to be that babies **do** have some innate predisposition that
makes it easier for them to learn object permanence, even though they also
do need to learn it.

However, in a relatively stripped-down perceptual environment, Novamente
learns object permanence without
any need for an in-built heuristic bias.  But such biases may be more
necessary in the context of a richer, noisier
perceptual environment.
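
To make that concrete, here is a minimal toy sketch (my own illustration,
not Novamente's actual mechanism): in a noiseless 1-D world, a learner that
tests the hypothesis "a vanished object persists and keeps moving" finds it
confirmed on every reappearance, so object permanence is easy to induce
from the data alone.

```python
import random

def trial():
    """One pass of an object crossing an occluded region of a 1-D world."""
    vel = random.choice([1, 2])        # constant velocity, unknown to learner
    pos, last_seen, belief = 0, 0, None
    occluder = set(range(5, 9))        # positions hidden from view
    for _ in range(12):
        pos += vel
        if belief is not None:
            belief += vel              # persistence hypothesis: keep extrapolating
        if pos in occluder:
            if belief is None:         # object just vanished: start extrapolating
                belief = last_seen + vel
        elif belief is not None:       # object reappeared: was the prediction right?
            return belief == pos
        else:
            last_seen = pos
    return None                        # object never reappeared this trial

outcomes = [r for r in (trial() for _ in range(1000)) if r is not None]
print(f"persistence hypothesis confirmed on {sum(outcomes)/len(outcomes):.0%} "
      f"of reappearances")
```

Here the confirmation rate is 100% because the toy world is noiseless; the
interesting question is how much noise and clutter it takes before a learner
without a persistence prior stops finding the pattern.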

My point is that inductive biases might be very complex.  There is no simple
list.  Psychologists have been studying this for years.

Yes, I know....  However, a rough, heuristic list can still be interesting
in terms of the perspective it casts on AGI designs.  I'm not suggesting
this as a principle for motivating AGI designs, just as a helpful
perspective for looking at them once they exist.  In terms of Novamente, I
find this an interesting perspective from which to think about the
system....

My point about bees
knowing how to build hives is that a great deal of knowledge can be encoded in
DNA.  Bees cannot learn.  Humans can.  Humans aren't born knowing how to build
houses, but they are born knowing how to learn a complex set of skills needed
to build houses.  This is a harder problem.  There is no simple enumeration of
what can or can't be learned.  For example, we know that humans can learn
languages that have certain properties but not others, but there is currently
no simple test that we can apply to an arbitrary language to determine if a
human could learn it or not.

Yes, but there are still interesting and general questions about our inbuilt
inductive biases for language learning: for instance, do we have an
inductive bias toward recognizing recursive structures in general?

If so, this could help explain how we pick up recursive phrase structure
grammar so quickly...
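
As a toy illustration of what a recursion bias buys you (my own example,
not a claim about the actual grammar-induction machinery): the pattern
a^n b^n, the skeleton of center-embedded clauses like "the rat the cat
chased ran", is captured by a single recursive rule, while no finite list
of flat templates covers it.

```python
def parse(s: str) -> bool:
    """Recognize the recursive grammar S -> 'a' S 'b' | '' (i.e. a^n b^n).
    One self-calling rule suffices; a learner limited to flat patterns
    would need a separate template for every embedding depth."""
    if s == "":
        return True
    return s[0] == "a" and s[-1] == "b" and parse(s[1:-1])

for example in ["ab", "aabb", "aaabbb", "aab", "ba"]:
    print(example, parse(example))
```

A learner whose hypothesis space includes rules that call themselves can
represent this in a few bits and generalizes to unseen depths for free; one
restricted to flat patterns never does.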

Thoroughly explaining human psychology is a different problem than
extracting useful concepts from
human psychology to help guide AGI design.

AGI might still be harder than we think.  It has happened before.

Of course it might be -- we can't know for sure till the task of building an AGI is successfully done.

Maybe we can.  We tried the other approach in the late 1950s and now we know
why it failed.  I don't mean to be pessimistic, but we at least ought to
estimate the difficulty of the problem.  How much information do we need, in
bits?  The inductive bias (encoded in DNA) is almost certainly less than the
learned knowledge (encoded in synapses), but still it is a big number.  We
need a map to guide us to those problems for which success is likely.
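
Putting rough numbers on that (order-of-magnitude figures only: ~3 billion
base pairs at 2 bits each, and a commonly cited ~10^14 synapses at, say,
1 bit apiece as a crude floor):

```python
# Back-of-envelope arithmetic; every figure is an order-of-magnitude
# estimate, and redundancy on both sides is ignored.
GENOME_BASE_PAIRS = 3e9       # human genome, ~3 billion bp
BITS_PER_BASE     = 2         # 4 nucleotides -> 2 bits each
SYNAPSES          = 1e14      # common estimate (10^14 to 10^15)
BITS_PER_SYNAPSE  = 1         # crude lower bound on stored information

dna_bits     = GENOME_BASE_PAIRS * BITS_PER_BASE   # ~6e9 bits, under 1 GB
synapse_bits = SYNAPSES * BITS_PER_SYNAPSE         # ~1e14 bits, ~10 TB

print(f"inductive-bias upper bound (DNA):   {dna_bits:.0e} bits")
print(f"learned-knowledge estimate (brain): {synapse_bits:.0e} bits")
print(f"ratio: roughly {synapse_bits / dna_bits:.0e}x")
```

Even the smaller number, a few gigabits, is enormous if it has to be
hand-engineered rather than learned or extracted.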

I strongly disagree that the human genome, proteome, etc. need to be decoded
in order to carry out AGI design.  Human emulation is only one approach to
AGI.

Such a map might tell us, for example, that the most promising approach is
to extract the inductive bias from DNA analysis, or from dissections of the
brain, or from animal experiments, rather than trying to figure it out from
scratch.  It might tell us that some problems are easier than others, e.g.
it might be easier to build an AGI to manage a corporation than an AGI that
can distinguish good music from bad.

The whole idea of AGI as opposed to AI is that the software systems should
have at least the same level of generality-of-intelligence as a human.

We are not yet at the point where we can model the brains of insects.  We
need to consider the possibility that human brains are more complex.

Yes, I agree that human brains are far more complex than insect brains.

However, I am not trying to create an AGI by emulating the human brain,
nor an insect brain, so this
observation is really not all that pertinent.  I am creating a
**different**, complex cognitive system, not
a mock brain.

-- Ben



-- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


