The conclusion of that debate was that (a) images definitely play a role
in intelligence, and (b) non-imagistic (propositional) entities also
definitely play a role in intelligence, and (c) it is difficult to be
sure whether there are two separate kinds of representation or one kind
that can have two aspects.  In the end, the debate fizzled out because
it degenerated into an argument that could not be settled.

FWIW, in the Novamente architecture we have opted for two separate
kinds of representation, with mechanisms for conversion between the two.
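
For concreteness, here is a toy Python sketch of the general shape I
have in mind -- two representation types plus conversion in both
directions.  Every name here is invented for illustration; this is not
Novamente's actual API, just the flavor of the idea:

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Proposition:
    """Propositional item, e.g. Proposition("above", ("ball", "table"))."""
    predicate: str
    args: tuple

@dataclass
class Scene:
    """Imagistic item: objects located in a simulated 3D space."""
    positions: dict = field(default_factory=dict)  # name -> (x, y, z)

def scene_to_propositions(scene):
    """Conversion, image -> proposition: read relations off the geometry."""
    names = sorted(scene.positions)
    return [Proposition("above", (a, b))
            for a in names for b in names
            if a != b and scene.positions[a][2] > scene.positions[b][2]]

def propositions_to_scene(props):
    """Conversion, proposition -> image: place objects so the facts hold.
    (A trivial placement pass, not a real constraint solver.)"""
    scene = Scene()
    for p in props:
        if p.predicate == "above":
            upper, lower = p.args
            lx, ly, lz = scene.positions.setdefault(lower, (0.0, 0.0, 0.0))
            scene.positions[upper] = (lx, ly, lz + 1.0)
    return scene

# Round trip: a fact becomes geometry, and the geometry yields the fact back.
facts = [Proposition("above", ("ball", "table"))]
assert scene_to_propositions(propositions_to_scene(facts)) == facts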

On this as on many other issues, since we don't really know how the
human mind/brain works, the AGI designer has to make a decision on
other grounds, while using neurosci and cog sci as loose guidance.

At the moment we have not implemented the "internal imagery" component
of NM, but it is enshrined in our design docs and will be implemented
and tested in time, as resources allow...

NM will deal with internal imagery quite differently from the human brain,
by actually running a game engine internally (the same game engine
used in AgiSim, though without a visual front end ... key features of
the engine are collision detection and basic physics).  Internal movies
in the game engine may be described by the system propositionally,
and propositional knowledge may be used to generate images or
movies in the game engine (some of which may be quite abstract).
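
To show the flavor of that, here is a toy stand-in for the internal
engine -- the actual AgiSim engine has real collision detection and
physics, whereas this little Python loop just runs an "internal movie"
of a falling ball and narrates each frame propositionally (again, all
names invented for illustration):

from dataclasses import dataclass

GRAVITY, DT = -9.8, 0.05

@dataclass
class Body:
    name: str
    z: float        # height above the ground plane
    vz: float = 0.0

def step(bodies):
    """One tick of basic physics with ground-plane collision detection."""
    for b in bodies:
        b.vz += GRAVITY * DT
        b.z += b.vz * DT
        if b.z <= 0.0:           # collision: clamp to the ground
            b.z, b.vz = 0.0, 0.0

def describe(bodies):
    """Describe the current frame of the internal movie propositionally."""
    return [("on_ground", b.name) if b.z == 0.0
            else ("falling", b.name, round(b.z, 2)) for b in bodies]

# Run an internal movie: drop a ball from 2m and narrate each frame.
ball = Body("ball", z=2.0)
for frame in range(15):
    step([ball])
    print(frame, describe([ball]))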

My strong suspicion is that the human mind/brain does have (at
least) two different kinds of representation, not just "one w/ two
aspects".
This is largely because it seems way simpler to implement or evolve
a dual representation of this nature ... I don't yet know how to make
a tractably efficient single representation that encompasses both
propositional and imagistic knowledge.

-- Ben G
