YKY made some points about the conflicts that exist between different AGI theorists ...

So, the way I see it, the question is how to reconcile different ways of doing things so that we can work together and achieve our common goal more effectively.
 
Since there is no unique solution to the AGI problem, and each of us likely has near-optimal solutions in some domains and not-so-optimal solutions in others (and probably no one has THE optimal solution), perhaps we can make some compromising, eclectic arrangements.
 
I know that doing this would involve some pain.  It's practically a tautology that everyone thinks his/her solution is the best one (otherwise s/he would have changed it already).
 
How about this: we make a list of the conflict issues and try to resolve them through a mix of decision-making among the different parties.  It doesn't have to follow rigid rules.


Along these lines, one frustrating thing is that it's often hard to tell how different the ideas proposed by different theorists really are, because different people use different formalisms and different informal vocabularies to discuss them.

For example, I recently went through the exercise of matching up Novamente's goal/action subsystem with the goal/action subsystem of Stan Franklin's LIDA system, point by point.  Significant overlaps and significant differences were revealed, but the main point I want to make is that this exercise was a big pain.  Most of the work consisted of figuring out what Stan actually means by the various terms he uses in discussing his system, such as "attention codelet", "skybox", "stable coalition of processes", etc.  Really mapping the relations between his architecture and Novamente required me to dig very deep into his architecture -- deeper than I've dug into most competing AGI architectures, frankly (with the exception of NARS).

Relatedly, I recall that when Pei Wang and I talked about AGI in the late 1990s, when we used to work together, we often spent the first third of a conversation just arriving at a clear mutual understanding of the terms we were trying to use.

This may seem a trivial point, but I actually feel that in the current AGI/cog-sci literature there is a lack of agreement on the usage of very basic terms such as (to name just a few):
-- symbol grounding
-- logic
-- emergence
-- perception

In order to make comparisons between AGI systems easier, and also to make discussion and analysis of individual AGI systems easier, it might be useful to have some kind of general ontology of cognitive processes and their properties.  In genetics we have the Gene Ontology,

http://www.geneontology.org/

which is a standard ontology of biological processes, molecular functions and cellular components.  It would perhaps be worthwhile to develop a Mind Ontology, along vaguely similar lines.
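
To make this a bit more concrete, here is a very rough sketch of what the top two levels of such a Mind Ontology might look like, written as a small Python program.  To be clear, every category name below is a placeholder I'm inventing for illustration, not a proposal; the real top level would need to be hashed out collaboratively.  The split into three root categories is likewise just one possible design, borrowed loosely from the Gene Ontology's division into biological processes, molecular functions and cellular components.

# Illustrative sketch only: a hypothetical top level for a Mind Ontology,
# loosely mirroring the Gene Ontology's three root categories.  All of the
# category names below are placeholders, not a proposed standard.

mind_ontology = {
    "cognitive process": [
        "perception",
        "attention",
        "memory",
        "learning",
        "reasoning",
        "goal and action selection",
    ],
    "cognitive process property": [
        "symbolic vs. subsymbolic",
        "emergent vs. explicitly engineered",
        "grounded vs. ungrounded",
    ],
    "architectural component": [
        "knowledge representation",
        "control mechanism",
        "environment interface",
    ],
}

# Print the hierarchy as an indented outline, roughly the way it might
# appear on a wiki page.
for root, children in mind_ontology.items():
    print(root)
    for child in children:
        print("    " + child)

The point of even this toy version is that once terms live in a shared hierarchy, a claim like "my system handles attention via emergence" can be pinned to specific nodes rather than left floating in each theorist's private vocabulary.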

I suggest that the AGIRI wiki would be a decent place for such an ontology to sit (and thus, implicitly, that a wiki would be a good tool to use for building such an ontology):

http://www.agiri.org/wiki/index.php/Main_Page

The idea of building such an ontology interests me a lot.  I don't think I'll have a lot of time to contribute to it in the near future, but I am interested enough to volunteer to get the ball rolling sometime in the next few weeks.  Look for a post within the next month indicating that the very top level of a Mind Ontology has been posted to the AGIRI wiki ;-) ... My hope is that this can serve as a medium for collaboratively arriving at a better understanding of AGI, by introducing a "normalized, controlled" vocabulary for discussing AGI concepts.

Comments?  Suggestions?

-- Ben
