rooftop8000 wrote:
Hi, I've been thinking for a bit about how a big collaborative AI project
could work. I browsed the archives and I see you guys have similar
ideas...

I'd love to see someone build a system that is capable of adding any
kind of AI algorithm/idea to. It should unite the power of all existing
different flavors: neural nets, logical systems, etc

The Novamente core system actually does fit this description, but

1)
the API is in some places inelegant, though we have specific
plans for improving it

2)
it's C++, which some folks don't like

3)
it currently only runs on Unix systems, though a Windows port
will likely be made during the next month, as it happens

4)
it is proprietary


If there were use for such a thing, I would consider open-sourcing
the Novamente core system, separate from the specific learning modules
we have created to go with it.  I would only do so, though, after the
inelegancies mentioned above (point 1) are resolved.

My own view these days is that a wild combination of agents is
probably not the right approach, in terms of building AGI.
Novamente consists of a set of agents that have been very carefully
sculpted to work together in such a way as to (when fully implemented
and tuned) give rise to the right overall emergent structures.

The Webmind system I was involved with in the late '90s was more
of a heterogeneous agents architecture, but through that experience
I became convinced that such an approach, while workable in principle,
has too much potential to lead to massive-dimensional parameter-
tuning nightmares...

This gets into my biggest dispute w/Minsky (and Push Singh): they
really think intelligence is just about hooking together a sufficiently
powerful community of agents/critics/resources/whatever, whereas
I think it's about hooking together a community of learning algorithms
that is specifically configured to give rise to the right emergent
structures/dynamics.
Minsky is not big on emergence, and I don't
feel he understands the real nature of "self" very well.  He tends to
look at self as "just another aspect of the system" whereas I look at it
as a high-level emergent pattern that comes about holistically in a
system when the parts are configured to work together properly.

Relatedly, I don't think he understands the combined distributed/
localized nature of knowledge representation.  Even if a certain
faculty or piece of knowledge X is associated with some localized
agent or memory store, you should view that localized element
as a kind of "key" for accessing the global, system-wide activation
pattern associated with X.  Thus, in thinking about each local
part of your AGI system, you need to think about its impact
on the collective, self-organizing dynamics of the whole.
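To make the "local key, global pattern" idea concrete, here is a toy sketch (not Novamente code; every name, weight, and parameter below is invented for illustration). A concept is stored at one localized node, but activating that node evokes a system-wide activation pattern via spreading activation over weighted associative links:

```python
def spread_activation(graph, seed, decay=0.5, threshold=0.01):
    """Propagate activation from a local 'key' node through weighted links,
    returning the system-wide activation pattern that the key evokes.

    graph: dict mapping node -> list of (neighbor, link_weight) pairs.
    decay / threshold: illustrative parameters damping the spread.
    """
    activation = {seed: 1.0}
    frontier = [seed]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in graph.get(node, []):
            delta = activation[node] * weight * decay
            if delta > threshold:  # damping keeps the spread finite
                activation[neighbor] = activation.get(neighbor, 0.0) + delta
                frontier.append(neighbor)
    return activation

# A tiny associative network: "cat" lives at one localized node, yet
# touching it lights up a distributed pattern over related concepts.
toy_graph = {
    "cat":    [("mammal", 0.9), ("pet", 0.8)],
    "mammal": [("animal", 0.9)],
    "pet":    [("animal", 0.5)],
}

pattern = spread_activation(toy_graph, "cat")
```

In this sketch the local node is only an entry point; the "meaning" of the concept is the whole activation pattern, which is why changing any local part alters the collective dynamics.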

But when you think this way, an AGI starts to seem less like
a heterogeneous madhouse of diverse learning agents and more like
something particularly structured ... even though it may
still live within an agents architecture that has general potential.

-- Ben G


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
