Well I finally got a chance to review Push Singh's thesis.

Overall, my feeling is that the high-level cognitive architecture makes sense, but the knowledge representation and learning/reasoning methods proposed are nowhere near adequate to carry out the processes required by the cognitive architecture.

What the architecture seems to consist of is a nice way of organizing some fairly abstract "expert rules" into multiple levels, without any viable mechanism for learning new rules on the various levels. (The lack of a viable learning mechanism ties in with the knowledge representation of course -- they have avoided representations that involve uncertainty, which is easy to do ONLY if you're avoiding learning as well... learning from experience all of a sudden makes uncertainty inevitable...)

-- Ben



----- Original Message ----- From: "Wang, Pei" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Saturday, June 11, 2005 1:32 AM
Subject: [agi] an AGI by Minsky and Singh


FYI:

from http://groups.yahoo.com/group/ai-philosophy/messages

Pei

--- In [EMAIL PROTECTED], Marvin Minsky <[EMAIL PROTECTED]> wrote:


Do you know of any types of architectures that seem more promising?
In your opinion, what types of research might be on the right track?

Sure.  We are building one  with an architecture
based on descriptions in "The Emotion Machine"
entry on my home page.   A 'first draft' of this
architecture is described in Push Singh's PhD
thesis at
http://web.media.mit.edu/~push/push-thesis.html .
The thesis was finished just last month.

The architecture is based  on a set of ideas that
I have been developing since around 1980.  Push
Singh was the only student here who understood
the whole of it, and then went on to further
develop it.  Now we are beginning to program it,
but there are still a good many decisions to be
made.  Our topmost goal is to design it so that
we can later expand it, by including ways to
introduce many different kinds of knowledge and
processes.  One central idea for doing this trick
is not to connect the parts too rigidly, so that
as the system acquires more knowledge it can reflect
on its own performance, and become able to figure out
(and then try to test) how well other mental
arrangements or 'ways to think' will work.

We're looking for others to help with this.
However, one trouble is that the ideas are so
good that it is hard to get adequate funding for
it.
--- End forwarded message ---



-------
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]



