On 8/13/08, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
>
>
> On 8/13/08, rick the ponderer <[EMAIL PROTECTED]> wrote:
> >
> > Reading this, I get the view of ai as basically neural networks, where
> each individual perceptron could be any of a number of algorithms
> (decision tree, random forest, svm etc).
> > I also get the view that academics such as Hinton are trying to find ways
> of automatically learning the network, whereas there could also be a
> parallel track of "engineering" the network, manually creating it perceptron
> by perceptron, in the way Rodney Brooks advocates "bottom up" subsumption
> architecture.
> >
> > How does OpenCog relate to the above viewpoint? Is there something
> fundamentally flawed in the above as an approach to achieving AGI?
>
> NN *may* be inadequate for AGI, because logic-based learning seems to be,
> at least for some datasets, more efficient than NN learning (that includes
> variants such as SVMs).  This has been my intuition for some time, and
> recently I've found a book that explores this issue in more detail.  See
> Chris Thornton 2000, "Truth from Trash: How Learning Makes Sense", MIT
> Press, or some of his papers on his web site.
>
> To use Thornton's example, he demonstrated that a "checkerboard" pattern
> can be learned easily with logic, but it will drive an NN learner crazy.
>
> It doesn't mean that the NN approach is hopeless, but it faces some
> challenges.  Or maybe this intuition is wrong (i.e., do such heavily
> "logical" datasets occur in real life?).
>
> YKY
> ------------------------------
> *agi* | Archives: https://www.listbox.com/member/archive/303/=now
> RSS Feed: https://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: http://www.listbox.com
>
Thanks for replying, YKY.
Is the logic learning you are talking about inductive logic programming? If
so, isn't ILP basically a search through the space of logic programs (I may
be way off the mark here!)? Wouldn't that be too large a search space to
explore if you're trying to reach AGI?
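(To make the "search through logic programs" picture concrete, here's a toy sketch of my own, not taken from any real ILP system: enumerate a tiny, hand-written hypothesis space of candidate logical rules and score each by how well it covers a checkerboard-style dataset like the one Thornton describes. A real ILP system constructs clauses rather than picking from a fixed menu, but the scoring-by-coverage idea is the same.)

```python
from itertools import product

# Checkerboard-style dataset on a 4x4 grid: the label is True
# exactly when x + y is even (the "checkerboard" parity pattern).
data = [((x, y), (x + y) % 2 == 0) for x, y in product(range(4), repeat=2)]

# A tiny hand-enumerated hypothesis space -- stand-ins for the
# clauses an ILP learner would construct during its search.
hypotheses = {
    "x_even":   lambda x, y: x % 2 == 0,
    "y_even":   lambda x, y: y % 2 == 0,
    "sum_even": lambda x, y: (x + y) % 2 == 0,
    "x_small":  lambda x, y: x < 2,
}

# Score every hypothesis by its coverage of the examples.
def accuracy(h):
    return sum(h(*inp) == label for inp, label in data) / len(data)

best = max(hypotheses, key=lambda name: accuracy(hypotheses[name]))
print(best, accuracy(hypotheses[best]))  # -> sum_even 1.0
```

The point of the toy: the parity rule fits the checkerboard exactly with a one-line logical hypothesis, while a similarity-based learner has no smooth gradient toward it. The worry in the question stands, though: the real space of logic programs is vastly larger than four candidates.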

And if you're determined to learn a symbolic representation, wouldn't
genetic programming be a better choice, since it is less prone to getting
stuck in local minima?
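(For comparison, here is a minimal mutation-only sketch of the genetic-programming flavor of symbolic search -- my own illustration, not any standard GP library: a population of random boolean formula trees over the inputs, selected by fitness on the same checkerboard data. Operator and leaf names are invented for the example.)

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Same checkerboard dataset: label True iff x + y is even.
data = [((x, y), (x + y) % 2 == 0) for x in range(4) for y in range(4)]

OPS = ["and", "or", "xor", "eq"]     # binary boolean operators
LEAVES = ["x_even", "y_even"]        # primitive tests on the inputs

def random_tree(depth=2):
    """Grow a random formula tree up to the given depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x, y):
    if tree == "x_even":
        return x % 2 == 0
    if tree == "y_even":
        return y % 2 == 0
    op, left, right = tree
    a, b = evaluate(left, x, y), evaluate(right, x, y)
    return {"and": a and b, "or": a or b, "xor": a != b, "eq": a == b}[op]

def fitness(tree):
    """Number of examples the formula classifies correctly (max 16)."""
    return sum(evaluate(tree, x, y) == label for (x, y), label in data)

def mutate(tree):
    """Crude subtree mutation: replace the tree or one of its children."""
    if not isinstance(tree, tuple) or random.random() < 0.5:
        return random_tree()
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

# Evolve: keep the fittest half, refill with mutants of survivors.
pop = [random_tree() for _ in range(30)]
for gen in range(20):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(data):
        break
    pop = pop[:15] + [mutate(random.choice(pop[:15])) for _ in range(15)]

print(fitness(pop[0]), pop[0])
```

Note the target concept here is exactly `("eq", "x_even", "y_even")`, which scores a perfect 16 -- so the search can succeed, but whether mutation finds it quickly is a matter of luck, which is part of why I wonder whether GP really escapes the local-minimum problem rather than just trading it for a sampling problem.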

Or would neural networks be better in that case, because they have
mechanisms, as in Geoff Hinton's paper, that improve on random search?

Also, if you did manage to learn a giant logic program that represented AI,
could it be parallelized as easily as a neural network can be (so that it
can run in real time)?


