On 9/5/06, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:

Learning is actually the most difficult / most open-ended aspect of AGI.  Most people
equate machine learning with things like neural networks or support vector machines, but
logic can be a substrate for learning too.  One example is inductive logic programming
(ILP), which can learn Prolog-like rules from examples.  For example, we can present 10
examples of "bottle" to the AGI and it will learn a logical rule that defines
bottles in general.  The logical kind of machine learning is arguably closer to
human learning because it requires far fewer examples than NN training.
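To show the flavor of that kind of learning, here is a toy sketch in Python: it induces a conjunctive "rule" by intersecting the attribute sets of a few positive examples. The "bottle" attributes are made up for illustration, and real ILP systems (e.g. Progol, FOIL) induce relational Horn clauses against background knowledge, which this deliberately skips.

```python
# Toy specific-to-general concept learner: the induced "rule" is the
# set of attribute literals shared by every positive example.
# (Real ILP induces relational clauses with background knowledge;
# this only illustrates learning a definition from few examples.)

def induce_rule(positive_examples):
    """Return the attributes common to all positive examples."""
    rule = set(positive_examples[0])
    for ex in positive_examples[1:]:
        rule &= set(ex)
    return rule

def covers(rule, instance):
    """A conjunctive rule covers an instance if all its literals hold."""
    return rule <= set(instance)

# Hypothetical "bottle" examples, each described by attribute literals.
bottles = [
    {"has_neck", "hollow", "holds_liquid", "glass"},
    {"has_neck", "hollow", "holds_liquid", "plastic"},
    {"has_neck", "hollow", "holds_liquid", "glass", "has_cap"},
]

rule = induce_rule(bottles)
print(rule)  # {"has_neck", "hollow", "holds_liquid"}
print(covers(rule, {"has_neck", "hollow", "holds_liquid", "metal"}))  # True
print(covers(rule, {"hollow", "holds_liquid"}))  # False: no neck
```

Note that three examples were enough to generalize to an unseen (metal) bottle, which is the point about sample efficiency above.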

I do not see the point in allowing only a single learning paradigm. I
believe people learn by different methods, so why shouldn't an AI do
the same? Both supervised learning by induction (which in some sense
both NNs and ILP perform) and trial-and-error learning such as
reinforcement learning are useful methods. Some of the most successful
learning methods have been devised by combining RL and NN (see for
example the work of M. Riedmiller et al, http://www.ni.uos.de/).
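The trial-and-error half of that combination can be sketched in a few lines. Below is minimal tabular Q-learning on a made-up 5-state chain (reach the rightmost state for reward 1); Riedmiller et al.'s fitted Q iteration replaces the table with a neural network, but the bootstrapped update is the same idea. The environment and all parameters here are illustrative, not from their work.

```python
import random

# Tabular Q-learning on a 5-state chain: start at state 0, the goal
# is state 4 (reward 1). Actions step left (-1) or right (+1).
# Behaviour policy is uniform random exploration; Q-learning is
# off-policy, so it still learns the values of the greedy policy.

random.seed(0)
N_STATES, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma = 0.5, 0.9

for _ in range(500):                       # episodes
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)         # explore at random
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        # Bootstrapped temporal-difference update.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy steps right in every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Swapping the Q table for a function approximator (a neural net trained on the same targets) is essentially what makes the RL+NN combination scale beyond toy state spaces.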

/Fredrik
