OK, as you wish ... It's just a word. We don't agree on its meaning,
but that's fine. Whether you call it "deep learning", "conceptual
learning", or "Hakuna Matata learning", it's not important. Let's stop
playing with words.
Let's get back to the topic of this thread. If I understand the point
you want to make:
1) You note that deep learning as implemented in industry is not as
intelligent as expected, given the computation power available.
2) Watson seems to be less "narrow" than other implementations.
3) What is missing is "conceptual integration".
Correct me if I'm wrong.
In my humble opinion, there are no intelligent machines simply because
people don't try to make them more intelligent, or more likely don't
know how to.
Implementing "conceptual integration" is certainly an approach that some
researchers have tried, but it has led to no significant results so far.
According to Wikipedia, the theory dates from the 1990s. Twenty years
later, still nothing.
There's no magic behind the deep learning, i.e. the neural networks,
used by Google or Facebook. Very roughly, a neural network is just a
kind of "universal approximator". And computation power alone will not
spontaneously make it more intelligent.
Deep learning has become very popular in recent years because it's
easier to train a neural network to accomplish an image or voice
recognition task (/I've built a small one myself from scratch in a few
days/) than to handcraft the code, and the results are better.
But basically, a neural network is just another kind of programming.
Instead of coding a multitude of operations to achieve a complex task, a
neural network can do it itself by learning from examples.
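To illustrate "learning from examples": here is a minimal sketch, in plain Python, of a tiny network trained by gradient descent on the XOR function (the classic task a single neuron cannot do but a small network can). The architecture, seed, and hyperparameters are arbitrary choices for illustration, not anything from Watson's or Google's systems.

```python
import math
import random

random.seed(0)

# XOR: the target function the network must learn from examples alone.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A 2-4-1 network: weights start random, then backpropagation nudges
# them so the outputs move toward the examples.
H = 4  # hidden units (arbitrary)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
         for ws, b in zip(w1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

initial = loss()
lr = 0.5
for _ in range(5000):
    for x, t in DATA:
        h, y = forward(x)
        # Gradients of the squared error, via the chain rule.
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # uses w2 before update
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

final = loss()
print(initial, final)  # the error shrinks: the network "programs itself"
```

Nobody coded the XOR rule anywhere in this program; it emerges from the examples and the error signal, which is the whole point.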
And the question becomes: how do you teach a neural network what
"conceptual integration" is?
In the ParisTech conference video (/on YouTube, but it's in French
.../), Jerome Pesenti said something else interesting. He cited Fred
Jelinek, an IBM researcher from the 1970s, who said "/Every time I fire
a linguist the performance of the speech recognizer goes up/". Jelinek's
speech recognition team was made up partly of linguists and partly of
engineers. By replacing a linguist, who treats language as a human does,
with an engineer, who does mathematics and statistics on words, the
results improved. This seems to be the philosophy at IBM: work
differently from how a human does, and it seems to give better results.
Instead of playing Jeopardy the human way, Watson applies statistics to
its database (/which was Wikipedia/).
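To make Jelinek's point concrete, here is a toy sketch of the "statistics on words" approach, assuming nothing about IBM's actual system. Instead of encoding any grammar, it just counts which word follows which in a corpus and predicts the most frequent successor; the corpus and names are invented for illustration.

```python
from collections import Counter, defaultdict

# A toy corpus; real systems count over billions of words.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count bigrams: which word follows which, and how often.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict(word):
    """Predict the most frequent next word -- pure counting, no grammar."""
    return successors[word].most_common(1)[0][0]

print(predict("the"))  # prints "cat": it follows "the" most often here
```

No linguistic knowledge went in, yet the prediction is often reasonable; scale the counts up by orders of magnitude and you get the engineer's approach Jelinek preferred over the linguist's.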
What I want to say is that maybe "conceptual integration" is one track
to explore for building AGI. Or maybe the solution will come from
elsewhere.
LAU
On 12/01/2016 10:46, Jim Bromer wrote:
Deep Learning is Deep Machine Learning, and Machine Learning is in no
way limited to Neural Networks. So there is no way that Deep Learning
is going to be forever defined to refer to Machine Learning that uses
Neural Networks (in certain ways). From that point of view I can say
that Watson-Jeopardy probably did use a kind of deep learning.
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/27172223-36de8e6c
Modify Your Subscription: https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com