http://www.ibm.com/blogs/think/2016/01/14/the-next-grand-challenge-computers-that-converse-like-people/

Jim Bromer

On Thu, Jan 14, 2016 at 10:20 AM, Jim Bromer <[email protected]> wrote:

> I watched the Pesenti presentation on YouTube a few days ago.
>
> Neural networks can learn, but they cannot use that learning efficiently in
> many important ways. Discrete AI can acquire more specific
> (discrete) 'objects' as it learns. So back in the 90s people started
> using hybrids that combined neural networks with discrete methods. Machine
> learning includes advances on hybrid methods.
>
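As a hedged illustration of the hybrid idea above (my sketch, not any specific 90s system): a "neural" component produces fuzzy scores, and a discrete component turns the winning label into symbolic relations that further rules could operate on. The weights and the tiny knowledge network are hypothetical.

```python
# Hybrid sketch: neural-style scoring feeding a discrete relational lookup.
# All weights and facts below are invented for illustration.

# Stand-in for a trained network: maps a feature vector to class scores.
def neural_scores(features):
    # Hypothetical fixed weights for two classes, "cat" and "dog".
    weights = {"cat": [0.9, 0.1], "dog": [0.2, 0.8]}
    return {label: sum(w * f for w, f in zip(ws, features))
            for label, ws in weights.items()}

# Discrete side: a tiny network of relations between concept 'objects'.
knowledge = {
    "cat": {"is_a": "mammal", "sound": "meow"},
    "dog": {"is_a": "mammal", "sound": "bark"},
}

def classify_and_reason(features):
    scores = neural_scores(features)
    label = max(scores, key=scores.get)   # neural (fuzzy) decision
    facts = knowledge[label]              # discrete lookup on the label
    return label, facts["is_a"], facts["sound"]

print(classify_and_reason([1.0, 0.0]))  # ('cat', 'mammal', 'meow')
```

The point of the split is that the network handles the fuzzy mapping from raw features to a label, while the discrete side holds relations that can be queried and composed symbolically.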
> Most discrete methods are built around networks of relations between the
> data objects which represent 'concepts' or 'ideas' or 'knowledge' or 'know
> how', or whatever you want to call the data objects that would be
> used to hold knowledge in a (more) discrete AI program. So a
> contemporary discrete AI program is also going to be an implementation of a
> network. The network may include numerical values, but even if it doesn't
> it will probably represent categories of association. That definition is
> not meant to be complete because I am only trying to get an idea across:
> modern discrete AI methods involve network methods that can potentially be
> seen as representations of 'thought' more sophisticated than
> neural networks. That makes sense.
>
> Pesenti was talking about an IBM researcher from the 70s who found that
> he could use statistical methods to *learn* about speech without a
> linguist. That would be a form of machine learning. Therefore it is fairly
> safe for me to conclude that Watson used machine learning in what Watson
> researchers called "Deep NLP".
>
> My question was why there haven't been clear advances in search engine
> technology in the 2 years since Deep Learning and Watson made very
> obvious advances in AI. I did an image search for "cats" on Google and it
> was very good. I only found one dog (a small dog which had been
> photoshopped with multiple legs, somewhat like a caterpillar). I tried some
> other image searches and the results were also very good. The results were
> really amazing. So there have been some advances on image searches in the
> past 2 years. The search for "castles on the moon" did not distinguish
> castles pictured as being on the moon from castles with the moon in
> the scene. So even though I am nit-picking to some extent, the point is
> that it looks like you have to train a deep learning neural network with a
> narrow training sample in order to teach it to recognize something that
> would require a little thinking outside the box. That was also a problem
> with Watson. Its Deep NLP could be trained with all the questions from past
> Jeopardy shows (and Jeopardy-style questions that researchers could create),
> but can it be trained to handle juxtapositions of linguistic 'concepts'
> that might require some thinking outside of the box? (Incidentally, I tried
> "cat in a box" and Google did very well. But when I tried "full stadium" it
> did include pictures of stadiums that were not full. I could spot them as
> I was paging quickly through the images.) So I guess there have been
> some significant advances in the past 2 years. They just do not include
> using language to refine your searches.
>
> My idea of Concept Integration is that different concepts cannot always be
> merged, as in a neural network for example, because as more concepts
> are integrated the requirements of a part of the conceptual integration may
> change. To restate that another way, the integration of a number of
> concepts will typically change if additional concepts are integrated with
> them. This is what would happen if you tried to refine your search using
> conversation.
>
> Jim Bromer
>
>
> On Tue, Jan 12, 2016 at 10:21 PM, LAU <> wrote:
>
>> OK, as you wish ... It's just a word. We do not agree on its meaning,
>> but it's OK. Whether you call it "deep learning", or "conceptual
>> learning" ... or "Hakuna Matata learning", it's not important. Let's stop
>> playing with words.
>>
>> Let's get back to the topic of this thread. If I understand what you want
>> to promote:
>> 1) You note that deep learning as implemented in industry is not as
>> intelligent as expected, taking into account the computation power available.
>> 2) Watson seems to be less "narrow" than other implementations.
>> 3) What it is missing is "conceptual integration".
>> Correct me if I'm wrong.
>>
>>
>> In my humble opinion, there are no intelligent machines simply because
>> people don't try to, or more likely don't figure out how to, make them
>> more intelligent.
>> Implementing "conceptual integration" is certainly a way that some
>> researchers have tried, but it has led to no significant results so far.
>> If I look at Wikipedia, the theory dates from the 1990s. Twenty years
>> later, still nothing.
>>
>> There's no magic behind deep learning, I mean the neural networks used by
>> Google or Facebook. Very roughly, it's just a kind of "universal
>> approximator". And it's not computation power that will make it
>> spontaneously more intelligent.
>> Deep learning has become very popular in recent years because it's easier
>> to make a neural network accomplish a picture- or voice-recognition task
>> (I've made a small one myself from scratch in a few days) than to
>> handcraft code, and the results are better.
>> But basically, a neural network is just another kind of programming.
>> Instead of coding a multitude of operations to achieve a complex task, a
>> neural network can do it itself by learning from examples.
>> And the question will be: how do you teach a neural network what
>> "conceptual integration" is?
>>
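The "programming by example" idea above can be shown in miniature. A minimal sketch (my illustration, not from any system named in this thread): a single perceptron that learns the logical AND function purely from labeled examples, instead of anyone coding the rule by hand.

```python
# A single perceptron learns AND from four labeled examples.
# No AND rule is ever written down; the weights absorb it from data.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, start untrained
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

# Train: nudge weights toward each example's label.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1]
```

This is the contrast the paragraph above draws: instead of a multitude of hand-coded operations, the program's behavior comes from examples. Real deep networks stack many such units with nonlinearities, which is what makes them rough "universal approximators."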
>> In the Paris tech conference video (on YouTube, but it's in French ...),
>> Jerome Pesenti said something else interesting. He cited an IBM researcher
>> from the 70s, Fred Jelinek, who said "*Every time I fire a linguist the
>> performance of the speech recognizer goes up*". Jelinek's speech
>> recognizer team was composed partly of linguists and partly of engineers.
>> By replacing a linguist, who treats language as a human does, with an
>> engineer, who does mathematics and statistics on words, the result
>> improved. It seems to be the philosophy at IBM to work differently than a
>> human does, and it seems to give better results. Instead of playing
>> Jeopardy the human way, Watson applies statistics to its database
>> (which was Wikipedia).
>>
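A hedged sketch of the Jelinek-style idea above (my toy example, not IBM's system): do statistics on words instead of encoding linguistic rules. A bigram model counts adjacent word pairs in a corpus and uses those counts to judge how plausible a new word sequence is; the corpus here is invented.

```python
# Bigram language model: pure word statistics, no grammar rules.
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count single words and adjacent pairs.
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(w1, w2):
    # P(w2 | w1) = count(w1, w2) / count(w1)
    return bigrams[(w1, w2)] / unigrams[w1]

def sentence_score(words):
    # Product of conditional probabilities along the sentence.
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= bigram_prob(w1, w2)
    return p

# Grammatical order scores higher than scrambled order, from counts alone.
print(sentence_score("the cat sat".split()))  # 0.25
print(sentence_score("cat the sat".split()))  # 0.0
```

No linguist told the model that "the cat sat" is better English than "cat the sat"; the preference falls out of counting, which is the point of the anecdote. (Real recognizers add smoothing so unseen pairs don't get probability zero.)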
>> What I want to say is that maybe "conceptual integration" is a track to
>> explore for building AGI. Or maybe the solution will come from
>> elsewhere.
>>
>>
>> LAU
>>
>>
>>
>> On 12/01/2016 at 10:46, Jim Bromer wrote:
>>
>> Deep Learning is Deep Machine Learning and Machine Learning is in no
>> way limited to Neural Networks. So there is no way that Deep Learning
>> is going to be forever defined to refer to Machine Learning that uses
>> Neural Networks (in certain ways). From that point of view I can say
>> that Watson-Jeopardy probably did use a kind of deep learning.
>>
>>
>> -------------------------------------------
>> AGI
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/27172223-36de8e6c
>> Modify Your Subscription: https://www.listbox.com/member/?&;
>> Powered by Listbox: http://www.listbox.com
>>
>>
>
>



