I think I follow your point here, but be careful to distinguish
Conceptual Integration from neural nets. The former is more like an
upper-level meta-approach, while the latter is the next layer down in
some development stack, an implementation layer. You could build
Conceptual Integration on top of neural nets or some symbolic-neural
hybrid.

Mike A

On 1/11/16, Jim Bromer <[email protected]> wrote:
> Thanks for that information. I will look for it when I get a chance.
> However, I thought that the "statistical analysis of vast, unstructured
> piles of text" that was mentioned by the writer in the MIT Press
> was probably like deep learning. So I am interested in reading the comments
> that Jerome Pesenti made.
>
> However, it is not relevant to the point I was trying to make. To repeat
> one more time: There have been many achievements in AI, some of which have
> surprised me. But if this is 'It', then why have search engines gone through
> a period of decline during the past two years? There could be implementation
> problems or even corporate ethical dilemmas, but I doubt that either of
> those is the problem.
>
> But to say that Watson or Deep Learning or Deep Searches are narrow AI
> because they lack generality is just not true. Their applications may have
> been 'relatively narrow' but the methods have broad generality. So the
> criticism of what these programs seem to lack has to be brought up a notch.
>
> My suggestion is that the thing that has been lacking is Conceptual
> Integration. But if this is a reasonable possibility, why haven't people
> been interested in discussing this? My answer now is that they either do
> not understand what I am talking about or they are in denial. Let me give
> an extreme example of someone in denial. Suppose someone believes that
> neural networks work like the mind works. Then they already have the answer
> to how AGI (or stronger AI) should be created. So even though they might
> recognize that different implementation methods have to be developed for
> their neural nets, they could still reject the idea that explicit
> discussion of Conceptual Integration could be beneficial. Why? Because they
> already have solved the fundamental problem and their neural networks do
> not involve an explicit modelling of Conceptual Integration (or of
> Concepts). So they simply deny that Conceptual Integration might be a key
> to solving contemporary AGI problems.
>
> If Conceptual Integration is 'It' then why am I not able to produce
> stronger AI? The reason is that I do not have all the answers to
> implementing Conceptual Integration in an actual AI program. I am still
> struggling just to explain what it is that I am talking about.
>
> Jim Bromer
>
> On Sun, Jan 10, 2016 at 10:56 PM, LAU <[email protected]> wrote:
>
>> There are a few conference talks available from Jerome Pesenti, Vice
>> President of Watson Core Technology, in which he discusses the techniques
>> inside Watson.
>>
>> Jerome Pesenti said at a conference (at Paris Tech, a French engineering
>> school, date unknown, ~09/2015) that:
>> - Watson did not use deep learning in the Jeopardy version.
>> - But the system evolves continuously; they are replacing many things in
>> Watson with deep learning.
>> He said that they are replacing code in the Jeopardy version with deep
>> learning because it is much more efficient in natural language processing
>> and other tasks. With deep learning, there will soon be a version of the
>> Jeopardy system for languages other than English.
>>
>> LAU
>>
>>
>>
>> On 10/01/2016 15:02, Jim Bromer wrote:
>>
>> I am interested in the question of whether Watson used deep learning in
>> the Jeopardy version because I am skeptical that there is a clear-cut
>> distinction between Deep Learning and hybrids of computational methods
>> that train on large corpora of data. A few lines in a computer review do
>> not convince me. Are you saying (for instance) that the statistical
>> analysis that was used in Watson was not "Deep"? How could you know? What
>> are the differences?
>>
>> There are times when editorial criticisms are useful and there are times
>> when they are trivial to the issue being discussed. If I asked people to
>> read something that I posted on my website, or something that read like I
>> might try to get it published, then I probably would appreciate comments
>> about typos and grammatical issues. Some time ago someone pointed out
>> that I was using the word 'discreet' when the word should have been
>> 'discrete'. I appreciated knowing that I was making that mistake because
>> it is important to the subject being discussed and I kept repeating the
>> mistake. However, he also made a put-down suggesting that the fact that I
>> was using the word 'discreet' when I meant 'discrete' showed that I did
>> not know much about computer programming. I disagree with that point of
>> view because the word 'discreet' is, in my opinion, a very important
>> concept in psychology and a major problem in contemporary AI. I think
>> contemporary AI programs lack discretion when confronted by
>> interpretations that might take multiple paths. So while my mistake was a
>> serious one (when talking in a computer group), it was not an indication
>> that I did not have much experience thinking about the subject of this
>> group. A lack of discretion can be taken as a lack of insight, but the
>> psychology of discretion is, in my opinion, something that is very
>> seriously lacking in contemporary AI. Narrow AI can show some discretion
>> as long as the problem is within the narrow range and the response is
>> within the range of appropriate responses.
>>
>> I would be interested in following up on the question of how Jeopardy's
>> Watson, which the reviewer said uses statistical analysis on vast,
>> unstructured piles of text, is essentially different from Deep Learning.
>>
>> The chess-playing programs are narrow, but similar methodologies can be
>> used for any situation where 'positions' can be evaluated, so the
>> underlying methods have much broader general applications.
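
The point above about position-evaluation methods generalizing can be sketched in code. This is purely illustrative (it is not how Deep Blue was actually implemented): a depth-limited minimax search is game-agnostic, and only the injected move generator and evaluation function are game-specific. The toy "game" below is a hypothetical stand-in.

```python
# A minimal sketch of why position-evaluation search generalizes: the same
# minimax routine works for any game that supplies legal moves and a
# position evaluator. Game-specific logic is injected as callbacks.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Generic depth-limited minimax; nothing here is chess-specific."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in options)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       moves, apply_move, evaluate) for m in options)

# Toy "game" (hypothetical): the state is a number, each player may add 1
# or 2, and the evaluation is the number itself.
moves = lambda s: [1, 2] if s < 10 else []
apply_move = lambda s, m: s + m
evaluate = lambda s: s

print(minimax(0, 3, True, moves, apply_move, evaluate))  # -> 5
```

Swapping in chess-specific `moves()` and `evaluate()` callbacks would reuse the identical search routine, which is the sense in which the underlying method is general even when a given program is narrow.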
>>
>> Conceptual integration is a thing that is very important to me. However,
>> I do not have it all figured out.
>>
>> But I can look at (simple) computational analyses and see, for example,
>> that not all the parts in an algorithm are alike. Operations can be
>> numbered and they can even, to some extent, be used in numerical
>> processes. However, that does not mean they are the same or can then be
>> used in the same way as the numerical operands of the function. So here
>> you might see that knowledge about an operand and an operation in a
>> computer function can be useful as long as that knowledge is then
>> integrated in a suitable way. For another simple example, people will
>> sometimes try to take the enumeration of a column of the digits in an
>> arithmetic problem and treat it as if it were a digit (of one of the
>> operands). (An n-ary number will consist of digits in columns. For binary
>> the columns are the ones column, the twos column, the fours column and so
>> on.)  Using the ordinal value of a column might work out in some cases
>> but in others it might not, because the ones in the columns will stand
>> for 1 or 2 or 4 or 8 and so on. So you have to keep track of the fact
>> that an explicit enumeration of the columns may have more than one
>> meaning in an algorithm. Suppose that this was the first time someone
>> ever considered this problem. In order to make sense of this he would
>> have to be able to integrate a number of very simple concepts. Even if
>> someone is capable of understanding the simple concepts when they are
>> taken out of context (what do I mean by a column of a number, what do I
>> mean by a digit, what do I mean by an n-ary number, what do I mean by
>> saying that the enumeration of the columns of a number can take on
>> different meanings), they still might be totally baffled by what I am
>> talking about. Not only do they have to integrate these different simple
>> concepts, they also have to do so in a very discreet way. They would need
>> to try to integrate the concepts in different ways, but show great
>> discretion in limiting the number of ways that they tried. Just mashing
>> all the concepts together and trying to make them all act like they were
>> the same kind of thing (a countable digit in this example) isn't going to
>> cut it.
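
The column-enumeration confusion described above can be made concrete in a short sketch (the function name `column_breakdown` is my own, for illustration): enumerating the binary columns of a number yields an ordinal index per column, while the value a 1 in that column contributes is 2**index; conflating the two concepts gives wrong arithmetic.

```python
# Each binary column has an ordinal index (0, 1, 2, ...), but a '1' in that
# column stands for 2**index (1, 2, 4, 8, ...). The index and the place
# value are different concepts that must be integrated correctly.

def column_breakdown(n):
    """Return (index, digit, place_value) for each binary column of n."""
    bits = bin(n)[2:]                                   # e.g. 13 -> '1101'
    rows = []
    for index, digit in enumerate(reversed(bits)):      # ones column first
        rows.append((index, int(digit), int(digit) * 2 ** index))
    return rows

rows = column_breakdown(13)  # 13 = 0b1101
for index, digit, value in rows:
    print(f"column {index}: digit {digit} contributes {value}")

# Correct integration of the concepts: the number is the sum of each
# digit times its place value.
assert sum(value for _, _, value in rows) == 13
# The naive mistake: treating the column index itself as the weight.
assert sum(digit * index for index, digit, _ in rows) != 13
```

The two assertions show why "just mashing the concepts together" fails: summing digit times index gives 5 for the number 13, while summing digit times place value recovers 13.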
>>
>> Jim Bromer
>>
>> On Sat, Jan 9, 2016 at 11:00 PM, John Smith <[email protected]> wrote:
>>
>>> "The idea that Deep Blue and Watson were not cases of Deep Learning is
>>> irrelevant. (You are effectively criticizing my topic headline rather
>>> than what I was getting at.)"
>>>
>>> Maybe you shouldn't have a title that says one thing while intending
>>> something else?
>>>
>>> "But, Deep Learning is being used in visual recognition and my feeling
>>> is that since Watson did use machine learning I believe that it must
>>> have used something that had some correspondence to Deep Learning."
>>>
>>> Your feeling is wrong: Watson didn't use deep learning when it won
>>> Jeopardy; it was only added recently
>>> http://www.technologyreview.com/news/539226/ibm-pushes-deep-learning-with-a-watson-upgrade/
>>> There are many kinds of machine learning that are different in kind
>>> from deep learning.
>>>
>>> "The argument that they were just narrow AI is also irrelevant."
>>>
>>> No it isn't, because narrow AI, like a machine specifically designed to
>>> play chess, will not be able to do something like play checkers, or
>>> drive a car, or write poetry. It will only be able to play chess.
>>>
>>> "There is no question that Watson and methodologies that are on par with
>>> contemporary Deep Learning have a wide variety of applications." You
>>> know, duct tape has lots of applications too...
>>>
>>> "So they are capable of some generalization."
>>>
>>> Again, a chess-playing machine can't do jack but play chess; so too
>>> with a Jeopardy-playing machine.
>>>
>>> "Human beings, which represent the model of general intelligence, are
>>> not capable of figuring out many kinds of problems including many that
>>> computers can and will solve."
>>>
>>> This is the first true thing you've said.
>>>
>>> "The problem is that these contemporary AI programs are not capable of
>>> integrated general intelligence and they are end up working within
>>> relatively narrow fields."
>>>
>>> Okay, first of all, please use proper grammar. Second of all, what is
>>> this "integrated" general intelligence you speak of? Please define, and
>>> please keep in mind I'm a very simple person who has difficulty with
>>> obscure terminology that is only understood in the mind of the speaker.
>>>
>>> "But to say that they are narrow as opposed to genera is not quite
>>> right."
>>>
>>> So if someone creates an AI for playing chess and only chess, it isn't
>>> narrow because you believe there are other applications for it? This is
>>> just wrong. The only way it would have other applications is if you
>>> spent the time to somehow map your other application onto a chess
>>> board. But that isn't the AI doing the generalizing; rather, it is the
>>> user doing the generalizing.
>>>
>>> Narrow AI != General AI
>>> QED
>>>
>>> On Sat, Jan 9, 2016 at 10:19 PM, Jim Bromer <[email protected]> wrote:
>>>
>>>> The idea that Deep Blue and Watson were not cases of Deep Learning is
>>>> irrelevant. (You are effectively criticizing my topic headline rather
>>>> than what I was getting at.)  But, Deep Learning is being used in
>>>> visual recognition and my feeling is that since Watson did use machine
>>>> learning I believe that it must have used something that had some
>>>> correspondence to Deep Learning.
>>>>
>>>> The argument that they were just narrow AI is also irrelevant. There is
>>>> no question that Watson and methodologies that are on par with
>>>> contemporary Deep Learning have a wide variety of applications. So they
>>>> are capable of some generalization. Human beings, which represent the
>>>> model of general intelligence, are not capable of figuring out many
>>>> kinds of problems including many that computers can and will solve. The
>>>> problem is that these contemporary AI programs are not capable of
>>>> integrated general intelligence and they are end up working within
>>>> relatively narrow fields. But to say that they are narrow as opposed to
>>>> genera is not quite right.
>>>>
>>>> Jim Bromer
>>>>
>>>> On Sat, Jan 9, 2016 at 8:31 PM, John Smith <[email protected]> wrote:
>>>>
>>>>> "winning at chess (IBM Deep Blue [doesn't use deep
>>>>> learning]), recognizing objects in pictures (Many Companies and
>>>>> different algorithms [some just use mechanical turk]) and winning at
>>>>> jeopardy (IBM Watson [didn't use deep learning when it won at
>>>>> jeopardy])."
>>>>>
>>>>> So none of those achievements used deep learning. Google's DeepMind
>>>>> hasn't "solved intelligence" yet, so it would be a mistake to expect
>>>>> the kinds of advanced search capabilities you are thinking of.
>>>>>
>>>>> IBM did the Jeopardy grand challenge specifically because they saw
>>>>> Ken Jennings's winning streak and the amount of attention it was
>>>>> attracting, and they thought that if they created a software system
>>>>> that could do that, they would get a great deal of attention, which
>>>>> I'm sure they thought would subsequently lead to big contracts. So
>>>>> yes, it was in a way a publicity stunt from its inception. And since
>>>>> the algorithms were hand-crafted for a single end (win at Jeopardy),
>>>>> of course it wasn't going to have a large impact on the field of AGI
>>>>> in general! Watson wasn't AGI; it was the waste-of-time/money narrow
>>>>> AI that the short-sighted people in industry find easy to sell.
>>>>>
>>>>> On Sat, Jan 9, 2016 at 3:34 PM, Jim Bromer <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> The hype and implied conquest of AI suggested by winning at chess,
>>>>>> recognizing objects in pictures, and winning at Jeopardy just does
>>>>>> not jibe with the fact that search-engine technology lacks any
>>>>>> noticeable intellect, even though the computing power that Google,
>>>>>> Bing, IBM, and thousands of other corporations possess is extremely
>>>>>> impressive.
>>>>>> Jim Bromer
>>>>>>
>>>>>>
>>>>>> On Sat, Jan 9, 2016 at 3:29 PM, Jim Bromer <[email protected]>
>>>>>> wrote:
>>>>>> > If industry has AI pretty well figured out, then why are search
>>>>>> > engines so incapable of thinking outside the box? The conclusion
>>>>>> > looks inescapable to me. Yes, there will be a day when someone
>>>>>> > makes a significant achievement while the rest of us might miss it
>>>>>> > completely, but the idea that contemporary deep search (or some
>>>>>> > other AI method) has achieved the hype or the implied conquest that
>>>>>> > winning at chess and Jeopardy seems to imply just does not jibe
>>>>>> > with the computing power Google, Bing or IBM have. There is a
>>>>>> > substantial disconnect between low-level, almost-human reasoning
>>>>>> > and deep learning.
>>>>>> > Jim Bromer
>>>>>>
>>>>>>
>>>>>> -------------------------------------------
>>>>>> AGI
>>>>>> Archives: https://www.listbox.com/member/archive/303/=now
>>>>>> RSS Feed:
>>>>>> https://www.listbox.com/member/archive/rss/303/26973278-698fd9ee
>>>>>> Modify Your Subscription: https://www.listbox.com/member/?&;
>>>>>> Powered by Listbox: http://www.listbox.com
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>
>
>


