On Wed, Jan 13, 2016 at 3:27 AM, Daniel Lewis <[email protected]>
wrote:

> Hi Jim,
> You say "what I call x" twice, where x is in:
> ["Semi-Strong AI", "Conceptual Integration"]
>
> I think that you may derive some answers for your speculations if you were
> to (more) formally define your concept of these terms that you have in your
> questions. Let us know your definitions, and then I think you'll get some
> really interesting discussion going on.
>

"Concept Integration" is a term that has been used in psychology. Synthesis
(of different ideas) is a term which can be used as a starting point to
understand what I mean by concept integration but it does not get into the
structural details of what would be needed in an AI program. I just read on
Wikipedia that there has been an attempt to make a psychometric version
of the term which, I presume, was based on creating tests whose
results could be used in comparisons. I have never seen the tests.

I believe that explicitly dealing with the problem of Conceptual
Integration is a necessary step in contemporary AI research. I think that
some people who understand what it is that I am trying to get at feel that
Conceptual Integration is more of an expected result of good AI methods
than a basis for them. It is something that AI researchers have always been
looking for even though they may not have used that term. Earlier AI models
did seem to strive for more explicit depictions of the issues, but the
effort to define a solid ontology (of what is necessary for stronger AI)
never developed fully - probably because computers were not powerful enough
in the old days to get much traction. I do not feel that pure Neural
Networks, for example, can be used effectively to promote Conceptual
Integration except as an explicit, narrowly defined training goal. So, to
give you an example, searching for "cat in a box" in Google Images
produces spectacular results, while searching for "cat getting out of a
box" produces a lot of misses. (The point is that the search engine was
unable to do even weak concept integration between the concepts of
"x in y" and "x getting out of y".)

I used a traditional formal definition in traditional programming
variables to show that the phrases "in" and "getting out of" are strongly
related. However, this is not the only way conceptual integration might be
expressed formally, and I am not suggesting that Concept Integration should
be predefined using specialized details or narrow classes. I think that
the majority of Concepts, relationships between Concepts, and methods of
Concept Integration need to be learned. But looking at the example of
'x in y' and 'x getting out of y', the phrase "getting out of" seems to
denote an active state of an animal. Of course it can be extended to
animated creatures like robots and cartoons, or to objects that can be
characterized as having some kind of will or experience of moving. So while
'x in y' followed by 'x getting out of y' may seem like a natural
transition, it is not characteristic of all objects x that can be 'in y'.
This means that there are a lot of branching points and
cross-generalization points in concepts, and those variations typically
have some impact on loosely determining what other kinds of concepts can be
applied - or, perhaps more typically, the characteristics of the effects
that will occur if some other concepts are applied to the situation. So,
for another related example, 'x sleeping in y' may be characteristic of an
animal like a 'cat', but it would not be characteristic of a 'box' that is
'in a container', for example.
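To make the 'x in y' example concrete, here is a toy sketch in Python of
the kind of traditional formal definition I have in mind. It is only an
illustration: the ANIMATE set and the two predicates are hypothetical
placeholders I am making up for this email, not a proposed ontology.

```python
# Toy sketch: 'x getting out of y' presupposes 'x in y' plus an x that
# has some kind of will or experience of moving. All names here are
# illustrative placeholders, not a real ontology.

ANIMATE = {"cat", "dog", "robot", "cartoon mouse"}

def can_be_in(x, y):
    # 'x in y' applies to almost any object that fits inside a container.
    return True

def can_get_out_of(x, y):
    # 'x getting out of y' is 'x in y' restricted to animate-like x.
    return can_be_in(x, y) and x in ANIMATE

for x in ("cat", "box"):
    print(f"{x} in a box: {can_be_in(x, 'box')}, "
          f"{x} getting out of a box: {can_get_out_of(x, 'box')}")
```

The point of the sketch is only that the two phrases share structure ('x
getting out of y' inherits the condition of 'x in y') while diverging at a
branching point (animacy), which is exactly the kind of relation I mean.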

So, to try to finish this: concept integration may be strongly sequential
or not, transitional, spatial, dependent on mechanical relations or on
relations of thought, or dependent on stories; instances of it are usually
conditional, and so on, in all sorts of combinations. The definitions of an
ontology of properties and methods of Concept Integration cannot be
completely pre-defined because they are themselves concepts. So if someone
thinks I am talking about a totally predefined ontology or a system of
simple variable-word sentences, they have not understood what I just said.
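Since the point that methods of integration are themselves concepts is easy
to misread, here is a minimal Python sketch of what I mean. The names and
properties are hypothetical placeholders: the idea is just that kinds of
integration are stored as ordinary concept data, so new kinds can be added
(learned) at run time rather than being fixed in advance.

```python
# Toy sketch: kinds of Concept Integration are themselves concepts,
# stored as data, not a closed enumeration. Names are placeholders.

concepts = {}

def define_concept(name, **properties):
    # A concept is just named data; nothing about it is hard-wired.
    concepts[name] = properties
    return name

# Ordinary concepts.
define_concept("cat", animate=True)
define_concept("box", animate=False)

# Kinds of integration, defined with the same machinery.
define_concept("spatial-integration", combines="locations")
define_concept("transitional-integration", combines="states",
               requires="a participant with some kind of will")

# So the system can acquire a new kind of integration later on:
define_concept("story-integration", combines="events over time")

print(sorted(n for n in concepts if n.endswith("integration")))
```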

Jim Bromer


On Wed, Jan 13, 2016 at 3:27 AM, Daniel Lewis <[email protected]>
wrote:

> Hi Jim,
>
> Thank you for articulating your interesting thoughts.
>
> You say "what I call x" twice, where x is in:
> ["Semi-Strong AI", "Conceptual Integration"]
>
> I think that you may derive some answers for your speculations if you were
> to (more) formally define your concept of these terms that you have in your
> questions. Let us know your definitions, and then I think you'll get some
> really interesting discussion going on.
>
> Many thanks,
> Daniel Lewis
>
>
> On Tue, Jan 12, 2016 at 1:05 PM, Jim Bromer <[email protected]> wrote:
>
>> Even though Watson-Jeopardy did not use Neural Networks or something
>> that was intuitively similar to them, I believe it was an example of
>> deep learning. But the question that many of us are more interested in
>> is whether it was an example of Narrow AI. My first response is that it
>> is not, because it can be applied to such a wide range of problems
>> (even out of the box - or out of the virtual box). So then, why isn't it AGI? Why
>> can't it think outside the box? Why does it not demonstrate the traits
>> of what I call semi-strong AI? This question bothered me but I think I
>> finally have figured it out.
>>
>> Part of the answer is that it (probably) is not very good at what I
>> call Conceptual Integration. But that does not really answer the
>> question adequately.
>>
>> I think they were able to eliminate the Frame Problem because the
>> Jeopardy system was explicitly designed for Q&A. The relevancy problem
>> (a form of the frame problem) occurs because most questions can lead
>> to a combinatorial explosion of possibilities. But by focusing on
>> specific kinds of questions which have distinctive characteristics
>> they could eliminate many kinds of open ended questions.
>>
>> For example, is it likely that I will create an actual AI program
>> (that does something novel) or is it unlikely?  Right now I can't
>> answer that question. Not only is it an open-ended question, but it is
>> also a question which does not have a well-defined answer path.
>> However, I could make long arguments supporting either possibility. I
>> think I noted this a few years ago but a Jeopardy question has to have
>> a historical, encyclopedic or journalistic entry to support it. When
>> you look at Watson's second choices to its questions many of them
>> seemed to be surprisingly irrelevant.
>>
>> But the Q&A frame really does not sufficiently narrow the question of
>> why it worked. Extensive knowledge about NLP, both from earlier
>> sources and derived by the analysis of text, is also necessary.
>>
>> So I think that Watson is not Narrow AI but its success depended on
>> its application to narrow kinds of problems.
>>
>> This analysis may be superficial but it gives me some insight about
>> what I want to work on. I will probably end up developing a semi-AI
>> program that can endlessly ruminate on my thoughts about some subject.
>> Jim Bromer
>>
>>
>> --
>>
> *Daniel J Lewis*
> * Senior Research Assistant (August 2015 - onwards) : University of South
> Wales
> * PhD Researcher (Sept 2012 - January 2015) : University of Bristol
> * Founder & Chair : Computational Intelligence Unconference Association
> - Email: [email protected]
> - Tel (Mobile): +44 (0) 7834355516
>
>



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now