Aaron,

On Mon, Apr 1, 2013 at 1:46 PM, Aaron Hosford <[email protected]> wrote:

> Question answering, Jeopardy, and psychiatric applications each have their
>> own peculiar requirements.
>>
>
> So far, no one has even suggested a representation that serves this
>> disparate assortment of applications. Hence, either you must pick your
>> application (as I did in my patent application), or START by proposing a
>> representation that serves these disparate applications (which would be
>> REALLY valuable if you can figure this out).
>
>
>> In any event, I don't see any possibility of AGI pipe dreams ever
>> becoming reality, without some sort of representation that adequately
>> serves its claimed goals. Everyone in AGI seems to want to start at the
>> front end (parsing) without knowing where they are going. I STRONGLY
>> recommend identifying the goal before planning a strategy to achieve it.
>
>
> I have been preaching "representation first" for some time now, but no one
> seems to be listening.
>

This may be a first for the AGI list - that two of us actually agree on
something!!!

In an earlier version of their English-to-Russian translator, the Russian
Academy of Sciences people found a curious way around representation. I
don't know whether it is adaptable to AGI, but I thought it worth
mentioning.

They kept a file of over 200 sentence structures. Each new English sentence
that came in was compared with them to find the one that fit the best.
Then, a parallel Russian sentence structure was selected, and the words
were translated one-by-one and placed into their designated places in the
Russian sentence structure.
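That scheme can be sketched in a few lines of Python. Everything below (the
two structures, the scoring rule, the tiny glossary) is invented for
illustration; the real system reportedly used a file of 200+ hand-built
sentence structures.

```python
# Sketch of the template-matching translation scheme described above.
# Each entry pairs an English sentence pattern with a parallel Russian
# one; "{N}" marks the slots that receive word-by-word translations.
TEMPLATES = [
    ("the {0} {1} the {2}", "{0} {1} {2}"),  # e.g. "the cat sees the dog"
    ("the {0} is {1}", "{0} {1}"),           # e.g. "the cat is big"
]

# Toy word-for-word glossary (transliterated), invented for illustration.
GLOSSARY = {"cat": "kot", "dog": "sobaka", "sees": "vidit", "big": "bolshoy"}

def match_score(template, words):
    """Crude fit: count literal tokens that line up position-for-position."""
    toks = template.split()
    if len(toks) != len(words):
        return -1  # the real system scored partial fits; this sketch doesn't
    return sum(t == w for t, w in zip(toks, words) if not t.startswith("{"))

def translate(sentence):
    words = sentence.lower().rstrip(".").split()
    # Find the English structure that fits best.
    eng, rus = max(TEMPLATES, key=lambda t: match_score(t[0], words))
    # Pull the words sitting in the slots, translate them one-by-one, and
    # drop them into their designated places in the Russian structure.
    slots = [w for t, w in zip(eng.split(), words) if t.startswith("{")]
    return rus.format(*(GLOSSARY.get(w, w) for w in slots))

print(translate("The cat sees the dog."))  # -> kot vidit sobaka
```

Note that the word-for-word slotting ignores case agreement (Russian wants
the accusative "sobaku" there), which illustrates exactly the kind of problem
that would push such a system toward a deeper approach.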

This had problems, so they moved to an ontological approach, but it stood
as a possible counterexample to the claim that representations are
unavoidable.
Nonetheless, I think we will be better off with a robust representation.

I see a versatile cognitive representational scheme as fundamental
> to the successful construction of AGI. (How can a system "think" a thought
> if it can't represent its meaning internally?) I have a flexible internal
> representation for my system which has been refined to handle vagueness,
> incompleteness, ambiguity, multiple meanings, puns, contradictory
> statements, misstatements, approximations, connotations, different
> perspectives, etc., and a parser built around that representation. This
> representation is designed from the start with the intention of
> facilitating answering questions and otherwise interacting conversationally.
>

Answering questions is DIFFICULT: more difficult, and generally less
valuable, than finding solutions to problems.

The parser converts raw language into this internal representation,
> preserving the vagueness, etc., inherent therein, but encoded in a directly
> accessible way. The system is not (yet) intelligent, but I have most of the
> foundation laid to begin work on that. I think that even without true
> intelligence, the availability of an effective representation will make
> human language accessible to experimentation with algorithms that
> potentially have direct utility towards real world applications. So I fully
> agree with your emphasis on representation, but I am the counter-example to
> your statement that no one is doing this already.
>

GREAT. Have you written about your representation, so we can see what this
is all about? This sounds REALLY interesting.

Steve
=================

> On Sun, Mar 31, 2013 at 3:55 PM, Steve Richfield <
> [email protected]> wrote:
>
>> Jim,
>>
>> On Sun, Mar 31, 2013 at 7:40 AM, Jim Bromer <[email protected]> wrote:
>>
>>> Steve,
>>> I just read the first message in this thread.  Yes, successive layers of
>>> speech may be necessary to determine what is being referenced.  And of
>>> course having other information from other IO modalities about the referent
>>> would help. But first we have to figure out a way to do it with a computer.
>>>
>>
>> ... whatever "it" is, which is VERY different for problem solving
>> applications than for automatic language translation applications. In the
>> first, it appears you just want to know which (if any) of many specific
>> statements (or implications) of fact were made, and in the second, it
>> appears you need to parse, translate the parse structure to the new
>> language, and then drop in the translations of the words, after
>> disambiguating sufficiently to identify which translation you want.
>>
>> Question answering, Jeopardy, and psychiatric applications each have
>> their own peculiar requirements.
>>
>> So far, no one has even suggested a representation that serves this
>> disparate assortment of applications. Hence, either you must pick your
>> application (as I did in my patent application), or START by proposing a
>> representation that serves these disparate applications (which would be
>> REALLY valuable if you can figure this out).
>>
>> In any event, I don't see any possibility of AGI pipe dreams ever
>> becoming reality, without some sort of representation that adequately
>> serves its claimed goals. Everyone in AGI seems to want to start at the
>> front end (parsing) without knowing where they are going. I STRONGLY
>> recommend identifying the goal before planning a strategy to achieve it.
>>
>>
>>> I think the idea I have in mind can be examined with examples.  When you
>>> mentioned "payload" I had a pretty good idea about how you wanted to use
>>> the word.  However, when you started off by saying that computerized text
>>> understanding is very similar to patent classification I realized that I
>>> did not have a good idea about the methodology of understanding that you
>>> were associating with the word.  So I inhibited myself from making any
>>> assumptions about the words that you were using until I had a chance to
>>> reread what you were saying.
>>>
>>> As I wrote, my new theory of understanding is based on acquiring insight
>>> into the specialization of the concepts that are being considered.  So,
>>> while your use of the term payload to refer to something that I might call
>>> the semantic content or the intended meaning was not that different from
>>> what I thought you meant, your underlying theory about discovering the
>>> meaning was.  Yes, if we don't understand something we need to speak about
>>> it with different kinds of remarks (or study it in other ways).  However, I
>>> disagree that these are successive layers where all the details of
>>> differentiation of speech can be found on some bottom-level subclass. I
>>> just don't think it is that simple.  My latest model of comprehension is
>>> that the complexity of knowledge is only found in a growing awareness of
>>> conceptual specialization (where concepts are either based on a group
>>> of shared concepts or from a process of observation and conjecture tied to
>>> some personal concepts.  In other words we can build higher models by
>>> communicating or by using our imaginations.) In my opinion, the basis of
>>> differentiation or specialization will not be found in some bottom-level
>>> subclass but in examining some construct of thought using different ways to
>>> think about it.
>>>
>>
>> My point (in regard to this) is that there is a large family of
>> bottom-level constructs that affect this, some combination of which will
>> guide differentiation, specialization, disambiguation, etc. By selectively
>> triggering the consideration of higher-level analyses based on the presence
>> of relevant bottom-level constructs, you have SOME chance of being able to
>> understand a full blown language definition in real time on modern-day
>> computers. Without this or some adequate substitute for it, you will fall
>> into the same performance trap that has consumed all prior attempts.
>>
>>>
>>> I now have a simplified model of what you were talking about in my
>>> mind.  I can generalize this.  I see you believe that there is a hierarchy
>>> of detail which would reveal the specialized meanings of words.  I will
>>> remember this about you and look for it in other ideas that you talk about
>>> and I will look for in other people's ideas. (I disagree with the single
>>> hierarchy, a general hierarchy of differentiation and the bottom-level full
>>> of details.  To my thinking these are metaphors which are standing in as
>>> substitutes for effective methods.)
>>>
>>
>> You wouldn't think a modern gigahertz computer could be trapped by lowly
>> text, but when you understand this, you can see how methods must be crafted
>> to fit the machines at hand.
>>
>>>
>>> I did not have a substantial disagreement with most of the other things
>>> you said in this, the first message of this thread.  The fact that you
>>> mentioned that an inability to precisely understand what someone said might
>>> lead to a mistaken attribution of ignorance was a confirmation of my
>>> opinion that you can be open-minded and that you are able to use this
>>> ability to discover a context of misunderstanding in ways that some people
>>> are not.
>>>
>>
>> More to the point, in DrEliza.com, part of its analysis is to identify
>> where the user is ignorant in important ways that can be remedied with a
>> well-targeted explanation.
>>
>>
>>> By being open minded one can see possibilities that closed-minded people
>>> may miss.
>>>
>>
>> Often all it takes is deeper domain-specific knowledge, part of which is
>> understanding popular misconceptions.
>>
>>
>>> This is an example of a conceptual specialization and it can be tied
>>> into the meaning of language even when the language is not about the
>>> subject of being open- or closed-minded.
>>>
>>
>> YES. People presume that DrEliza.com is there to drill down into their
>> illnesses, and it is. The drill-down process is often impaired by common
>> misconceptions that are MUCH easier to address squarely than to leave in
>> place and try to work around.
>>
>> Steve
>> =============================
>>
>> On Thu, Mar 28, 2013 at 12:27 AM, Steve Richfield <
>> [email protected]> wrote
>>
>>> *Jim, et al,*
>>>
>>> *I'm starting a new thread with this...*
>>>
>>> It is my theory that computerized speech and text understanding has
>>> eluded developers for the past ~40 years, because of a lack of a
>>> fundamental understanding of the task, which turns out to be very similar
>>> to patent classification.
>>>
>>> When classifying a patent, successive layers of sub-classification are
>>> established, until only unique details distinguish one patent from another
>>> in the bottom-level subclass. When reviewing the sub-classifications that a
>>> particular patent is filed within, combined with the patent’s title, what
>>> the patent is all about usually becomes apparent to anyone skilled in the
>>> art.
>>>
>>> However, when a patent is filed into a different patent filing system,
>>> e.g. filed in a different country where the sub-classifications may be
>>> quite different, it may be possible that the claims overlap the claims of
>>> other patents, and/or unclaimed disclosure would be patentable in a
>>> different country.
>>>
>>> Similarly, when you speak or write, in your own mind, most of your words
>>> are there to place a particular “payload” of information into its proper
>>> context, much as patent disclosures place claims into the state of an art.
>>> However, your listeners or readers may have a very different context in
>>> which to file your words. They must pick and choose from your words in an
>>> effort to place some of your words into their own context. What they end up
>>> placing may not even be the “payload” you intended, but may be words you
>>> only meant for placement. Where no placement seems possible, they might
>>> simply ignore your words and file *you* as being ignorant or deranged.
>>>
>>> Many teachers have recorded a classroom presentation and transcribed the
>>> recording, only to be quite surprised at what they actually said, which can
>>> sometimes be the opposite of what they meant to say. Somehow the class
>>> understood what they meant to say, even though their statement was quite
>>> flawed. When you look at these situations, the placement words were
>>> adequate, though imperfect, and the payload was okay. Indeed, where another
>>> person’s world model is nearly identical to yours, very few placement words
>>> are needed, and so these words are often omitted in casual speech.
>>>
>>> These omitted words fracture the structure of around half of all
>>> sentences “in the wild”, rendering computerized parsing impossible. Major
>>> projects, like the Russian Academy of Sciences' Russian Translator project,
>>> have wrestled with this challenge for more than a decade, with each new
>>> approach producing a better result. The results are still far short of
>>> human understanding due to the lack of a human-level domain context to
>>> guide the identification and replacement of omitted words.
>>>
>>> As people speak or write to a computer, the computer must necessarily
>>> have a *very* different point of view to even be useful. The computer
>>> must be able to address issues that you can not successfully address
>>> yourself, so its knowledge must necessarily exceed your own in its subject
>>> domain. This leads to some curious conclusions:
>>>
>>> 1.   Some of your placement words will probably be interpreted as
>>> “statements of ignorance” by the computer and so be processed as valuable
>>> payload to teach you.
>>>
>>> 2.  Some of your placement words will probably refer to things outside
>>> of the computer’s domain, and so must be ignored, other than being
>>> recognized as non-understandable restrictions on the payload, that may
>>> itself be impossible to isolate.
>>>
>>> 3.    Some of your intended “payload” words must serve as placement,
>>> especially for statements of ignorance.
>>>
>>> My invention seeks to intercept words written to other people who
>>> presumably have substantial common domain knowledge. Further, the computer
>>> seeks to compose human-appearing responses, despite its necessarily
>>> different point of view and lack of original domain knowledge. While this
>>> is simply not possible for the vast majority of writings, the computer can
>>> simply ignore everything that it is unable to usefully respond to.
>>>
>>> If you speak a foreign language, especially if you don’t speak it well,
>>> you will immediately recognize this situation as being all too common when
>>> listening to others with greater language skills than your own speaking
>>> among themselves. The best you can do is to quietly listen until some point
>>> in the conversation when you understand enough of what they are saying, and
>>> you have something useful to add to the conversation.
>>>
>>> Note the similarity to the advertising within the present Google Mail,
>>> where they select advertisements based upon the content of email that is
>>> being displayed. Had Google performed a deeper analysis they could probably
>>> eliminate ~99% of the ads as not relating to users’ needs and greatly
>>> improve the users’ experience, and customize the remaining 1% of the ads to
>>> precisely target the users.
>>>
>>> That is very much the goal with my invention, where the computer knows
>>> about certain products and solutions to common problems, etc., and scans
>>> the vastness of the Internet to find people whose words have stated or
>>> implied a need for things in the computer’s knowledge base, and have done
>>> so in terms that the computer can “understand”.
>>> Steve
>>>
>>>
>>
>>
>>
>> --
>> Full employment can be had with the stroke of a pen. Simply institute a
>> six hour workday. That will easily create enough new jobs to bring back
>> full employment.
>>
>>
>
>



-- 
Full employment can be had with the stroke of a pen. Simply institute a six
hour workday. That will easily create enough new jobs to bring back full
employment.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
