Jim,

On Mon, Mar 25, 2013 at 12:36 PM, Jim Bromer <[email protected]> wrote:

> Steve,
> So you are defining a numerical system (like a vector system) using the
> most significant semantic units?
>

The ordinals are just pointers to lists of functions (via the lexicon) that
"do the job" in the computer that the words did in the minds of those
reading them. In addition to pointing to lists of functions, ordinals have
some other useful properties, like showing the relative frequency of use.
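Concretely, the idea looks something like this (hypothetical Python, just to illustrate; the names and structure are my own sketch, not any particular implementation): each word is interned to a small integer, and that integer indexes a table holding the word's handler functions and a running usage count.

```python
# Sketch: ordinals as indexes into a lexicon of handler functions,
# with relative frequency of use tracked as a side effect.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Entry:
    word: str
    handlers: List[Callable] = field(default_factory=list)  # functions that "do the job"
    frequency: int = 0                                      # relative frequency of use

lexicon = {}   # word -> ordinal
entries = []   # ordinal -> Entry

def ordinal_of(word: str) -> int:
    """Return the ordinal for a word, interning it on first sight."""
    if word not in lexicon:
        lexicon[word] = len(entries)
        entries.append(Entry(word))
    ordinal = lexicon[word]
    entries[ordinal].frequency += 1
    return ordinal

ordinals = [ordinal_of(w) for w in "the cat sat on the mat".split()]
# Both occurrences of "the" share one ordinal, and its count reflects reuse.
assert ordinals[0] == ordinals[4]
assert entries[ordinals[0]].frequency == 2
```

From then on, all comparison and dispatch works on small integers rather than character strings.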

> I could see how that (or some other
> number of defined semantic units) might be very effective doing some kind
> of fundamental parsing. I don't think any systems like this would be very
> useful for general AGI because I think the system would have to be capable
> of learning millions of sub-cases,
>

Of course. Being able to skip the evaluation of the vast majority of these
sub-cases most of the time is exactly what this concept seeks to do.


> like how particular people use words in various circumstances.
>

Trying to figure out how specific people customize the language beyond the
typical subject-domain disambiguation challenges is a BIG problem that has
little payback. After all, if a person's writings aren't generally readable
unless you have read a LOT of what they have written, are their writings
really worth understanding? Further, without real-world attachment beyond
their own words, is understanding private dialects even possible?

Some people define meme-words, like "meme" (which my former employer William
Calvin coined, and which now even passes my spell checker) and "sheeple"
(which isn't yet common enough to pass my spell checker). Beyond that,
however, the dual problems of limited payback and questionable feasibility
put up a pretty good roadblock.

>
> Are you talking about something like that?
>

I was just referring to the idea that it is possible to describe concept
systems in hierarchical terms, and then turn around and compile that
description into triggered analysis that selectively does things in a
completely different order and runs MUCH faster. This applies to syntax
equations and MUCH more, like the entire logic net inside an AGI.

It looked to me like the speed ratio between a computer running around a
logic net keeping everything up to date, and one selectively evaluating
things in a minimalistic way by analyzing only those things whose
least-frequently-used elements are asserted, would be many orders of
magnitude, like the difference between your PC and the largest
supercomputers that now exist. In short, this might be a key to making
something really interesting run on a PC. While this might sidestep the
speed issues, other more difficult issues remain to be addressed before we
will have any AGIs running on people's laptops.
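As a toy illustration of that triggered, minimalistic evaluation (hypothetical Python; the rules and frequency numbers are made up): at compile time each rule is filed under its least-frequently-used element, so asserting common elements triggers almost no rule checks at all.

```python
# Sketch: compile rules into a trigger index keyed by each rule's
# rarest element, then evaluate lazily on assertion.

from collections import defaultdict

frequency = {"the": 1000, "patient": 40, "insulin": 3}  # usage counts

rules = [
    ({"the", "patient"}, "possible-case-description"),
    ({"patient", "insulin"}, "possible-diabetes-topic"),
]

# Compile: index each rule under its least frequent element only.
triggers = defaultdict(list)
for elements, conclusion in rules:
    rarest = min(elements, key=lambda e: frequency.get(e, 0))
    triggers[rarest].append((elements, conclusion))

def assert_elements(asserted):
    """Fire only the rules filed under an asserted element."""
    fired = []
    for element in asserted:
        for elements, conclusion in triggers.get(element, []):
            if elements <= asserted:  # full check only for triggered rules
                fired.append(conclusion)
    return fired

# Asserting only a common word evaluates no rules at all:
assert assert_elements({"the"}) == []
# Asserting the rare word checks just the rules filed under it:
assert assert_elements({"patient", "insulin"}) == ["possible-diabetes-topic"]
```

The point is that the cost of an assertion scales with how rare the element is, not with the size of the logic net.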

Steve
=====================

> On Mon, Mar 25, 2013 at 12:57 PM, Jim Bromer <[email protected]> wrote:
>
>> On Fri, Mar 22, 2013 at 6:16 PM, Steve Richfield <
>> [email protected]> wrote:
>>
>>> PM,
>>> Reading these, I can see that:
>>> 1.  Working with ordinals would speed this process up by more than an
>>> order of magnitude in performing *exactly* the same analysis, over
>>> working with character strings...
>>>
>>
>> Could you explain this kind of remark to me?  I haven't been able to
>> figure out a way to make any kind of numeric method work well over the
>> general kinds of relations that you'd expect to encounter in AGI.  If all
>> systems had a direct correspondence to a dimensional system then you could
>> get some traction out of these things.  Or, if general reasoning did not
>> need to rely both on intersections and simple arithmetic (or simple logic)
>> then numeric methods would be extremely efficient.
>>
>> Jim Bromer
>>
>>
>>
>> On Fri, Mar 22, 2013 at 6:16 PM, Steve Richfield <
>> [email protected]> wrote:
>>
>>> PM,
>>>
>>> Reading these, I can see that:
>>>
>>> 1.  Working with ordinals would speed this process up by more than an
>>> order of magnitude in performing *exactly* the same analysis, over
>>> working with character strings, e.g. in LISP.
>>>
>>> 2.  The first book describes a system that finds itself "in the weeds"
>>> with the first syntactical break in a sentence, where it "jumps to
>>> confusions" by presuming the next word to be the beginning of a new
>>> sentence (when more likely a presumed noun was missing), an issue that the
>>> second book apparently seeks to address.
>>>
>>> 3.   In dealing with the ontological and other subtle issues, the method
>>> described in the 2nd book will have to make the SAME tests that any other
>>> system would have to make to see if particular semantic structures are
>>> present. What good is it to avoid semantic structures, only to have to
>>> later analyze them?
>>>
>>> Note that WolframAlpha.com and DrEliza.com don't even bother "parsing"
>>> in the same sense as Hausser uses the term, and instead only work with
>>> identifiable semantic units. These applications have little use for full
>>> parse information. I have looked at what the big costs are in DrEliza.com from
>>> the lack of full parsing. The primary problem is that it is now blind to
>>> whether someone is describing their own problems, or someone else's
>>> problems. Also, when negation meets compound and complex sentence
>>> structure, the logic in DrEliza.com would more likely misunderstand it than
>>> get it right, and so I have disabled acting on such sentences.
>>>
>>> Note that improper multiple negation is SO common in everyday English
>>> that correct parsing is as likely as not to arrive at the wrong meaning.
>>>
>>> I think the "break" in this discussion is that almost everyone's
>>> ultimate goal is to identify the semantics, whereas this method identifies
>>> the syntax. A presumption has been made that parsing syntax is a necessary
>>> step on the way to recognizing semantics, which is clearly made here in
>>> rejecting the analysis of semantic structures. Unfortunately, semantics is
>>> EXACTLY what most applications need from a parser, so this "hole" must then
>>> be filled in later in the analysis, and filling this hole in will slow this
>>> approach down, exactly as it slows other approaches down.
>>>
>>> It is ALWAYS faster to skip the hard stuff, which is really great if you
>>> don't need it. I can see Hausser's approach working as part of a language
>>> translator, ESPECIALLY for scientific material like the Russian Academy of
>>> Sciences is now working on, where the translation wants to AVOID semantic
>>> analysis as much as possible. A computer can potentially only "understand"
>>> things that are already known, whereas the entire object of scientific
>>> papers is to explore the *UN*known.
>>>
>>> On a side note - note the copious spelling errors. There couldn't have
>>> been much review of this material. If the author can't even get his friends
>>> to read his writings...
>>>
>>> Did I miss anything?
>>>
>>> Steve
>>> =================
>>



-- 
Full employment can be had with the stroke of a pen. Simply institute a six
hour workday. That will easily create enough new jobs to bring back full
employment.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
