Roland, PM, etc.,

Popping our intellectual stack back up to the top and revisiting the origin
of this discussion...

1.  The technology of NL "understanding" seemed to be stalled because
processors were a few orders of magnitude too slow at evaluating failing
tests, so I proposed a solution. Others then advanced their own favorite
methods as being better/faster, but these other methods hadn't even
gotten beyond character manipulation, let alone addressed the failing-test
barrier. The present discussion doesn't seem to touch on this.

2.  #1 aside, we all understand the need for some sort of canonical way of
representing and processing NL, and of representing what was discovered in
the NL. If proposals are to have any chance of live use during our working
careers, it would seem that they MUST squarely address the failing test
barrier. The primary perceived problem with the method I proposed is that
it targeted a particular subclass of NL applications (problem solving) when
to be broadly applicable it must also address automated language
translation and AGI. It would seem that we should concentrate on either:
2a. merging other methods into my proposal to handle these other areas, or
2b. merging my methods into other proposals to speed them up.
Merging of some sort seems to be in our future, so we should be looking at
how this might be done.

The problem I perceived in Roland's proposal is its database-driven nature.
I tried this in DrEliza.com, and soon discovered that straight SQL was too
lightweight for the task. However, VB/Jet SQL allowed ANYTHING that could
be written in VB to be written as an SQL function, so I got over this hump,
only to run into the combinatorial explosion of tests to evaluate.
Performance monitoring showed that nearly all of the time was spent in
VB/Jet SQL servicing my fancy Bayesian computation function.

The whole idea of databases is to directly access what you want, yet when
you find yourself computing probabilities, you must compute them for MANY
things just to figure out which ones are interesting.
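A toy sketch of that shape of computation (hypothetical hypothesis names
and likelihoods, naive log-likelihood scoring -- not my actual Bayesian
function): every hypothesis in the table gets scored against every text,
whether or not anything in the text hints at it.

```python
import math

# Hypothetical hypothesis table: word -> P(word | hypothesis).
HYPOTHESES = {
    "sleep_apnea": {"snoring": 0.6, "tired": 0.4},
    "migraine":    {"aura": 0.5, "headache": 0.7},
    "insomnia":    {"sleepless": 0.5, "tired": 0.3},
}
DEFAULT_LIKELIHOOD = 0.01  # smoothing for words a hypothesis says nothing about

def log_score(words, likelihoods):
    # Sum of log-likelihoods over the words actually seen.
    return sum(math.log(likelihoods.get(w, DEFAULT_LIKELIHOOD)) for w in words)

def score_all(text):
    """The database-style approach: score EVERY hypothesis, then sort to
    find the interesting ones.  Cost grows with the size of the hypothesis
    table, not with what the text actually mentions."""
    words = text.lower().split()
    return {name: log_score(words, lk) for name, lk in HYPOTHESES.items()}

scores = score_all("loud snoring and tired all day")
```

The point is the shape of the loop, not the scoring: with thousands of
hypotheses and a nontrivial probability function, nearly all of that work
goes into tests that fail.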

The hope and expectation is that with my method of triggered evaluation,
the high-level probability computations will only be performed for things
with SOME indication of their presence, which requires that at least one
LFU word for at least one semantic component has appeared in the text.
Further, with everything done with rules in RAM, there won't be any need
for database software, which was consuming considerably more time than
everything else combined.
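By contrast, here is a minimal sketch of the triggered-evaluation idea
(hypothetical rule names; the trigger sets stand in for the LFU words of
the semantic components): the rules live in RAM as an inverted index, and
only hypotheses with at least one trigger word present ever reach the
expensive probability computation.

```python
from collections import defaultdict

# Hypothetical rule base: each hypothesis lists the LFU trigger words
# of its semantic components.
RULES = {
    "sleep_apnea": {"apnea", "snoring", "cpap"},
    "migraine":    {"migraine", "aura", "photophobia"},
    "insomnia":    {"insomnia", "sleepless"},
}

# Invert the rules once, in RAM: trigger word -> hypotheses it can wake.
# No database software involved at run time.
TRIGGER_INDEX = defaultdict(set)
for name, triggers in RULES.items():
    for word in triggers:
        TRIGGER_INDEX[word].add(name)

def triggered(text):
    """Return only the hypotheses with SOME indication of their presence;
    the high-level probability computation runs for these alone."""
    hits = set()
    for w in set(text.lower().split()):
        hits |= TRIGGER_INDEX.get(w, set())
    return hits
```

Text that mentions no trigger word activates nothing at all, so the cost
scales with what the text says rather than with the size of the rule base.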

VB/Jet SQL is my favorite AI language, because it is SO easy to describe
how to search or aggregate according to any rules you can imagine. However,
at just about the point where it gets interesting, it slows WAY down.

Steve
===========
On Mon, Apr 1, 2013 at 9:13 AM, Piaget Modeler <[email protected]> wrote:

>
> Hello Michael,
>
> Thank you for your email, your paper, and the discussion between
> Jim, Steven, and Boris.  It'll take me a few days to look at your
> paper, but before there are too many more contributions to the
> ongoing discussion let me respond to some items:
>
> 1.  Many teachers have recorded a classroom presentation and
>     transcribed the recording, only to be quite surprised at what
>     they actually said...
>
> This is very true of spoken language and requires what's called
> robustness.  In DBS it is supplied by the lowest level of pattern
> matching, which correlates the core values in the spoken text to
> corresponding contents in the database.  The amount of content
> coactivated in this way is reduced by inferencing.  (see Sect. 11
> in the paper, FoCL 6.1.2, CLaTR Sect. 5.4)
>
> 2. For all but AGI (that can't work for decades with any presently
>    known approach because of a lack of processor power) and automatic
>    language translation (that has a large interest in preserving the
>    speaker/writer's frame of mind), there seems to be little real-world
>    application for agent-oriented approaches.
>
> It seems to me that a computational theory of any kind should take
> care to be of low mathematical complexity (linear or at worst polynomial).
> As Garey and Johnson showed in 1979 (FoCL 8.2.2), an algorithm may be
> decidable, but if it is exponential it may take longer than the age
> of the universe, currently estimated at 13.77 billion years.  So that
> wouldn't be helped by faster machines.
>
> As for applications of DBS, please see Sect. 13 in the paper.
>
> 3. Summarizing from what I read from the discussion between Jim,
>    Steve, and Boris, it seems an open question whether computers
>    can *understand* natural language and engage in meaningful
>    dialog.
>
> DBS takes the view that full understanding by a computer requires
> an agent with a body in the real world, with interface for recognition
> and action.  It can use the elementary recognition (e.g., red) and
> action procedures (e.g., take one step forward) as its basic concepts
> for building content, and reuse the associated types as the meaning
> of natural language content words.
>
> It is possible to move from such a talking robot to virtual agents
> which are essentially restricted to the keyboard and the screen of
> a standard computer.  However, as a consequence the virtual agents
> lose the procedures for autonomous recognition and action, and are
> thus reduced to core value *place holders* which are understood by
> the human users, but not by the machine.
>
> There are many applications for which virtual agents are sufficient.
> Also, they may make do with only the hear mode, leaving the think
> and the speak mode aside.  A general theory of how natural language
> works is nevertheless useful for such applications because it provides
> a framework which allows different applications to be made compatible
> with each other.  Also, the framework may supply applications with
> off-the-shelf components like automatic word form recognition,
> syntactic-semantic parsers, etc., in different languages, resulting
> in further standardization and interoperability.
>
> Happy Easter to you all!
>
> Roland
>
>
> ------------------------------
> Date: Mon, 1 Apr 2013 02:35:44 -0700
>
> Subject: Re: [agi] Steve's placement/payload theory of language
> From: [email protected]
> To: [email protected]
>
>
> Anastasios,
>
> On Sun, Mar 31, 2013 at 6:47 PM, Anastasios Tsiolakidis <
> [email protected]> wrote:
>
> On Sun, Mar 31, 2013 at 10:55 PM, Steve Richfield <
> [email protected]> wrote:
> Everyone in AGI seems to want to start at the front end (parsing) without
> knowing where they are going.
>
>
> My point through the discussion you quoted from is that most people expect
> things from NL "understanding" that are completely unachievable. Sure, you
> can tease out a LOT of the sort of information you discuss below, but most
> of it would come with Bayesian probabilities that aren't much better than
> 50%, and it wasn't at all obvious what to do with such soft data.
>
>
> It is difficult, for me at least, to follow these threads and make up my
> mind if you agree or disagree with each other, if you made up your own
> minds at least etc.
>
>
> We have discussed a LOT of details, but I sense general agreement.
>
>
> But Steve seems to include again and again some inaccuracies.
> Specifically, I am not ready to count even a single failure of NLP or
> AGI-NLP
>
>
> I have avoided naming names, but the literature is FULL of NL parsing and
> "understanding" projects, many of which got to the point of demonstrating
> interesting things, but then they faded away, instead of being populated
> with rules and turned into products. After talking with some of these
> people, and then running into my own brick wall in DrEliza.com, I decided
> to find a better way.
>
>
> since the systems I am familiar with have tried everything except the most
> obvious (and difficult): to model agents with a mix of intricate biased and
> unbiased world models and intentions. Language without a minimum of two
> mental worlds and one "objective" world is nothing but mad ramblings.
>
>
> Perhaps, but does it make sense to parallel this process to tease out this
> information? The obvious answer is "yes", but there are a LOT of problems
> doing this in real time.
>
> Similarly, several of the AGI builders of the day, myself included,
> started away from parsing and closer to either the mental worlds and/or the
> objective one(s), and Ben for example is not in a hurry to focus on the
> front-end. Shame on us I'd say, since after decades of publications on
> summarization, disambiguation etc. it was a 17-year-old who cashed in his
> summarization service. As Steven mentioned before, the world could be a
> different place if a few of us here had multimillion dollar liquidity.
>
>
> Yeah, either you guys will start converting your IP to cash, or forever
> remain closet AGI-seekers. AGI is WAY too big for any one person to ever
> build. It would be a challenge for one person just to build and maintain
> the parsing and disambiguation rules for everyday English, let alone all of
> the OTHER things you would have to do to build an AGI. Without cash, you
> will forever be wage slaves, while others build AGIs or whatever with your
> efforts.
>
> Then again, Yahoo slapped us all in the face by withdrawing Summly,
> presumably suggesting we are a bunch of losers and can neither improve upon
> nor match Summly's achievements in reasonable time.
>
>
> Is Summly's algorithm described somewhere?
>
> Note a quirk of law: It is conceivable that Summly had adopted my
> algorithm but kept it proprietary. As such, Yahoo would have NO claim on
> the technology, and their work would NOT count as prior art. It happens all
> the time - people validly patent things that it turns out someone else has
> already developed. These patents are fully enforceable.
>
> These questions will soon be answered for my invention, because my
> application has been "made special" (fast tracked).
>
>
>  Or can we?
>
>
> Again, the challenge with AGI is a lack of anything resembling a spec. It
> is hard to design something to perform an undefined function.
>
> However, my invention was NOT what to do, but how to do such things
> faster. The combinatorial explosion from failed tests hangs over the head
> of all NL "understanding" efforts. From what I can see, my method is the
> ONLY presently known way of prospectively running fast enough, once the
> rules/tables/DB are populated with all the information needed to process
> everyday English (or other natural language).
>
> Steve
>



-- 
Full employment can be had with the stroke of a pen. Simply institute a six
hour workday. That will easily create enough new jobs to bring back full
employment.



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com