Anastasios,

On Sun, Mar 31, 2013 at 6:47 PM, Anastasios Tsiolakidis <[email protected]> wrote:

> On Sun, Mar 31, 2013 at 10:55 PM, Steve Richfield <[email protected]> wrote:
> Everyone in AGI seems to want to start at the front end (parsing) without
> knowing where they are going.
>

My point throughout the discussion you quoted is that most people expect
things from NL "understanding" that are simply unachievable. Sure, you can
tease out a LOT of the sort of information you discuss below, but most of
it comes with Bayesian probabilities that aren't much better than 50%, and
it is not at all obvious what to do with such soft data.

>
> It is difficult, for me at least, to follow these threads and make up my
> mind if you agree or disagree with each other, if you made up your own
> minds at least etc.
>

We have discussed a LOT of details, but I sense general agreement.


> But Steve seems to include again and again some inaccuracies.
> Specifically, I am not ready to count even a single failure of NLP or
> AGI-NLP
>

I have avoided naming names, but the literature is FULL of NL parsing and
"understanding" projects, many of which got to the point of demonstrating
interesting things but then faded away instead of being populated with
rules and turned into products. After talking with some of these people,
and then running into my own brick wall with DrEliza.com, I decided to
look for a better way.


> since the systems I am familiar with have tried everything except the most
> obvious (and difficult): to model agents with a mix of intricate biased and
> unbiased world models and intentions. Language without a minimum of two
> mental worlds and one "objective" world is nothing but mad ramblings.
>

Perhaps, but does it make sense to run a parallel process to tease out this
information? The obvious answer is "yes", but there are a LOT of problems
with doing this in real time.

> Similarly, several of the AGI builders of the day, myself included, started
> away from parsing and closer to either the mental worlds and/or the
> objective one(s), and Ben for example is not in a hurry to focus on the
> front-end. Shame on us I'd say, since after decades of publications on
> summarization, disambiguation etc it was a 17 year old who cashed in his
> summarization service. As Steven mentioned before, the world could be a
> different place if a few of us here had multimillion dollar liquidity.
>

Yeah, either you guys will start converting your IP to cash, or forever
remain closet AGI-seekers. AGI is WAY too big for any one person to ever
build. It would be a challenge for one person just to build and maintain
the parsing and disambiguation rules for everyday English, let alone all of
the OTHER things you would have to do to build an AGI. Without cash, you
will forever be wage slaves, while others build AGIs or whatever with your
efforts.

> Then again, Yahoo slapped us all in the face by withdrawing Summly,
> presumably suggesting we are a bunch of losers and can neither improve upon
> nor match Summly's achievements in reasonable time.
>

Is Summly's algorithm described somewhere?

Note a quirk of law: it is conceivable that Summly had adopted my algorithm
but kept it proprietary. If so, Yahoo would have NO claim on the
technology, and their work would NOT count as prior art. It happens all the
time: people validly patent things that, it turns out, someone else has
already developed, and those patents are fully enforceable.

These questions will soon be answered for my invention, because my
application has been "made special" (fast tracked).


> Or can we?
>

Again, the challenge with AGI is a lack of anything resembling a spec. It
is hard to design something to perform an undefined function.

However, my invention was NOT what to do, but how to do such things faster.
The combinatorial explosion from failed tests hangs over the head of every
NL "understanding" effort. From what I can see, my method is the ONLY
presently known way of prospectively running fast enough once the
rules/tables/DB are populated with all the information needed to process
everyday English (or another natural language).
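As a hypothetical illustration of that combinatorial explosion (this is NOT the patented method, just a standard counting argument): the number of distinct binary parse trees over an n-word sentence grows as the Catalan numbers, so any approach that exhaustively tests alternative parses blows up quickly.

```python
from math import comb

def parse_tree_count(words):
    """Catalan number C(words-1): the number of distinct binary
    parse trees over a sentence of `words` words."""
    n = words - 1
    return comb(2 * n, n) // (n + 1)

for words in (5, 10, 20):
    print(words, parse_tree_count(words))
# 5  -> 14
# 10 -> 4862
# 20 -> 1767263190
```

At 20 words there are already over a billion tree shapes before any disambiguation rules are even applied, which is why pruning failed tests early (rather than after the fact) dominates the running time.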

Steve



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
