On Fri, Nov 02, 2007 at 11:27:08AM +0300, Vladimir Nesov wrote:
> Linas,
> 
> Yes, you probably can code all the patterns you need. But it's only
> the tip of the iceberg: the problem is that for those 1M rules there
> are also thousands that are being constantly generated, assessed and
> discarded. Knowledge formation happens all the time and adapts those
> 1M rules to a gazillion real-world situations. You can consider those
> additional rules 'inference', but then again, if you can do your
> inference that well, you can do without the 1M hand-coded rules,
> allowing the system to learn them from the ground up. If your
> inference is not good enough, it's not clear how many rules you'd
> need to code in manually; it may be 10^6 or 10^12, or 10^30, because
> you'd also need to code _potential_ rules which are not normally
> stored in the human brain, but generated on the fly.

Yes, agreed. Right now, I'm looking at all of the code as disposable
scaffolding, as something that might allow enough interaction to make
human-like conversation bearable.  That scaffolding should enable
some "real" work.

My current impression is that opencyc's 10^6 assertions make it vaguely
comparable to a 1st grader, at least conversationally... it can make
simple deductions, write short factual essays, and learn new things,
but it goes astray easily.

It does not yet learn new sentence types, and can't yet guess at new
parses.  It certainly doesn't have spunk or initiative!

Inference is tricky. Even simple things use alarmingly large amounts
of CPU time.
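To make the cost concrete: even the most naive forward-chaining loop
re-scans every rule against every known fact on each pass, so the work
per iteration grows roughly as (number of rules) x (number of facts).
A minimal sketch, in an invented toy rule format (not opencyc's or
lillybot's actual representation):

```python
def forward_chain(facts, rules):
    """Naive forward chaining.

    rules: list of (premises, conclusion), premises a set of facts.
    Repeatedly scans ALL rules until no new fact is derived -- this
    full re-scan per pass is what makes it so expensive at 10^6 rules.
    """
    facts = set(facts)
    passes = 0
    changed = True
    while changed:
        changed = False
        passes += 1
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts, passes

# Toy chain a -> b -> c -> d: the deduction itself is trivial,
# but every pass still touches every rule.
rules = [({"a"}, "b"), ({"b"}, "c"), ({"c"}, "d")]
facts, passes = forward_chain({"a"}, rules)
```

Real engines use indexing (e.g. Rete-style networks) to avoid the
re-scan, but the basic blowup is why "simple things" still cost so much.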

> I plan to support recent context through a combination of stable
> activation patterns (which is analogous to constant reciting of a
> certain phrase, only on a lower level) and temporary induction
> (roughly, co-occurrence of concepts in the near past leads to them
> activating each other in the present, and similarly there are
> temporary concepts being formed all the time, of which only those
> which get repeatedly used in their short lifetime are retained for
> longer and longer).

Yes, of course.  Easy to say.  lillybot remembers recent assertions,
and can reason from that.  However, I'm currently hard-coding all
reasoning in a case-by-case, ad-hoc manner.  I haven't done enough
of these yet to see what the general pattern might be.
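For what it's worth, the temporary-induction scheme you describe
(co-occurring concepts activate each other; short-lived links survive
only if repeatedly used) could be sketched roughly like this. The class
name, decay constants and thresholds are all my own invention, not
anyone's actual code:

```python
from collections import defaultdict
from itertools import combinations

class RecentContext:
    """Toy temporary induction: concepts that co-occur recently get a
    short-lived pairwise link; links that keep firing are reinforced,
    while unused links decay and are eventually dropped."""

    def __init__(self, decay=0.5, boost=1.0, floor=0.1):
        self.strength = defaultdict(float)  # (a, b) -> link strength
        self.decay = decay    # multiplicative decay per time step
        self.boost = boost    # reinforcement on each co-occurrence
        self.floor = floor    # links weaker than this are forgotten

    def observe(self, concepts):
        # One time step: decay every existing link, dropping the weak.
        for pair in list(self.strength):
            self.strength[pair] *= self.decay
            if self.strength[pair] < self.floor:
                del self.strength[pair]
        # Co-occurrence creates or reinforces pairwise links.
        for a, b in combinations(sorted(set(concepts)), 2):
            self.strength[(a, b)] += self.boost

    def activates(self, concept):
        # Concepts currently linked to 'concept', strongest first.
        hits = [(b if a == concept else a, s)
                for (a, b), s in self.strength.items()
                if concept in (a, b)]
        return [c for c, s in sorted(hits, key=lambda x: -x[1])]

ctx = RecentContext()
ctx.observe(["cat", "mat"])
ctx.observe(["cat", "mat"])   # repeated use: link reinforced
ctx.observe(["dog"])          # unrelated step: old link decays
print(ctx.activates("cat"))   # "mat" is still active
```

Even this toy version shows the two regimes: a link that is only ever
formed once falls below the floor after a few steps, while a repeatedly
exercised one stays above it and keeps priming its partner.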

--linas

-----
This list is sponsored by AGIRI: http://www.agiri.org/email