YKY said:

1. Probabilistic inference cannot be "grafted" onto crisp logic easily.  The 
changes may be so great that much of the original work will be rendered useless.

Agreed.  However, I hope that by the time probabilistic inference is taught to
Texai by mentors, it will be easy to replace the obsolete skills with correct
ones.


2.  You think we can do program synthesis with crisp logic only?  This has 
profound implications if true...

All of the work to date on program generation, macro processing, application
configuration via parameters, compilation, assembly, and program optimization
has used crisp knowledge representation (i.e. non-probabilistic data
structures).  Dynamic, feedback-based optimizing compilers, such as the Java
HotSpot VM, do keep track of program-path statistics to decide, for example,
when to inline methods.  But on the whole, the traditional program development
life cycle is free of probabilistic inference.

I have a hypothesis that program design (to satisfy requirements), and
engineering design in general, can be performed using crisp knowledge
representation - with the proviso that I will use cognitively plausible
spreading activation instead of, or as a cache for, time-consuming deductive
backchaining.  My current work will explore this hypothesis by composing
simple programs that build skills from more primitive skills.  I am adapting
Gerhard Wickler's Capability Description Language (CDL) to match capabilities
(e.g. program composition capabilities) with tasks (e.g. clear a StringBuilder
object).  CDL conveniently uses a crisp FOL knowledge representation.  Here is
a Texai behavior language file that contains capability descriptions for
primitive Java compositions.  Each of these primitive capabilities is
implemented by a Java object that can be persisted in the Texai KB as RDF
statements.
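To make the matching idea concrete, here is a hypothetical sketch of crisp
capability-to-task matching in the spirit of CDL; the class names, predicates,
and the toy type hierarchy are my own illustration, not the actual CDL or
Texai vocabulary:

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// A hypothetical sketch of crisp capability-to-task matching in the
// spirit of Wickler's CDL.  A capability matches a task when its action
// is identical and its object type subsumes the task's object type --
// a purely crisp (yes/no) decision, with no probabilities.
public final class CapabilityMatcher {

    /** A capability: the action it performs, the object type it acts on,
     *  and the Java object that implements it. */
    record Capability(String action, String objectType, String implementor) {}

    /** A tiny hand-coded type hierarchy standing in for a real taxonomy:
     *  a capability declared for a supertype also covers its subtypes. */
    static final Map<String, String> SUPERTYPE = Map.of(
            "StringBuilder", "AbstractStringBuilder",
            "StringBuffer", "AbstractStringBuilder");

    /** Crisp subsumption test: exact type match, or direct supertype. */
    static boolean subsumes(String capabilityType, String taskType) {
        if (capabilityType.equals(taskType)) return true;
        String parent = SUPERTYPE.get(taskType);
        return parent != null && parent.equals(capabilityType);
    }

    /** Matches a task (e.g. "clear a StringBuilder") against a list of
     *  capability descriptions by action equality plus subsumption. */
    static Optional<Capability> match(String action, String objectType,
                                      List<Capability> capabilities) {
        return capabilities.stream()
                .filter(c -> c.action().equals(action)
                        && subsumes(c.objectType(), objectType))
                .findFirst();
    }

    public static void main(String[] args) {
        List<Capability> caps = List.of(
                new Capability("clear", "AbstractStringBuilder", "ClearComposer"),
                new Capability("append", "StringBuilder", "AppendComposer"));
        System.out.println(match("clear", "StringBuilder", caps).orElseThrow());
    }
}
```

The subsumption test here is the crisp FOL inference I referred to: either
the capability's type covers the task's type or it does not, so the matcher
never needs a degree of belief.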

Like you, I find the profound implications of automatic programming
fascinating.  I can only hope that this fascination has guided me down the
right path to AGI rather than into a dead end.  I've written a brief blog post
on this and related AI-hard problems.

Cheers.
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



----- Original Message ----
From: YKY (Yan King Yin) <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, June 3, 2008 12:20:19 PM
Subject: Re: [agi] OpenCog's logic compared to FOL?


On 6/3/08, Stephen Reed <[EMAIL PROTECTED]> wrote:
 
I believe that the crisp (i.e. certain or very near certain) KR for these 
domains will facilitate the use of FOL inference (e.g. subsumption) when I need 
it to supplement the current Texai spreading activation techniques for word 
sense disambiguation and relevance reasoning.    

I expect that OpenCog will focus on domains that require probabilistic 
reasoning, e.g. pattern recognition, which I am postponing until Texai is far 
enough along that expert mentors can teach it the skills for probabilistic 
reasoning.
 
 
Your approach is sensible, indeed similar to mine -- I'm also experimenting 
with crisp logic only.  But there are 2 problems:
 
1.  Probabilistic inference cannot be "grafted" onto crisp logic easily.  The 
changes may be so great that much of the original work will be rendered useless.
 
2.  You think we can do program synthesis with crisp logic only?  This has 
profound implications if true...
YKY 

