YKY said:

How about these scenarios:
 
1.  "If a task is to be repeated 'many' times, use a loop.  If only 'a few' 
times, write it out directly."  -- this requires fuzziness
 
2.  "The gain of using algorithm X on this problem is likely to be small."  -- 
requires probability
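For illustration, scenario 1 might be sketched as a fuzzy membership function over the repetition count. This is a hypothetical sketch: the threshold points (3 and 10) and the 0.5 decision cutoff are made-up values, not anything from Texai or a fuzzy logic library.

```java
// Hypothetical sketch: a fuzzy membership function for "many repetitions".
// The breakpoints (3 and 10) and the 0.5 cutoff are illustrative assumptions.
public class FuzzyRepetition {

    // Degree to which a repetition count qualifies as "many":
    // 0 at 3 or fewer, rising linearly to 1 at 10 or more.
    static double many(int repetitions) {
        if (repetitions <= 3) return 0.0;
        if (repetitions >= 10) return 1.0;
        return (repetitions - 3) / 7.0;
    }

    // Crisp decision derived from the fuzzy degree: loop when "many" dominates.
    static boolean shouldUseLoop(int repetitions) {
        return many(repetitions) > 0.5;
    }

    public static void main(String[] args) {
        System.out.println(many(2));            // fully "a few" -> write it out
        System.out.println(many(20));           // fully "many"  -> use a loop
        System.out.println(shouldUseLoop(8));   // borderline count, resolved crisply
    }
}
```

Scenario 2 would instead attach a probability (or a fuzzy degree such as "likely small") to the expected gain, rather than to a count.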


Agreed.  When Texai gets to this point, I would incorporate an open-source 
fuzzy logic library such as jFuzzyLogic. I believe I can interface the Texai KB 
to a fuzzy logic library without too much difficulty.
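Such an interface might look roughly like the following self-contained sketch, which fuzzifies a crisp KB value on demand. The predicate name, the membership shape, and the max-gain parameter are all illustrative assumptions; a real bridge would delegate the fuzzification to the library (e.g. jFuzzyLogic) rather than compute it inline.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of bridging a crisp KB to fuzzy evaluation.
// Predicate names and membership parameters are illustrative, not Texai identifiers.
public class FuzzyBridge {

    // Stand-in for crisp KB facts: predicate -> numeric value.
    private final Map<String, Double> kbFacts = new HashMap<>();

    void assertFact(String predicate, double value) {
        kbFacts.put(predicate, value);
    }

    // Piecewise-linear membership for "the gain is small":
    // degree 1 at zero gain, falling linearly to 0 at maxGain.
    static double smallGain(double gain, double maxGain) {
        if (gain <= 0.0) return 1.0;
        if (gain >= maxGain) return 0.0;
        return 1.0 - gain / maxGain;
    }

    // Fuzzify a crisp KB value on demand; a real system would hand the
    // value to a fuzzy logic library here instead of computing inline.
    double degreeSmallGain(String predicate) {
        return smallGain(kbFacts.getOrDefault(predicate, 0.0), 10.0);
    }

    public static void main(String[] args) {
        FuzzyBridge bridge = new FuzzyBridge();
        bridge.assertFact("expectedGainOfAlgorithmX", 2.0);
        System.out.println(bridge.degreeSmallGain("expectedGainOfAlgorithmX")); // 0.8
    }
}
```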


Maybe you mean spreading activation is used to locate candidate facts / rules, 
over which actual deductions are attempted?  That sounds very promising.  One 
question is how to learn the association between nodes.


To be clear, I would do the opposite.  Deductive backchaining inference could 
be performed offline to cache conclusions for common inference problems.  The 
cache is implemented via spreading activation links between the antecedent 
terms of the rules and the consequent terms of the conclusions.  Humans do not 
perform modus ponens deduction from first principles in commonsense problem 
solving.  I believe that spreading activation can be employed to perform 
machine problem solving (e.g. executing a learned procedure) in a cognitively 
plausible fashion, without real-time theorem proving.
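A minimal sketch of the caching idea: links recorded from offline deduction are traversed by spreading activation at problem-solving time, with no theorem proving in the loop. The node names, decay factor, and activation threshold below are illustrative assumptions, not Texai design values.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: spreading activation over links that cache
// offline-deduced antecedent -> consequent connections.
// Node names, decay factor, and cutoff are illustrative assumptions.
public class SpreadingActivationCache {

    // Cached deduction links: each antecedent term points to consequent terms.
    private final Map<String, List<String>> links = new HashMap<>();
    private final Map<String, Double> activation = new HashMap<>();

    // Record a link produced by offline deductive backchaining.
    void cacheLink(String antecedent, String consequent) {
        links.computeIfAbsent(antecedent, k -> new ArrayList<>()).add(consequent);
    }

    // Spread activation from a source term, attenuating by `decay` per hop,
    // instead of re-deriving the conclusion by real-time backchaining.
    void spread(String source, double amount, double decay, int maxHops) {
        activation.merge(source, amount, Double::sum);
        if (maxHops == 0 || amount < 0.01) return;
        for (String next : links.getOrDefault(source, Collections.emptyList())) {
            spread(next, amount * decay, decay, maxHops - 1);
        }
    }

    double activationOf(String term) {
        return activation.getOrDefault(term, 0.0);
    }

    public static void main(String[] args) {
        SpreadingActivationCache net = new SpreadingActivationCache();
        net.cacheLink("clearStringBuilder", "setLengthZero");
        net.cacheLink("setLengthZero", "invokeSetLength");
        net.spread("clearStringBuilder", 1.0, 0.5, 3);
        // Activation reaches the cached conclusion two hops away.
        System.out.println(net.activationOf("invokeSetLength")); // 0.25
    }
}
```

The most highly activated terms would then serve directly as the retrieved conclusions, or (as YKY suggests) as a small candidate set over which deduction is attempted.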

Cheers.
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



----- Original Message ----
From: YKY (Yan King Yin) <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, June 3, 2008 5:29:07 PM
Subject: Re: [agi] OpenCog's logic compared to FOL?


On 6/4/08, Stephen Reed <[EMAIL PROTECTED]> wrote:
 
All of the work to date on program generation, macro processing, application 
configuration via parameters, compilation, assembly, and program optimization 
has used crisp knowledge representation (i.e. non-probabilistic data 
structures).  Dynamic, feedback-based optimizing compilers, such as the Java 
HotSpot VM, do keep track of program path statistics in order to decide, for 
example, when to inline methods.  But on the whole, the traditional program 
development life cycle is free of probabilistic inference.
 
How about these scenarios:
 
1.  "If a task is to be repeated 'many' times, use a loop.  If only 'a few' 
times, write it out directly."  -- this requires fuzziness
 
2.  "The gain of using algorithm X on this problem is likely to be small."  -- 
requires probability
 
I have a hypothesis that program design (to satisfy requirements), and 
engineering design in general, can be performed using a crisp knowledge 
representation - with the proviso that I will use cognitively plausible 
spreading activation instead of, or to cache, time-consuming deductive 
backchaining.  My current work will explore this hypothesis with regard to 
composing simple programs that assemble skills from more primitive skills.  I 
am adapting Gerhard Wickler's Capability Description Language (CDL) to match 
capabilities (e.g. program composition capabilities) with tasks (e.g. clear a 
StringBuilder object).  CDL conveniently uses a crisp FOL knowledge 
representation.  Here is a Texai behavior language file that contains 
capability descriptions for primitive Java compositions.  Each of these 
primitive capabilities is implemented by a Java object that can be persisted in 
the Texai KB as RDF statements.
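The crisp capability-to-task matching described above might be sketched as follows. This is a toy illustration in the spirit of CDL, not actual CDL or Texai behavior-language syntax; the capability record and the action/object-type fields are assumptions.

```java
import java.util.Set;

// Hypothetical sketch of crisp capability matching in the spirit of
// Wickler's Capability Description Language. The record fields and the
// example capability are illustrative, not actual CDL or Texai syntax.
public class CapabilityMatcher {

    // A capability: an action name plus the object types it can handle.
    record Capability(String action, Set<String> objectTypes) {}

    // Crisp match: the capability's action equals the task action and it
    // covers the task's object type. No probabilities or degrees involved.
    static boolean matches(Capability cap, String taskAction, String taskObjectType) {
        return cap.action().equals(taskAction)
                && cap.objectTypes().contains(taskObjectType);
    }

    public static void main(String[] args) {
        Capability clear =
                new Capability("clear", Set.of("StringBuilder", "StringBuffer"));
        System.out.println(matches(clear, "clear", "StringBuilder")); // true
        System.out.println(matches(clear, "clear", "ArrayList"));     // false
    }
}
```

In a full system the matcher would unify FOL capability descriptions against task descriptions rather than compare strings, but the crisp, yes/no character of the match is the point here.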
 
 
Maybe you mean spreading activation is used to locate candidate facts / rules, 
over which actual deductions are attempted?  That sounds very promising.  One 
question is how to learn the association between nodes.
YKY

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com