Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
One thing I don't get, YKY, is why you think you are going to take
textbook methods that have already been shown to fail, and somehow
make them work.  Can't you see that many others have tried to use
FOL and ILP already, and they've run into intractable combinatorial
explosion problems?

Some may argue that my approach isn't radical **enough** (and in spite
of my innate inclination toward radicalism, I'm trying hard in my AGI work
to be no more radical than is really needed, out of a desire to save time/
effort by reusing others' insights wherever  possible) ... but at least I'm
introducing a host of clearly novel technical ideas.

What you seem to be suggesting is just to implement material from
textbooks on a large knowledge base.

Why do you think you're gonna make it work?  Because you're gonna
build a bigger KB than Cyc has built w/ their 20 years of effort and
tens to hundreds of millions of dollars of US gov't funding???

-- Ben G

On Tue, Jun 3, 2008 at 3:46 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 Hi Ben,

 Note that I did not pick FOL as my starting point because I wanted to
 go against you, or be a troublemaker.  I chose it because that's what
 the textbooks I read were using.  There is nothing personal here.
 It's just like Chinese being my first language because I was born in
 China.  I don't speak bad English just to sound different.

 I think the differences in our approaches are equally superficial.  I
 don't think there is a compelling reason why your formalism is
 superior (or inferior, for that matter).

 You have domain-specific heuristics;  I'm planning to have
 domain-specific heuristics too.

 The question really boils down to whether we should collaborate or
 not.  And if we want meaningful collaboration, everyone must exert a
 little effort to make it happen.  It cannot be one-way.

 YKY






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
Also, YKY, I can't help but note that your current approach seems
extremely similar to Texai (which seems quite similar to Cyc to me),
more so than to OpenCog Prime (my proposal for a Novamente-like system
built on OpenCog, not yet fully documented but I'm actively working on
the docs now).

I wonder why you don't join Stephen Reed on the texai project?  Is it
because you don't like the open-source nature of his project?

ben

On Tue, Jun 3, 2008 at 3:58 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 One thing I don't get, YKY, is why you think you are going to take
 textbook methods that have already been shown to fail, and somehow
 make them work.  Can't you see that many others have tried to use
 FOL and ILP already, and they've run into intractable combinatorial
 explosion problems?

 Some may argue that my approach isn't radical **enough** (and in spite
 of my innate inclination toward radicalism, I'm trying hard in my AGI work
 to be no more radical than is really needed, out of a desire to save time/
 effort by reusing others' insights wherever  possible) ... but at least I'm
 introducing a host of clearly novel technical ideas.

 What you seem to be suggesting is just to implement material from
 textbooks on a large knowledge base.

 Why do you think you're gonna make it work?  Because you're gonna
 build a bigger KB than Cyc has built w/ their 20 years of effort and
 tens to hundreds of millions of dollars of US gov't funding???

 -- Ben G

 On Tue, Jun 3, 2008 at 3:46 PM, YKY (Yan King Yin)
 [EMAIL PROTECTED] wrote:
 Hi Ben,

 Note that I did not pick FOL as my starting point because I wanted to
 go against you, or be a troublemaker.  I chose it because that's what
 the textbooks I read were using.  There is nothing personal here.
 It's just like Chinese being my first language because I was born in
 China.  I don't speak bad English just to sound different.

 I think the differences in our approaches are equally superficial.  I
 don't think there is a compelling reason why your formalism is
 superior (or inferior, for that matter).

 You have domain-specific heuristics;  I'm planning to have
 domain-specific heuristics too.

 The question really boils down to whether we should collaborate or
 not.  And if we want meaningful collaboration, everyone must exert a
 little effort to make it happen.  It cannot be one-way.

 YKY






 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 If men cease to believe that they will one day become gods then they
 will surely become worms.
 -- Henry Miller




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 Also, YKY, I can't help but note that your current approach seems
 extremely similar to Texai (which seems quite similar to Cyc to me),
 more so than to OpenCog Prime (my proposal for a Novamente-like system
 built on OpenCog, not yet fully documented but I'm actively working on
 the docs now).

 I wonder why you don't join Stephen Reed on the texai project?  Is it
 because you don't like the open-source nature of his project?

You have built an AGI enterprise (at least, on the way to it).  Often
the *people* matter more than the technology.  I *need* to collaborate
with the community in order to win.  And vice versa.  Texai is closer
to my theory but you have a bigger community.  I don't have the
resources to rebuild the infrastructure that you have, e.g. the virtual
reality embodiment, etc.

Open source is such a thorny issue.  I don't have a clear idea yet...

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
Hi Ben,

Note that I did not pick FOL as my starting point because I wanted to
go against you, or be a troublemaker.  I chose it because that's what
the textbooks I read were using.  There is nothing personal here.
It's just like Chinese being my first language because I was born in
China.  I don't speak bad English just to sound different.

I think the differences in our approaches are equally superficial.  I
don't think there is a compelling reason why your formalism is
superior (or inferior, for that matter).

You have domain-specific heuristics;  I'm planning to have
domain-specific heuristics too.

The question really boils down to whether we should collaborate or
not.  And if we want meaningful collaboration, everyone must exert a
little effort to make it happen.  It cannot be one-way.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 1) representing uncertainties in a way that leads to tractable, meaningful
 logical manipulations.  Indefinite probabilities achieve this.  I'm not saying
 they're the only way to achieve this, but I'll argue that single-number,
 Walley-interval, fuzzy, or full-pdf approaches are not adequate for various
 reasons.

First of all, the *tractability* of your algorithm depends on
heuristics that you design, which are separable from the underlying
probabilistic logic calculus.  In your mind, these 2 things may be
mixed up.

Indefinite probabilities DO NOT imply faster inference.
Domain-specific heuristics do that.

Secondly, I have no problem at all with using your indefinite
probability approach.

What you've accomplished is a laudable achievement.

Thirdly, probabilistic logics -- of *any* flavor -- should
[approximately] subsume binary logic if they are sound.  So there is
no reason why your logic is so different that it cannot be expressed
in FOL.

Fourthly, the approach that I'm more familiar with is interval
probability.  I acknowledge that you have gone further in this
direction, and that's a good thing.
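
For concreteness: an indefinite probability, as described in the PLN
literature, attaches to a statement an interval [L, U] plus a credibility
level b and a lookahead parameter k, read roughly as "with credibility b,
after k more observations the estimated probability will lie in [L, U]."
The Python sketch below is only an illustrative data structure for that
idea; the field names and example numbers are invented here, not taken
from PLN's actual code.

from dataclasses import dataclass

@dataclass
class IndefiniteTV:
    # Illustrative indefinite truth value <[L, U], b, k> (names are placeholders).
    L: float   # lower bound of the probability interval
    U: float   # upper bound of the probability interval
    b: float   # credibility level, e.g. 0.9
    k: int     # lookahead: number of hypothetical further observations

    def mean(self) -> float:
        # Midpoint summary; real indefinite-probability formulas use the whole interval.
        return 0.5 * (self.L + self.U)

# e.g. "philosophers are wise" held with moderate evidence:
print(IndefiniteTV(L=0.55, U=0.75, b=0.9, k=10).mean())   # ~0.65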

 2) using inference rules that lead to relatively high-confidence uncertainty
 propagation.  For instance term logic deduction is better for uncertain
 inference than modus ponens deduction, as detailed analysis reveals

I believe term logic is translatable to FOL -- Fred Sommers mentioned
that in his book.
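
For concreteness, a single-number sketch of the term-logic deduction being
discussed (A->B, B->C |- A->C) can be written using the standard
independence assumption and the law of total probability.  PLN itself
propagates indefinite probabilities rather than point values, so the
function below is only an illustration; its name and the example numbers
are invented here.

def deduction_strength(sAB, sBC, sB, sC):
    # P(C|A) = P(C|B)P(B|A) + P(C|~B)P(~B|A), with
    # P(C|~B) estimated as (P(C) - P(C|B)P(B)) / (1 - P(B)).
    if sB >= 1.0:
        return sBC
    sC_given_notB = max(0.0, min(1.0, (sC - sBC * sB) / (1.0 - sB)))
    return sAB * sBC + (1.0 - sAB) * sC_given_notB

# A->B = 0.8, B->C = 0.7, term probabilities P(B) = 0.4, P(C) = 0.5:
print(round(deduction_strength(0.8, 0.7, 0.4, 0.5), 3))   # 0.633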

 3) propagating uncertainties meaningfully through abstract logical
 formulae involving nested quantifiers (we do this in a special way in PLN
 using third-order probabilities; I have not seen any other conceptually
 satisfactory solution)

Again, that's well done.

But are you saying that the same cannot be achieved using FOL?

 4) most critically perhaps, using uncertain truth values within inference
 control to help pare down the combinatorial explosion

Uncertain truth values DO NOT imply faster inference.  In fact, they
slow down inference wrt binary logic.

If your inference algorithm is faster than resolution, and it's sound
(so it subsumes binary logic), then you have found a faster FOL
inference algorithm.  But that's not true;  what you're doing is
domain-specific heuristics.
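
To make the disputed point concrete: one generic way uncertain truth
values *could* feed inference control is to rank candidate inference
steps by a confidence-weighted score and prune low-scoring branches,
trading completeness for tractability.  The sketch below is a plain
best-first search written for illustration only; it is not PLN's (or
anyone's) actual control code, and all names and thresholds are invented.

import heapq

def best_first_inference(goal, premises, expand, budget=1000, min_score=0.1):
    # premises: list of (statement, strength, confidence)
    # expand(statement): returns derived (statement, strength, confidence) triples
    # Statements are expanded in order of strength * confidence; weak branches are pruned.
    frontier = [(-(s * c), stmt, s, c) for stmt, s, c in premises]
    heapq.heapify(frontier)
    steps = 0
    while frontier and steps < budget:
        neg_score, stmt, s, c = heapq.heappop(frontier)
        if stmt == goal:
            return stmt, s, c
        if -neg_score < min_score:      # prune: too weak/uncertain to be worth expanding
            continue
        for d_stmt, d_s, d_c in expand(stmt):
            heapq.heappush(frontier, (-(d_s * d_c), d_stmt, d_s, d_c))
        steps += 1
    return None

# Toy usage: derive "C" from "A" via one cached rule A => C.
rules = {"A": [("C", 0.72, 0.8)]}
print(best_first_inference("C", [("A", 0.9, 0.9)], lambda s: rules.get(s, [])))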

 How these questions are answered matters a LOT, and my colleagues
 and I spent years working on this stuff.  It's not a matter of converting
 between equivalent formalisms.

I think one can do
indefinite probability + FOL + domain-specific heuristics
just as you can do
indefinite probability + term logic + domain-specific heuristics
but it may cost an amount of effort that you're unwilling to pay.

This is a very sad situation...
YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 One thing I don't get, YKY, is why you think you are going to take
 textbook methods that have already been shown to fail, and somehow
 make them work.  Can't you see that many others have tried to use
 FOL and ILP already, and they've run into intractable combinatorial
 explosion problems?

Calm down =)

I'll use domain-specific heuristics just as you do.  There's nothing
wrong with textbooks.

 Some may argue that my approach isn't radical **enough** (and in spite
 of my innate inclination toward radicalism, I'm trying hard in my AGI work
 to be no more radical than is really needed, out of a desire to save time/
 effort by reusing others' insights wherever  possible) ... but at least I'm
 introducing a host of clearly novel technical ideas.

Yes, I acknowledge that you have novel ideas.  But do you really think
I'm so dumb that I ONLY use textbook ideas?  I try to integrate
existing methods.  My style of innovation is kind of subtle.

You have done something new, but not so new as to be in a totally
different dimension.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel

 As we have discussed a while back on the OpenCog mail list, I would like to
 see an RDF interface to some level of the OpenCog Atom Table.  I think that
 would suit both YKY and myself.  Our discussion went so far as to consider
 ways to assign URI's to appropriate atoms.

Yes, I still think that's a good idea and I'm fairly sure it will
happen this year... probably not too long after the code is considered
really ready for release...
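
In the simplest case such an interface might just serialize each atom as
a handful of RDF triples under an agreed namespace.  The sketch below is
purely illustrative: the namespace, predicate names, and atom fields are
placeholders invented here, not the URI scheme the projects would
actually adopt.

# Hypothetical example only -- all URIs and field names are placeholders.
NS = "http://example.org/opencog/atom/"

def atom_to_ntriples(atom_id, atom_type, name, strength, confidence):
    subject = f"<{NS}{atom_id}>"
    return "\n".join([
        f'{subject} <{NS}type> "{atom_type}" .',
        f'{subject} <{NS}name> "{name}" .',
        f'{subject} <{NS}strength> "{strength}" .',
        f'{subject} <{NS}confidence> "{confidence}" .',
    ])

print(atom_to_ntriples("12345", "ConceptNode", "cat", 0.8, 0.6))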

ben




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
 First of all, the *tractability* of your algorithm depends on
 heuristics that you design, which are separable from the underlying
 probabilistic logic calculus.  In your mind, these 2 things may be
 mixed up.

 Indefinite probabilities DO NOT imply faster inference.
 Domain-specific heuristics do that.

Not all heuristics for inference control are narrowly domain-specific.

Some may be generally applicable across very broad sets of domains,
say across all domains satisfying certain broad mathematical
properties, such as "similar theorems tend to have similar proofs."

So, I agree that indefinite probabilities themselves don't imply
faster inference.

However, we have some heuristics for (relatively) fast inference
control that we believe will apply across any domains satisfying
certain broad mathematical properties ... and that won't work with
traditional representations of uncertainty
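
As one hedged illustration of that kind of domain-general heuristic,
consider ranking candidate premises for a new goal by how often they
appeared in proofs of similar past goals.  Everything below (the feature
sets, the Jaccard measure, the toy memory) is invented for illustration;
it only shows the shape of a "similar theorems tend to have similar
proofs" bias, not any real system's heuristic.

def rank_premises(goal_features, proof_memory, candidate_premises):
    # proof_memory: list of (past_goal_feature_set, premises_used_in_its_proof)
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    score = {p: 0.0 for p in candidate_premises}
    for past_goal, premises_used in proof_memory:
        sim = jaccard(goal_features, past_goal)
        for p in premises_used:
            if p in score:
                score[p] += sim
    return sorted(candidate_premises, key=lambda p: -score[p])

memory = [({"group", "finite"}, ["lagrange"]),
          ({"group", "abelian"}, ["structure_thm"])]
print(rank_premises({"group", "finite", "abelian"}, memory,
                    ["lagrange", "structure_thm", "zorn"]))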


 Secondly, I have no problem at all, with using your indefinite
 probability approach.

 It's a laudable achievement what you've accomplished.

 Thirdly, probabilistic logics -- of *any* flavor -- should
 [approximately] subsume binary logic if they are sound.  So there is
 no reason why your logic is so different that it cannot be expressed
 in FOL.

Yes of course it can be expressed in FOL ... it can be expressed in
Morse Code too, but I don't see a point to it ;-)  ... it could also be realized
via a mechanical contraption made of TinkerToys ... like Danny
Hillis's

http://www.ohgizmo.com/wp-content/uploads/2006/12/tinkertoycomputer_1.jpg

;-)


 But are you saying that the same cannot be achieved using FOL?


If you attach indefinite probabilities to FOL propositions, and create
indefinite probability formulas corresponding to standard FOL rules,
you will have a subset of PLN

But you'll have a hard time applying Bayes rule to FOL propositions
without being willing to assign probabilities to terms ... and you'll
have a hard time applying it to FOL variable expressions without doing
something that equates to assigning probabilities to propositions w.
unbound variables ... and like I said, I haven't seen any other
adequate way of propagating pdf's through quantifiers than the one we
use in PLN, though Halpern's book describes a lot of inadequate ways
;-)
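
A minimal numerical illustration of that last point (using single-number
probabilities for simplicity, which is not what PLN actually uses):
inverting an inheritance-style statement with Bayes' rule requires the
marginal probabilities of the terms themselves, which plain FOL gives
you no slot for.  The numbers below are made up.

def bayes_inversion(p_B_given_A, p_A, p_B):
    # P(A|B) = P(B|A) * P(A) / P(B)
    return p_B_given_A * p_A / p_B

# e.g. P(animal|cat) = 0.99, P(cat) = 0.02, P(animal) = 0.30 over some experience base
print(round(bayes_inversion(0.99, 0.02, 0.30), 3))   # P(cat|animal) = 0.066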

 4) most critically perhaps, using uncertain truth values within inference
 control to help pare down the combinatorial explosion

 Uncertain truth values DO NOT imply faster inference.  In fact, they
 slow down inference wrt binary logic.

 If your inference algorithm is faster than resolution, and it's sound
 (so it subsumes binary logic), then you have found a faster FOL
 inference algorithm.  But that's not true;  what you're doing is
 domain-specific heuristics.

As noted above, the truth is somewhere in between.

You can find inference control heuristics that exploit general
mathematical properties of domains -- so they don't apply to ALL
domains, but nor are they specialized to any particular domain.

Evolution is like this in fact -- it's no good at optimizing random
fitness functions, but it's good at optimizing fitness functions
satisfying certain mathematical properties, regardless of the specific
domain they refer to

 I think one can do
indefinite probability + FOL + domain-specific heuristics
 just as you can do
indefinite probability + term logic + domain-specific heuristics
 but it may cost an amount of effort that you're unwilling to pay.

well we do both in PLN ... PLN is not a pure term logic...

 This is a very sad situation...

Oh ... I thought it was funny ... I suppose I'm glad I have a perverse
sense of humour ;-D

ben




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Stephen Reed
Hi Ben.
Thanks for suggesting that YKY collaborate with Texai because of our similar 
approaches to knowledge representation.  I believe that Cyc's lack of AGI 
progress is due not to their choice of FOL, but rather to Cycorp's emphasis on 
hand-crafting commonsense knowledge about things while disfavoring skill 
acquisition.

Texai will test the hypothesis that Cyc-style FOL (i.e. an RDF-compatible 
subset) can represent procedures sufficient to support a mechanism that learns 
knowledge and skills by being taught by mentors using natural language.  My 
initial bootstrap subject domain choices are:

* lexicon acquisition (e.g. mapping WordNet synsets to OpenCyc-style terms)
* grammar rule acquisition
* Java program synthesis - to support skill acquisition and execution

I believe that the crisp (i.e. certain or very near certain) KR for these 
domains will facilitate the use of FOL inference (e.g. subsumption) when I 
need it to supplement the current Texai spreading activation techniques for 
word sense disambiguation and relevance reasoning.

I expect that OpenCog will focus on domains that require probabilistic 
reasoning, e.g. pattern recognition, which I am postponing until Texai is far 
enough along that expert mentors can teach it the skills for probabilistic 
reasoning.

---

As we have discussed a while back on the OpenCog mail list, I would like to see 
an RDF interface to some level of the OpenCog Atom Table.  I think that would 
suit both YKY and myself.  Our discussion went so far as to consider ways to 
assign URIs to appropriate atoms.

 
Cheers,
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 3, 2008 1:59:54 AM
Subject: Re: [agi] OpenCog's logic compared to FOL?

Also, YKY, I can't help but note that your current approach seems
extremely similar to Texai (which seems quite similar to Cyc to me),
more so than to OpenCog Prime (my proposal for a Novamente-like system
built on OpenCog, not yet fully documented but I'm actively working on
the docs now).

I wonder why you don't join Stephen Reed on the texai project?  Is it
because you don't like the open-source nature of his project?

ben

On Tue, Jun 3, 2008 at 3:58 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 One thing I don't get, YKY, is why you think you are going to take
 textbook methods that have already been shown to fail, and somehow
 make them work.  Can't you see that many others have tried to use
 FOL and ILP already, and they've run into intractable combinatorial
 explosion problems?

 Some may argue that my approach isn't radical **enough** (and in spite
 of my innate inclination toward radicalism, I'm trying hard in my AGI work
 to be no more radical than is really needed, out of a desire to save time/
 effort by reusing others' insights wherever  possible) ... but at least I'm
 introducing a host of clearly novel technical ideas.

 What you seem to be suggesting is just to implement material from
 textbooks on a large knowledge base.

 Why do you think you're gonna make it work?  Because you're gonna
 build a bigger KB than Cyc has built w/ their 20 years of effort and
 tens to hundreds of millions of dollars of US gov't funding???

 -- Ben G

 On Tue, Jun 3, 2008 at 3:46 PM, YKY (Yan King Yin)
 [EMAIL PROTECTED] wrote:
 Hi Ben,

 Note that I did not pick FOL as my starting point because I wanted to
 go against you, or be a troublemaker.  I chose it because that's what
 the textbooks I read were using.  There is nothing personal here.
 It's just like Chinese being my first language because I was born in
 China.  I don't speak bad English just to sound different.

 I think the differences in our approaches are equally superficial.  I
 don't think there is a compelling reason why your formalism is
 superior (or inferior, for that matter).

 You have domain-specific heuristics;  I'm planning to have
 domain-specific heuristics too.

 The question really boils down to whether we should collaborate or
 not.  And if we want meaningful collaboration, everyone must exert a
 little effort to make it happen.  It cannot be one-way.

 YKY






 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 If men cease to believe that they will one day become gods then they
 will surely become worms.
 -- Henry Miller




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel

 You have done something new, but not so new as to be in a totally
 different dimension.

 YKY

I have some ideas more like that too but I've postponed trying to sell them
to others, for the moment ;-) ... it's hard enough to sell fairly basic stuff
like PLN ...

Look for some stuff on the applications of hypersets and division algebras
to endowing AGIs with free will and reflective awareness, maybe in
early 09 ...  ;)

-- Ben




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
Ben,

If we don't work out the correspondence (even approximately) between
FOL and term logic, this conversation won't be very fruitful.  I
don't even know what you're doing with PLN.  I suggest we try to work
it out here step by step.  If your approach really makes sense to me,
you will gain another helper =)   Also, this will be good for your
project's documentation.

I have some examples:

Eng:  Some philosophers are wise
TL:  +Philosopher+Wise
FOL:  philosopher(X) -> wise(X)

Eng:  Romeo loves Juliet
TL:  +-Romeo* + (Loves +-Juliet*)
FOL:  loves(romeo, juliet)

Eng:  Women often have long hair
TL:  ?
FOL:  woman(X) -> long_hair(X)

I know your term logic is slightly different from Fred Sommers'.  Can
you fill in the TL parts and also attach indefinite probabilities?

On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 If you attach indefinite probabilities to FOL propositions, and create
 indefinite probability formulas corresponding to standard FOL rules,
 you will have a subset of PLN

 But you'll have a hard time applying Bayes rule to FOL propositions
 without being willing to assign probabilities to terms ... and you'll
 have a hard time applying it to FOL variable expressions without doing
 something that equates to assigning probabilities to propositions w.
 unbound variables ... and like I said, I haven't seen any other
 adequate way of propagating pdf's through quantifiers than the one we
 use in PLN, though Halpern's book describes a lot of inadequate ways
 ;-)

Re assigning probabilities to terms...

"Term" in term logic is completely different from "term" in FOL.  I
guess terms in term logic roughly correspond to predicates or
propositions in FOL.  Terms in FOL seem to have no counterpart in term
logic...

Anyway there should be no confusion here.  Propositions are the ONLY
things that can have truth values.  This applies to term logic as well
(I just refreshed my memory of TL).  When truth values go from { 0, 1
} to [ 0, 1 ], we get single-value probabilistic logic.  All this has
a very solid and rigorous foundation, based on so-called model theory.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
Propositions are not the only things that can have truth values...

I don't have time to carry out a detailed mathematical discussion of
this right now...

We're about to (this week) finalize the PLN book draft ... I'll send
you a pre-publication PDF early next week and then you can read it and
we can argue this stuff after that ;-)

ben

On Wed, Jun 4, 2008 at 1:01 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 Ben,

 If we don't work out the correspondence (even approximately) between
 FOL and term logic, this conversation would not be very fruitful.  I
 don't even know what you're doing with PLN.  I suggest we try to work
 it out here step by step.  If your approach really makes sense to me,
 you will gain another helper =)   Also, this will be good for your
 project's documentation.

 I have some examples:

 Eng:  Some philosophers are wise
 TL:  +Philosopher+Wise
 FOL:  philosopher(X) -> wise(X)

 Eng:  Romeo loves Juliet
 TL:  +-Romeo* + (Loves +-Juliet*)
 FOL:  loves(romeo, juliet)

 Eng:  Women often have long hair
 TL:  ?
 FOL:  woman(X) -> long_hair(X)

 I know your term logic is slightly different from Fred Sommers'.  Can
 you fill in the TL parts and also attach indefinite probabilities?

 On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 If you attach indefinite probabilities to FOL propositions, and create
 indefinite probability formulas corresponding to standard FOL rules,
 you will have a subset of PLN

 But you'll have a hard time applying Bayes rule to FOL propositions
 without being willing to assign probabilities to terms ... and you'll
 have a hard time applying it to FOL variable expressions without doing
 something that equates to assigning probabilities to propositions w.
 unbound variables ... and like I said, I haven't seen any other
 adequate way of propagating pdf's through quantifiers than the one we
 use in PLN, though Halpern's book describes a lot of inadequate ways
 ;-)

 Re assigning probabilities to terms...

 "Term" in term logic is completely different from "term" in FOL.  I
 guess terms in term logic roughly correspond to predicates or
 propositions in FOL.  Terms in FOL seem to have no counterpart in term
 logic...

 Anyway there should be no confusion here.  Propositions are the ONLY
 things that can have truth values.  This applies to term logic as well
 (I just refreshed my memory of TL).  When truth values go from { 0, 1
 } to [ 0, 1 ], we get single-value probabilistic logic.  All this has
 a very solid and rigorous foundation, based on so-called model theory.

 YKY






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 Propositions are not the only things that can have truth values...

Terms in term logic can have truth values.  But such terms
correspond to propositions in FOL.  There is absolutely no confusion
here.

 I don't have time to carry out a detailed mathematical discussion of
 this right now...

 We're about to (this week) finalize the PLN book draft ... I'll send
 you a pre-publication PDF early next week and then you can read it and
 we can argue this stuff after that ;-)

Thanks a lot =)

YKY




Re : [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Bruno Frandemiche
Hello Ben,
If I can have a PDF draft, I thank you very much.
Bruno


- Original Message 
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 3, 2008 6:33:02 PM
Subject: Re: [agi] OpenCog's logic compared to FOL?

Propositions are not the only things that can have truth values...

I don't have time to carry out a detailed mathematical discussion of
this right now...

We're about to (this week) finalize the PLN book draft ... I'll send
you a pre-publication PDF early next week and then you can read it and
we can argue this stuff after that ;-)

ben

On Wed, Jun 4, 2008 at 1:01 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 Ben,

 If we don't work out the correspondence (even approximately) between
 FOL and term logic, this conversation would not be very fruitful.  I
 don't even know what you're doing with PLN.  I suggest we try to work
 it out here step by step.  If your approach really makes sense to me,
 you will gain another helper =)   Also, this will be good for your
 project's documentation.

 I have some examples:

 Eng:  Some philosophers are wise
 TL:  +Philosopher+Wise
 FOL:  philosopher(X) -> wise(X)

 Eng:  Romeo loves Juliet
 TL:  +-Romeo* + (Loves +-Juliet*)
 FOL:  loves(romeo, juliet)

 Eng:  Women often have long hair
 TL:  ?
 FOL:  woman(X) -> long_hair(X)

 I know your term logic is slightly different from Fred Sommers'.  Can
 you fill in the TL parts and also attach indefinite probabilities?

 On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 If you attach indefinite probabilities to FOL propositions, and create
 indefinite probability formulas corresponding to standard FOL rules,
 you will have a subset of PLN

 But you'll have a hard time applying Bayes rule to FOL propositions
 without being willing to assign probabilities to terms ... and you'll
 have a hard time applying it to FOL variable expressions without doing
 something that equates to assigning probabilities to propositions w.
 unbound variables ... and like I said, I haven't seen any other
 adequate way of propagating pdf's through quantifiers than the one we
 use in PLN, though Halpern's book describes a lot of inadequate ways
 ;-)

 Re assigning probabilities to terms...

 "Term" in term logic is completely different from "term" in FOL.  I
 guess terms in term logic roughly correspond to predicates or
 propositions in FOL.  Terms in FOL seem to have no counterpart in term
 logic...

 Anyway there should be no confusion here.  Propositions are the ONLY
 things that can have truth values.  This applies to term logic as well
 (I just refreshed my memory of TL).  When truth values go from { 0, 1
 } to [ 0, 1 ], we get single-value probabilistic logic.  All this has
 a very solid and rigorous foundation, based on so-called model theory.

 YKY






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller








Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Stephen Reed [EMAIL PROTECTED] wrote:


  I believe that the crisp (i.e. certain or very near certain) KR for these
 domains will facilitate the use of FOL inference (e.g. subsumption) when I
 need it to supplement the current Texai spreading activation techniques for
 word sense disambiguation and relevance reasoning.

 I expect that OpenCog will focus on domains that require probabilistic
 reasoning, e.g. pattern recognition, which I am postponing until Texai is
 far enough along that expert mentors can teach it the skills for
 probabilistic reasoning.



Your approach is sensible, indeed similar to mine -- I'm also experimenting
with crisp logic only.  But there are 2 problems:

1.  Probabilistic inference cannot be grafted onto crisp logic easily.
The changes may be so great that much of the original work will be rendered
useless.

2.  You think we can do program synthesis with crisp logic only?  This has
profound implications if true...

YKY





Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Do you have any insights on how this learning will be done?

That research area is known as ILP (inductive logic programming).
It's very powerful in the sense that almost anything (e.g., any Prolog
program) can be learned that way.  But the problem is that the
combinatorial explosion is so great that you must use heuristics and
biases.  So far no one has applied it to large-scale commonsense
learning.  Some Cyc people have experimented with it recently.
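
To put a rough number on that explosion, here is a toy count of candidate
clause bodies in a small ILP-style hypothesis language (the particular
language bias -- binary predicates over a fixed variable pool, bodies of
bounded size -- is invented purely for illustration):

from math import comb

def count_clause_bodies(n_predicates, n_variables, max_literals):
    # Each literal is a binary predicate applied to an ordered pair of variables;
    # a clause body is a set of at most max_literals distinct literals.
    literal_pool = n_predicates * n_variables ** 2
    return sum(comb(literal_pool, k) for k in range(1, max_literals + 1))

# 20 predicates, 4 variables, bodies of up to 4 literals:
print(count_clause_bodies(20, 4, 4))   # already over 4 * 10^8 candidate bodies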

  Cyc put a lot of effort into a natural language interface and failed.  What 
 approach will you use that they have not tried?  FOL requires a set of 
 transforms, e.g.

 All men are mortal -> forall X, man(X) -> mortal(X) (hard)
 Socrates is a man -> man(Socrates) (hard)
 -> mortal(Socrates) (easy)
 -> Socrates is mortal (hard).

 We have known for a long time how to solve the easy parts.  The hard parts 
 are AI-complete.  You have to solve AI before you can learn the knowledge 
 base.  Then after you build it, you won't need it.  What is the point?


We don't need 100% perfect NLP ability to learn the KB.  An NL
interface that can accept a simple subset of English will do.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Stephen Reed
YKY said:


1. Probabilistic inference cannot be grafted onto crisp logic easily.  The 
changes may be so great that much of the original work will be rendered useless.

Agreed.   However, I hope that by the time probabilistic inference is taught to 
Texai by mentors, it will be easy to supersede useless skills with correct ones.


2.  You think we can do program synthesis with crisp logic only?  This has 
profound implications if true...

All of the work to date on program generation, macro processing, application 
configuration via parameters, compilation, assembly, and program optimization 
has used crisp knowledge representation (i.e. non-probabilistic data 
structures).  Dynamic, feedback-based optimizing compilers, such as the Java 
HotSpot VM, do keep track of program path statistics in order to decide when to 
inline methods, for example.  But on the whole, the traditional program 
development life cycle is free of probabilistic inference.

I have a hypothesis that program design (to satisfy requirements), and in 
general engineering design, can be performed using crisp knowledge 
representation - with the provision that I will use cognitively-plausible 
spreading activation instead of, or to cache, time-consuming deductive 
backchaining.  My current work will explore this hypothesis with regard to 
composing simple programs that compose skills from more primitive skills.   I 
am adapting Gerhard Wickler's Capability Description Language to match 
capabilities (e.g. program composition capabilities) with tasks (e.g. clear a 
StringBuilder object).  CDL conveniently uses a crisp FOL knowledge 
representation.   Here is a Texai behavior language file that contains 
capability descriptions for primitive Java compositions.  Each of these 
primitive capabilities is implemented by a Java object that can be persisted in 
the Texai KB as RDF statements.

Like yourself, I find the profound implications of automatic programming 
fascinating.  I can only hope that this fascination has guided me down the 
right path to AGI, rather than down a dead end.  I've written a brief blog post 
on this and related AI-hard problems.

Cheers.
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 3, 2008 12:20:19 PM
Subject: Re: [agi] OpenCog's logic compared to FOL?


On 6/3/08, Stephen Reed [EMAIL PROTECTED] wrote:
 
I believe that the crisp (i.e. certain or very near certain) KR for these 
domains will facilitate the use of FOL inference (e.g. subsumption) when I need 
it to supplement the current Texai spreading activation techniques for word 
sense disambiguation and relevance reasoning.

I expect that OpenCog will focus on domains that require probabilistic 
reasoning, e.g. pattern recognition, which I am postponing until Texai is far 
enough along that expert mentors can teach it the skills for probabilistic 
reasoning.
 
 
Your approach is sensible, indeed similar to mine -- I'm also experimenting 
with crisp logic only.  But there are 2 problems:
 
1.  Probabilistic inference cannot be grafted onto crisp logic easily.  The 
changes may be so great that much of the original work will be rendered useless.
 
2.  You think we can do program synthesis with crisp logic only?  This has 
profound implications if true...
YKY 


 




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/4/08, Stephen Reed [EMAIL PROTECTED] wrote:


  All of the work to date on program generation, macro processing,
 application configuration via parameters, compilation, assembly, and program
 optimization has used crisp knowledge representation (i.e. non-probabilistic
 data structures).  Dynamic, feedback based optimizing compilers, such as the
 Java HotSpot VM, do keep track of program path statistics in order to decide
 when to inline methods for example.  But on the whole, the traditional
 program development life cycle is free of probabilistic inference.


How about these scenarios:

1.  If a task is to be repeated 'many' times, use a loop.  If only 'a few'
times, write it out directly.  -- this requires fuzziness

2.  The gain of using algorithm X on this problem is likely to be small.
-- requires probability
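
For scenario 1, a minimal sketch of the kind of fuzziness meant (the
thresholds and names are arbitrary illustrative choices, not anything
from an actual system):

def membership_many(n, low=3, high=10):
    # Toy fuzzy membership for "many repetitions": 0 below low, 1 above high, linear between.
    if n <= low:
        return 0.0
    if n >= high:
        return 1.0
    return (n - low) / (high - low)

def choose_code_shape(n_repetitions):
    return "loop" if membership_many(n_repetitions) >= 0.5 else "write it out directly"

print(choose_code_shape(2), choose_code_shape(8), choose_code_shape(50))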


 I have a hypothesis that program design (to satisfy requirements), and in
 general engineering design, can be performed using crisp knowledge
 representation - with the provision that I will use cognitively-plausible
 spreading activation instead of, or to cache, time-consuming deductive
 backchaining.  My current work will explore this hypothesis with regard to
 composing simple programs that compose skills from more primitive skills.
 I am adapting Gerhard Wickler's Capability Description Language
 (http://www.aiai.ed.ac.uk/oplan/cdl/index.html) to match capabilities
 (e.g. program composition capabilities) with tasks
 (e.g. clear a StringBuilder object).  CDL conveniently uses a crisp FOL
 knowledge representation.  Here
 (http://texai.svn.sourceforge.net/viewvc/texai/BehaviorLanguage/data/method-definitions.bl?view=markup)
 is a Texai behavior language file that contains capability descriptions for
 primitive Java compositions.  Each of these primitive capabilities is
 implemented by a Java object that can be persisted in the Texai KB as RDF
 statements.



Maybe you mean spreading activation is used to locate candidate facts /
rules, over which actual deductions are attempted?  That sounds very
promising.  One question is how to learn the association between nodes.

YKY





Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Stephen Reed
YKY said:

How about these scenarios:
 
1.  If a task is to be repeated 'many' times, use a loop.  If only 'a few' 
times, write it out directly.  -- this requires fuzziness
 
2.  The gain of using algorithm X on this problem is likely to be small.  -- 
requires probability


Agreed.  When Texai gets to this point I would incorporate an open source fuzzy 
logic library such as JFuzzyLogic. I believe I can interface the Texai KB to a 
fuzzy logic library without too much difficulty.


Maybe you mean spreading activation is used to locate candidate facts / rules, 
over which actual deductions are attempted?  That sounds very promising.  One 
question is how to learn the association between nodes.


To be clear, I would do the opposite.  Offline, backchaining deductive 
inference could be performed to cache conclusions for common inference 
problems.  The cache is implemented via spreading activation links between the 
antecedent terms of the rules and the consequent terms of the conclusions.  
Humans do not perform modus ponens deduction from first principles for 
commonsense problem solving.  I believe that spreading activation can be 
employed to perform machine problem solving (e.g. executing a learned 
procedure) in a cognitively plausible fashion without real-time theorem proving.
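
A hedged sketch of that idea in code: nodes stand for terms, edges are
cached antecedent-to-consequent links produced by offline backchaining,
and spreading activation (rather than live theorem proving) surfaces
likely conclusions.  The graph format, parameters, and example links
below are invented for illustration and are not Texai's actual
implementation.

def spread_activation(graph, seeds, decay=0.5, steps=3, threshold=0.05):
    # graph: dict mapping node -> list of (neighbor, link_weight)
    # seeds: dict mapping node -> initial activation
    activation = dict(seeds)
    for _ in range(steps):
        updated = dict(activation)
        for node, act in activation.items():
            if act < threshold:
                continue
            for neighbor, weight in graph.get(node, []):
                updated[neighbor] = updated.get(neighbor, 0.0) + act * weight * decay
        activation = updated
    return sorted(activation.items(), key=lambda kv: -kv[1])

cached_links = {
    "female(X) & child(X)": [("daughter(X)", 0.9)],
    "daughter(X)": [("offspring(X)", 0.8)],
}
print(spread_activation(cached_links, {"female(X) & child(X)": 1.0}))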

Cheers.
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 3, 2008 5:29:07 PM
Subject: Re: [agi] OpenCog's logic compared to FOL?


On 6/4/08, Stephen Reed [EMAIL PROTECTED] wrote:
 
All of the work to date on program generation, macro processing, application 
configuration via parameters, compilation, assembly, and program optimization 
has used crisp knowledge representation (i.e. non-probabilistic data 
structures).  Dynamic, feedback based optimizing compilers, such as the Java 
HotSpot VM, do keep track of program path statistics in order to decide when to 
inline methods for example.  But on the whole, the traditional program 
development life cycle is free of probabilistic inference.
 
How about these scenarios:
 
1.  If a task is to be repeated 'many' times, use a loop.  If only 'a few' 
times, write it out directly.  -- this requires fuzziness
 
2.  The gain of using algorithm X on this problem is likely to be small.  -- 
requires probability
 
I have a hypothesis that program design (to satisfy requirements), and in 
general engineering design, can be performed using crisp knowledge 
representation - with the provision that I will use cognitively-plausible 
spreading activation instead of, or to cache, time-consuming deductive 
backchaining.  My current work will explore this hypothesis with regard to 
composing simple programs that compose skills from more primitive skills.   I 
am adapting Gerhard Wickler's Capability Description Language to match 
capabilities (e.g. program composition capabilities) with tasks (e.g. clear a 
StringBuilder object).  CDL conveniently uses a crisp FOL knowledge 
representation.   Here is a Texai behavior language file that contains 
capability descriptions for primitive Java compositions.  Each of these 
primitive capabilities is implemented by a Java object that can be persisted in 
the Texai KB as RDF statements.
 
 
Maybe you mean spreading activation is used to locate candidate facts / rules, 
over which actual deductions are attempted?  That sounds very promising.  One 
question is how to learn the association between nodes.
YKY


 




Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread Ben Goertzel
 I think it's fine that you use the term atom in your own way.  The
 important thing is, whatever the objects that you attach probabilities
 to, that class of objects should correspond to *propositions* in FOL.
 From there it would be easier for me to understand your ideas.

Well, no, we attach probabilities to terms as well as to relationships
... and to expressions with free as well as bound variables...

You can map terms and free-variable expressions into propositions if
you want to, though...

for instance the term

cat

has probability

P(cat)

which you could interpret as

P(x is a cat | x is in my experience base)

and the free-variable expression

eats(x, mouse)

has probability

P( eats(x,mouse) )

which can be interpreted as

P( eats(x,mouse) is true | x is in my experience base)

However these propositional representations are a bit awkward and are
not the way to represent things for the PLN rules to be simply
applied... it is nicer by far to leave the experiential semantics
implicit...
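
A tiny sketch of that experiential reading of term probabilities (the
experience base below is a made-up toy, and real PLN truth values carry
more than a single frequency):

from collections import Counter

experience_base = [
    {"cat", "animal", "small"},
    {"dog", "animal"},
    {"cat", "animal"},
    {"rock", "small"},
]

counts = Counter(label for entity in experience_base for label in entity)
total = len(experience_base)

p_cat = counts["cat"] / total        # P(x is a cat | x is in my experience base)
p_animal = counts["animal"] / total
print(p_cat, p_animal)               # 0.5 0.75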

-- Ben G




Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread YKY (Yan King Yin)
On 6/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 eats(x, mouse)

That's a perfectly legitimate proposition.  So it is perfectly OK to write:

 P( eats(x,mouse) )

Note here that I assume your "mouse" refers to a particular instance
of a mouse, as in:

eats(X, mouse_1234)

What's confusing is:

 for instance the term

 cat

 has probability

 P(cat)

 P(x is a cat | x is in my experience base)

In FOL, the term "term" means either a constant, a variable, or a
function applied to a tuple of other terms.  In other words, terms in
FOL are objects, not propositions.

Examples of terms in FOL:

stray_cat_1234
mary_queen_of_scots
X
mother(X)
mother(mary_queen_of_scots)
etc...

If you want to say

P ( X is a cat | X is in my experience base )

the corresponding FOL proposition should be:

cat(X)

instead of

cat.

I think your notation of "cat" translates to cat(X) in FOL.

Your experience base may contain instances such as:

cat( stray_cat_1234 )
female( mary_queen_of_scots )
eats( cat_4567, mouse_890 )
etc...

 You can map terms and free-variable expressions into propositions if
 you want to, though...

It's a bit confusing to map OpenCog terms to FOL propositions.  IMO
terms should not have probabilities attached to them.  Anyway let me
just leave that decision to you.  No more comments.

 However these propositional representations are a bit awkward and are
 not the way to represent things for the PLN rules to be simply
 applied... it is nicer by far to leave the experiential semantics
 implicit...

I'm interested to see how this is done.

1.  The contents of your experience base can be translated to FOL.
2.  Reasoning algorithms in FOL such as resolution are known to be
quite complex and slow.
3.  You claim that your reasoning algorithm is faster.
4.  That means, you've found a heuristic to reason quickly in FOL
(assuming your results can be translated back to FOL in polynomial
time).

More likely, though, is that your algorithm is incomplete wrt FOL, i.e.,
there may be some things that FOL can infer but PLN can't.  Either
that, or your algorithm may actually be slower than FOL.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread YKY (Yan King Yin)
Well, it's still difficult for me to get a handle on how your logic
works.  I hope you will provide some info in your docs re the
correspondence between FOL and PLN.

I think it's fine that you use the term atom in your own way.  The
important thing is, whatever the objects that you attach probabilities
to, that class of objects should correspond to *propositions* in FOL.
From there it would be easier for me to understand your ideas.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread YKY (Yan King Yin)
On 6/2/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 YKY, how are you going to solve the natural language interface problem?  You 
 seem to be going down the same path as CYC.  What is different about your 
 system?

One more point:

Yes, my system is similar to Cyc in that it's logic-based.  But of
course, it will be augmented with probabilities and fuzziness, in some
ways yet to be figured out.

I guess your idea is that the language model should be the basis of
the AGI, whereas my idea is that AGI should be based on logical
representation.  The difference may not be as great as you think.

You may think that natural language is fluid and therefore more
suitable for AGI as compared to logic.  Let me point out that logic,
equipped with learning, can be equally fluid.

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread YKY (Yan King Yin)
Ben,

I should not say that FOL is the standard of KR, but that it's
merely more popular.  I think researchers ought to be free to explore
whatever they want.

Can we simply treat PLN as a black box, so you don't have to explain
its internals, and just tell us what the input and output formats are?

The ideal is to have everyone work on the same KR, but if that's
unattainable, the next best thing is to enable different modules to
interoperate as easily as possible...

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread Ben Goertzel
 More likely though, is that your algorithm is incomplete wrt FOL, ie,
 there may be some things that FOL can infer but PLN can't.  Either
 that, or your algorithm may be actually slower than FOL.

FOL is not an algorithm, it's a representational formalism...

As compared to standard logical theorem-proving algorithms, the design
intention is that Novamente/OpenCog's inference algorithms will be
vastly more efficient on the average case for those inference problems
typically confronting an embodied social organism.

Ben




Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread Matt Mahoney
--- On Mon, 6/2/08, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

  YKY, how are you going to solve the natural language
 interface problem?  You seem to be going down the same path
 as CYC.  What is different about your system?
 
 One more point:
 
 Yes, my system is similar to Cyc in that it's logic-based.  But of
 course, it will be augmented with probabilities and
 fuzziness, in some ways yet to be figured out.

I believe NARS models probabilities and uses induction to adjust them.  
However, NARS is years from design completion, and then there is the small 
matter of building the knowledge base (à la Cyc).

 I guess your idea is that the language model should be the basis of
 the AGI, whereas my idea is that AGI should be based on logical
 representation.  The difference may not be as great as you think.
 
 You may think that natural language is fluid and therefore more
 suitable for AGI as compared to logic.  Let me point out that logic,
 equipped with learning, can be equally fluid.

Do you have any insights on how this learning will be done?  Cyc put a lot of 
effort into a natural language interface and failed.  What approach will you 
use that they have not tried?  FOL requires a set of transforms, e.g.

All men are mortal -> forall X, man(X) -> mortal(X) (hard)
Socrates is a man -> man(Socrates) (hard)
-> mortal(Socrates) (easy)
-> Socrates is mortal (hard).

We have known for a long time how to solve the easy parts.  The hard parts are 
AI-complete.  You have to solve AI before you can learn the knowledge base.  
Then after you build it, you won't need it.  What is the point?

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread Jiri Jelinek
YKY,

Can you give an example of something expressed in PLN that is very
hard or impossible to express in FOL?

FYI, I recently ran into some issues with my [under-development]
formal language (which is being designed for my AGI-user
communication) when trying to express statements like:

John said that if he knew yesterday what he knows today, he wouldn't
do what he did back then.

The difficulty might have been specific to my design (because of a
certain way of handling semantic meta-data), but I thought I would
share it just in case.

Best,
Jiri




Re: [agi] OpenCog's logic compared to FOL?

2008-06-01 Thread Ben Goertzel
 Here are some examples in FOL:

 Mary is female
female(mary)

Could be

Inheritance Mary female

or

Evaluation female mary

(the latter being equivalent to female(mary) )

but none of these has an uncertain truth value attached...


 This is a [production] rule:  (not to be confused with an inference rule)
 A female child is called a daughter
 daughter(X) <- child(X) & female(X)
 where universal quantification is assumed.

You could say

ForAll $X
    ExtensionalImplication
        And
            Evaluation child ($X)
            Evaluation female ($X)
        Evaluation daughter ($X)

which is equivalent to the pred logic formulation
you've given.

But it will often be more useful to say

Implication
    And
        Evaluation child ($X)
        Evaluation female ($X)
    Evaluation daughter ($X)

which leaves the variable unbound, and which replaces the purely
extensional implication with an Implication that is mixed extensional
and intensional.

And one will normally want to attach an uncertain TV like an
indefinite probability to an expression like this, rather than leaving
it with a crisp TV.

The definition of

IntensionalImplication A B

is

ExtensionalImplication Prop(A) Prop(B)

where Prop(X) is the fuzzy set of properties of X

The definition of Implication is a weighted average of extensional and
intensional implication

I guess that gives a flavor of the difference
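
Tying this back to the earlier request for examples with truth values
attached, here is a hedged sketch that renders the daughter Implication
above as a nested Python structure with an indefinite-style truth value
on the whole link.  The representation and the numbers are illustrative
only; PLN's real Atom representation differs.

daughter_rule = {
    "type": "Implication",
    "antecedent": ("And",
                   ("Evaluation", "child", "$X"),
                   ("Evaluation", "female", "$X")),
    "consequent": ("Evaluation", "daughter", "$X"),
    "tv": {"L": 0.93, "U": 0.99, "b": 0.9},   # indefinite-style interval plus credibility
}
print(daughter_rule["type"], daughter_rule["tv"])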

 *** bonus question ***
 Can you give an example of something expressed in PLN that is very
 hard or impossible to express in FOL?

FOL can express anything, as can combinatory logic and a load of other
Turing-complete formalisms.

However, expressing uncertainty is awkward and inefficient in FOL, as
opposed to using a specific mechanism like indefinite truth
values.

Similarly, expressing intensional relationships is awkward and
inefficient in FOL, as there is no built-in notion of fuzzy sets of
properties.

And there is no notion of assigning a truth value to a formula with
unbound variables in FOL, but one can work around this by using
variables that are universally bound to a context that is then itself
variable (again, more complex and awkward)

-- ben




Re: [agi] OpenCog's logic compared to FOL?

2008-06-01 Thread YKY (Yan King Yin)
Ben, Thanks for the answers.

One more question about the term "atom" used in OpenCog.

In logic an atom is a predicate applied to some arguments, for example:
   female(X)
   female(mary)
   female(mother(john))
   etc.

Truth values only apply to propositions, but they may consist of
only single atoms as above.  But still, there is a distinction.
Probabilities should only be attached to propositions, but not to
atoms (in logic).

Do OpenCog atoms roughly correspond to logical atoms?
And what is the counterpart of (logic) propositions in OpenCog?

I suggest you don't use non-standard terminology 'cause it's very confusing...

YKY




Re: [agi] OpenCog's logic compared to FOL?

2008-06-01 Thread Ben Goertzel
 Do OpenCog atoms roughly correspond to logical atoms?

Not really

 And what is the counterpart of (logic) propositions in OpenCog?

ExtensionalImplication relations I guess...

 I suggest you don't use non-standard terminology 'cause it's very confusing...

So long as it's well-defined, I guess it's OK...

The standard terminology leads in wrong conceptual directions alas...

ben

