Re: [agi] uncertain logic criteria

2008-09-23 Thread YKY (Yan King Yin)
On Thu, Sep 18, 2008 at 3:06 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Prolog is not fast, it is painfully slow for complex inferences due to using
 backtracking as a control mechanism

 The time-complexity issue that matters for inference engines is
 inference-control ... i.e. dampening the combinatorial explosion (which
 backtracking does not do)

 Time-complexity issues within a single inference step can always be handled
 via mathematical or code optimization, whereas optimizing inference control
 is a deep, deep AI problem...

 So, actually, the main criterion for the AGI-friendliness of an inference
 scheme is whether it lends itself to flexible, adaptive control via

 -- taking long-term, cross-problem inference history into account

 -- learning appropriately from noninferential cognitive mechanisms (e.g.
 attention allocation...)

(I've been busy implementing my AGI in Lisp recently...)

I think optimizing single inference steps and using global
heuristics are both important.

Prolog uses backtracking, but in my system I use all sorts of search
strategies, not to mention abduction and induction.  Also, I'm currently
using general resolution instead of SLD resolution, which works only for
Horn clauses.  But one problem I face is that to deal with equalities I
have to use paramodulation (or some similar trick).  This makes things
more complex, and as you know, I don't like that!
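
To make the terminology concrete, here is a minimal sketch (Python,
propositional clauses only) of what a single binary-resolution step is;
first-order resolution needs unification on top of this, and handling
equality on top of *that* is what drags in paramodulation:

from itertools import product

Clause = frozenset   # a clause is a set of (atom, polarity) literals

def resolvents(c1, c2):
    """Yield every clause obtainable by resolving c1 against c2."""
    for (a1, p1), (a2, p2) in product(c1, c2):
        if a1 == a2 and p1 != p2:                  # complementary literals
            yield (c1 - {(a1, p1)}) | (c2 - {(a2, p2)})

# Resolving {P, Q} with {~P, R} yields {Q, R}:
c1 = Clause({("P", True), ("Q", True)})
c2 = Clause({("P", False), ("R", True)})
print(list(resolvents(c1, c2)))   # [frozenset({('Q', True), ('R', True)})]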

I wonder if PLN has a binary-logic subset, or is every truth value
(TV) probabilistic by default?

If you have a binary logic subset, then how does that subset differ
from classical logic?

People have said many times that resolution is inefficient, but I have
never seen a theorem showing that resolution is slower than other
deduction methods such as natural deduction or tableaux.  All such
talk is based on anecdotal impressions.  I also don't see why other
deduction methods are that much different from resolution, since their
inference steps correspond very closely to resolution steps.  Moreover,
any heuristics you can apply in other deduction methods you can apply
equally well to resolution.  All in all, I see no reason why resolution
is inferior.

So I'm wondering whether there is some novel way of doing binary
inference that is somehow faster than classical logic.  And exactly
what price must be paid?  What aspects of classical logic are
lost?

YKY



Re: [agi] uncertain logic criteria

2008-09-23 Thread Abram Demski
I'm in the process of reading this paper:

http://www.jair.org/papers/paper1410.html

It might answer a couple of your questions. It also looks like it has
an interesting proposal for generating heuristics from the problem
description. The setting is Boolean (propositional) rather than
first-order. It discusses the point about resolution being slow in
practice.

--Abram Demski




Re: [agi] uncertain logic criteria

2008-09-23 Thread YKY (Yan King Yin)
On Tue, Sep 23, 2008 at 6:59 PM, Abram Demski [EMAIL PROTECTED] wrote:
 I'm in the process of reading this paper:

 http://www.jair.org/papers/paper1410.html

 It might answer a couple of your questions. It also looks like it has
 an interesting proposal for generating heuristics from the problem
 description. The setting is Boolean (propositional) rather than
 first-order. It discusses the point about resolution being slow in
 practice.

First-order theorem proving is very different from propositional; those
techniques do not transfer.  I'd be delighted if you could show me a
paper about a superior algorithm for the first-order case =)

YKY



Re: [agi] uncertain logic criteria

2008-09-23 Thread Ben Goertzel
PLN can do inference on crisp-truth-valued statements ... and on this
subset, it's equivalent to ordinary predicate logic ...

About resolution and inference: resolution is a single inference step.  To
make a theorem-prover, you must couple resolution with some search
strategy.  For a search strategy, Prolog uses backtracking, which is
extremely crude.  My beef is not with resolution but with backtracking.
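
A toy sketch (not Prolog's actual machinery, and not PLN) of that
separation: the rule supplies the steps, the search strategy picks
their order, and Prolog hard-wires the strategy to depth-first
backtracking.

def backtracking_prove(goals, rules, depth=0, max_depth=20):
    """Depth-first SLD-style search over propositional Horn rules: try
    each matching clause in order, backtrack on failure. No global view
    of the search, and nothing is learned between attempts."""
    if not goals:
        return True                      # all goals discharged
    if depth > max_depth:
        return False                     # crude loop protection
    head, rest = goals[0], goals[1:]
    for body in rules.get(head, []):     # clauses for `head`, in order
        if backtracking_prove(list(body) + rest, rules, depth + 1, max_depth):
            return True                  # first success wins
    return False                         # exhausted: backtrack

rules = {"mortal": [("man",)], "man": [()]}   # facts have an empty body
print(backtracking_prove(["mortal"], rules))  # True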

Another comment: even if one's premises and conclusion are
crisp-truth-valued, it may still be worthwhile to deal with
uncertain-truth-valued statements in the course of doing inference.
Guesses, systematically managed, may help on the way from definite premises
to definite conclusions...

ben g


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson




Re: [agi] uncertain logic criteria

2008-09-23 Thread Abram Demski
No transfer? This paper suggests otherwise:

http://www.cs.washington.edu/homes/pedrod/papers/aaai06b.pdf

-Abram Demski



Re: [agi] uncertain logic criteria

2008-09-23 Thread YKY (Yan King Yin)
On Tue, Sep 23, 2008 at 9:00 PM, Abram Demski [EMAIL PROTECTED] wrote:
 No transfer? This paper suggests otherwise:

 http://www.cs.washington.edu/homes/pedrod/papers/aaai06b.pdf

Well, people know that propositional SAT is fast, so
propositionalization is a tempting heuristic, but as the paper's
abstract states, it applies only to small domains.  AGI is precisely a
large-domain problem!

YKY



Re: [agi] uncertain logic criteria

2008-09-23 Thread YKY (Yan King Yin)
 On Tue, Sep 23, 2008 at 9:00 PM, Abram Demski [EMAIL PROTECTED] wrote:

 No transfer? This paper suggests otherwise:

 http://www.cs.washington.edu/homes/pedrod/papers/aaai06b.pdf

Sorry, I replied too quickly...

This paper does contribute to solving FOL inference problems, but it
is still inadequate for AGI because the logic is required to be
function-free.  If you remember programming in Prolog, you'll recall
that we often use functors within predicates.  My guess is that
commonsense reasoning would make use of such functors as well.

YKY



Re: [agi] uncertain logic criteria

2008-09-23 Thread YKY (Yan King Yin)
On Tue, Sep 23, 2008 at 9:20 PM, YKY (Yan King Yin) wrote:
 Sorry, I replied too quickly...

 This paper does contribute to solving FOL inference problems, but it
 is still inadequate for AGI because the FOL is required to be
 function-free.  If you remember programming in Prolog, we often use
 functors within predicates.  My guess is that commonsense reasoning
 would make use of such functors as well.

Well, even in the cases where FOL with functions can be converted to
function-free FOL, the blow-up may be too large for a commonsense KB.
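
A back-of-the-envelope sketch of that blow-up (the numbers are made up,
just to show the orders of magnitude):

def ground_instances(num_constants, num_variables):
    """A clause with v distinct variables over n constants has n**v
    ground instances after propositionalization."""
    return num_constants ** num_variables

# A modest commonsense KB: 10,000 constants, clauses with 3 variables.
print(ground_instances(10_000, 3))   # 10**12 ground clauses -- infeasible

# With function symbols the Herbrand universe is infinite (f(a), f(f(a)),
# ...), so no finite grounding exists at all; any truncation at term
# depth d still grows exponentially in d.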

YKY



Re: [agi] uncertain logic criteria

2008-09-23 Thread Abram Demski
I don't know Prolog's functors. But I agree that the approach is
fundamentally limited, because it is restricted to finite domains.

-Abram Demski




Re: [agi] uncertain logic criteria

2008-09-18 Thread Pei Wang
On Wed, Sep 17, 2008 at 10:54 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Pei,

 You are right, that does sound better than quick-and-dirty. And more
 relevant, because my primary interest here is to get a handle on what
 normative epistemology should tell us to conclude if we do not have
 time to calculate the full set of consequences of (uncertain) facts.

Fully understood. As far as uncertain reasoning is concerned, NARS
aims at a normative model that is optimal under certain restrictions,
and in this sense it is not inferior to probability theory, but
designed under different assumptions. In particular, NARS is not an
approximation of, or a second-rate substitute for, probability theory,
just as probability theory is not a second-rate substitute for binary
logic.

 It is unfortunate that I had to use biased language, but probability
 is of course what I am familiar with... I suppose, though, that most
 of the terms could be roughly translated into NARS? Especially
 independence, and I should hope conditional independence as well.
 Collapsing probabilities can be restated, more generally, as
 collapsing uncertainty.

From page 80 of my book: "We call quantities mutually independent of
each other, when given the values of any of them, the remaining ones
cannot be determined, or even bounded approximately."

 Thanks for the links. The reason for singling out these three, of
 course, is that they have already been discussed on this list. If
 anybody wants to point out any others in particular, that would be
 great.

Understood. The UAI community used to be an interesting one, though in
recent years it has been too much dominated by the Bayesians, who
assume they already have the big picture right and that all the
remaining issues are in the details. For discussions of the fundamental
properties of uncertain reasoning, I recommend the works of Henry
Kyburg and Susan Haack.

Pei


Re: [agi] uncertain logic criteria

2008-09-18 Thread YKY (Yan King Yin)
On Thu, Sep 18, 2008 at 4:21 AM, Kingma, D.P. [EMAIL PROTECTED] wrote:

 Small question... aren't Bayesian network nodes just _conditionally_
 independent, so that set A is only independent from set B when
 d-separated by some set Z? So please clarify, if possible, what kind
 of independence you assume in your model.

Sorry, I made a mistake.  You're right that X and Y can be dependent
even if there is no direct link between them in a Bayesian network.

I am currently trying to develop an approximate algorithm for Bayesian
network inference.  Exact BN inference takes care of the dependencies
specified in the BN, but I suspect that an approximate algorithm may
be faster.  I have not worked out the details of this algorithm yet...
and my talk about independence was misleading.
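
For the record, here is a sketch of one standard approximate scheme
(likelihood weighting) -- generic textbook material, not my algorithm --
on a toy two-node network Rain -> WetGrass, to show the
exactness-for-speed trade:

import random

def p_rain_given_wet(n=100_000):
    """Estimate P(rain | wet) by weighting each forward sample by the
    likelihood of the evidence. Here P(rain) = 0.2, P(wet | rain) = 0.9,
    and P(wet | ~rain) = 0.1."""
    num = den = 0.0
    for _ in range(n):
        rain = random.random() < 0.2
        w = 0.9 if rain else 0.1          # weight = P(evidence | sample)
        num += w * rain
        den += w
    return num / den

print(p_rain_given_wet())   # ~0.69; the exact answer is 0.18/0.26 = 0.692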

YKY



Re: [agi] uncertain logic criteria

2008-09-17 Thread YKY (Yan King Yin)
On Thu, Sep 18, 2008 at 1:46 AM, Abram Demski [EMAIL PROTECTED] wrote:

Speaking of my BPZ-logic...

 2. Good at quick-and-dirty reasoning when needed

Right now I'm focusing on quick-and-dirty *only*.  I wish to make the
logic's speed approach that of Prolog (whose inference procedure is
fast for binary logic).

 --a. Makes unwarranted independence assumptions

Yes, I think independence should always be assumed unless otherwise
stated -- which means there exists a Bayesian network link between X
and Y.

 --b. Collapses probability distributions down to the most probable
 item when necessary for fast reasoning

Do you mean collapsing to binary values?  Yes, that is done in BPZ-logic.

 --c. Uses the maximum entropy distribution when it doesn't have time
 to calculate the true distribution

Not done yet.  I'm not familiar with max-ent.  Will study that later.

 --d. Learns simple conditional models (like 1st-order markov models)
 for use later when full models are too complicated to quickly use

I focus on learning 1st-order Bayesian networks.  I think we should
start with learning 1st-order Bayesian / Markov.  I will explore
mixing Markov and Bayesian when I have time...

 3. Capable of repairing initial conclusions based on the bad models
 through further reasoning

 --a. Should have a good way of representing the special sort of
 uncertainty that results from the methods above

Yes, this can be done via meta-reasoning, which I'm currently working on.

 --b. Should have a repair algorithm based on that higher-order uncertainty

Once it is represented at the meta-level, you may do that.  But
higher-order uncertain reasoning is not high on my priority list...

YKY



Re: [agi] uncertain logic criteria

2008-09-17 Thread Ben Goertzel
Prolog is not fast, it is painfully slow for complex inferences due to using
backtracking as a control mechanism

The time-complexity issue that matters for inference engines is
inference-control ... i.e. dampening the combinatorial explosion (which
backtracking does not do)

Time-complexity issues within a single inference step can always be handled
via mathematical or code optimization, whereas optimizing inference control
is a deep, deep AI problem...
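
A toy sketch (nothing like PLN's actual control machinery) of what
"adaptive control" means mechanically: keep candidate inference steps
in a priority queue, and make the scoring function the pluggable part
that history and attention can inform.

import heapq, itertools

def prove(axioms, goal, expand, score):
    """Best-first proof search. `expand` yields successor states
    (single inference steps); `score` is the control heuristic -- the
    hook where cross-problem history or attention allocation plugs in."""
    counter = itertools.count()          # tie-breaker for the heap
    frontier = [(score(a), next(counter), a) for a in axioms]
    heapq.heapify(frontier)
    seen = set()
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if state == goal:
            return True
        if state in seen:
            continue
        seen.add(state)
        for nxt in expand(state):        # apply one inference step
            heapq.heappush(frontier, (score(nxt), next(counter), nxt))
    return False

With score = depth this degenerates to breadth-first search; Prolog's
backtracking corresponds to replacing the scored queue with a LIFO
stack, which is exactly the inflexibility at issue.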

So, actually, the main criterion for the AGI-friendliness of an inference
scheme is whether it lends itself to flexible, adaptive control via

-- taking long-term, cross-problem inference history into account

-- learning appropriately from noninferential cognitive mechanisms (e.g.
attention allocation...)


-- Ben G


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson




Re: [agi] uncertain logic criteria

2008-09-17 Thread Pei Wang
On Wed, Sep 17, 2008 at 1:46 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Hi everyone,

 Most people on this list should know about at least 3 uncertain logics
 claiming to be AGI-grade (or close):

 --Pie Wang's NARS

Yes, I heard of this guy a few times, who happens to use the same name
for his project as mine. ;-)

 Here is my list:

 1. Well-defined uncertainty semantics (either probability theory or a
 well-argued alternative)

Agree, and I'm glad that you mentioned this item first.

 2. Good at quick-and-dirty reasoning when needed
 --a. Makes unwarranted independence assumptions
 --b. Collapses probability distributions down to the most probable
 item when necessary for fast reasoning
 --c. Uses the maximum entropy distribution when it doesn't have time
 to calculate the true distribution
 --d. Learns simple conditional models (like 1st-order markov models)
 for use later when full models are too complicated to quickly use

As you admit later in your message, the language is biased. Using
theory-neutral language, I'd say the requirement is to derive
conclusions with the available knowledge and resources only, which
sounds much better than quick-and-dirty to me.

 3. Capable of repairing initial conclusions based on the bad models
 through further reasoning
 --a. Should have a good way of representing the special sort of
 uncertainty that results from the methods above
 --b. Should have a repair algorithm based on that higher-order uncertainty

As soon as you don't assume there is a model, this item and the
above one become similar, which are what I called revision and
inference, respectively, in
http://www.cogsci.indiana.edu/pub/wang.uncertainties.ps

 The 3 logics mentioned above vary in how well they address these
 issues, of course, but they are all essentially descended from NARS.
 My impression is that as a result they are strong in (2a) and (3b) at
 least, but I am not sure about the rest. (Of course, it is hard to
 evaluate NARS on most of the points in #2 since I stated them in the
 language of probability theory. And, opinions will differ on (1).)

 Anyone else have lists? Or thoughts?

If you consider approaches of varying scope and maturity, there are
many more than these three, and I'm sure most of the people working
on them will claim that theirs are also general-purpose.
Interested people may want to browse http://www.auai.org/ and
http://www.elsevier.com/wps/find/journaldescription.cws_home/505787/description#description

Pei



Re: [agi] uncertain logic criteria

2008-09-17 Thread Matt Mahoney
--- On Wed, 9/17/08, Abram Demski [EMAIL PROTECTED] wrote:

 Most people on this list should know about at least 3 uncertain
 logics claiming to be AGI-grade (or close):

 --Pie Wang's NARS
 --Ben Goertzel's PLN
 --YKY's recent hybrid logic proposal

 It seems worthwhile to stop and take a look at what criteria such
 logics should be judged by. So, I'm wondering: what features would
 people on this list like to see?

How about testing them in the applications where they would actually be
used, perhaps on a small scale? For example, how would these logics be
used in a language-translation program, where the problem is to convert
a natural-language sentence into a structured representation and then
convert it back in another language? How easy is it to populate the
database with the gigabyte or so of common-sense knowledge needed to
provide the context in which natural-language statements are
interpreted? (Cyc proved that this is very hard.)

For a lot of the problems where we actually use structured data, a
relational database works pretty well. However, it is nice to see
proposals that deal with inconsistencies in the database better than
just reporting an error.


-- Matt Mahoney, [EMAIL PROTECTED]




Re: [agi] uncertain logic criteria

2008-09-17 Thread Kingma, D.P.
On Wed, Sep 17, 2008 at 9:00 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 --a. Makes unwarranted independence assumptions

 Yes, I think independence should always be assumed unless otherwise
 stated -- which means there exists a Bayesian network link between X
 and Y.

Small question... aren't Bayesian network nodes just _conditionally_
independent, so that set A is only independent from set B when
d-separated by some set Z? So please clarify, if possible, what kind
of independence you assume in your model.
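
To make the point concrete, here is a toy chain A -> Z -> B (all
numbers made up): A and B are marginally dependent, yet independent
once Z is given, because Z d-separates them.

from itertools import product

pA   = {0: 0.7, 1: 0.3}
pZ_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # P(Z=z | A=a)
pB_Z = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.1, 1: 0.9}}   # P(B=b | Z=z)

joint = {(a, z, b): pA[a] * pZ_A[a][z] * pB_Z[z][b]
         for a, z, b in product((0, 1), repeat=3)}

def p(pred):
    return sum(v for k, v in joint.items() if pred(*k))

# Marginally dependent: P(B=1 | A=1) = 0.80 but P(B=1 | A=0) = 0.45
print(p(lambda a, z, b: a == 1 and b == 1) / p(lambda a, z, b: a == 1))
print(p(lambda a, z, b: a == 0 and b == 1) / p(lambda a, z, b: a == 0))

# Conditionally independent: P(B=1 | A, Z=1) = 0.90 for both values of A
print(p(lambda a, z, b: a == 1 and z == 1 and b == 1) /
      p(lambda a, z, b: a == 1 and z == 1))
print(p(lambda a, z, b: a == 0 and z == 1 and b == 1) /
      p(lambda a, z, b: a == 0 and z == 1))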

Kind regards,
Durk Kingma
The Netherlands



Re: [agi] uncertain logic criteria

2008-09-17 Thread Abram Demski
Good point; this applies to me as well (I'll let YKY answer as it
applies to him). I should have said conditional independence rather
than just independence.

--Abram




Re: [agi] uncertain logic criteria

2008-09-17 Thread Abram Demski
YKY,

Thanks for the reply. It seems important to me to be able to do more
than just the fast reasoning. When given more time, a reasoning method
should reconsider its independence assumptions, employ more
sophisticated models, et cetera.

By the way, when I say markov model I mean a Markov chain as opposed
to a Markov network -- I should have been clearer. In that context,
1st-order means conditioned on one past item. So when I say
1st-order model, I mean something like: a model that records
conditional probabilities conditioned on only one thing. (So I might
know the probability of winning the election given the fact of being
male, and the probability given the fact of being over age 30, but to
calculate the probability given *both*, I'd have to assume that the
effects of each were independent, rather than asking my model what the
combined influence was.) These models allow facts to be combined
fairly quickly, but they are wrong in cases where there are combined
effects (such as: adding sugar makes it nice, adding salt makes it
nice, but adding both makes it awful). 2nd-order means conditioned
on only two items, and so on.
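
One standard way to make "assume the effects were independent" precise
is a naive-Bayes-style odds combination; a sketch with made-up numbers
for the election example:

prior = 0.10                                # P(win)
p_given = {"male": 0.12, "over_30": 0.25}   # P(win | one feature alone)

def combine(features):
    """Combine single-feature conditionals by multiplying likelihood
    ratios -- i.e., assuming the features' effects are independent."""
    prior_odds = prior / (1 - prior)
    odds = prior_odds
    for f in features:
        p = p_given[f]
        odds *= (p / (1 - p)) / prior_odds
    return odds / (1 + odds)

print(combine(["male"]))             # 0.12 -- recovers the stored value
print(combine(["male", "over_30"]))  # ~0.29 -- an assumption, not data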

Anyway, my vision is something like this: we first learn very simple
(perhaps 1st- or 2nd-order) models, and then we learn corrections to
those simple models. Corrections are models that concentrate only on
the things that the simple models get wrong. The system could learn a
series of better and better models, each consisting of corrections to
the previous one. Thus the system reasons progressively: first by the
low-order conditional model, then by invoking progressive corrections
that revise its conclusions.
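
A minimal sketch of that layering (the names are illustrative, not
from any of the systems under discussion): a cheap base model answers
immediately, and corrections learned later override it on the cases it
gets wrong, when the time budget allows consulting them.

class LayeredModel:
    def __init__(self, base):
        self.layers = [base]            # layers[0] = cheap low-order model

    def add_correction(self, correction):
        """correction(query) returns a probability, or None to defer."""
        self.layers.append(correction)

    def predict(self, query, budget):
        """Consult at most `budget` layers, most refined first."""
        for layer in reversed(self.layers[:max(1, budget)]):
            p = layer(query)
            if p is not None:
                return p
        return 0.5                      # maximum-entropy fallback

base = lambda q: {"sugar": 0.9, "salt": 0.9}.get(q)
fix  = lambda q: 0.05 if q == "sugar+salt" else None   # combined effect

m = LayeredModel(base)
m.add_correction(fix)
print(m.predict("sugar+salt", budget=2))  # 0.05: correction kicks in
print(m.predict("sugar+salt", budget=1))  # 0.5: no time for corrections
print(m.predict("sugar", budget=1))       # 0.9: cheap model suffices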

So what I really would like is a formal account of how this should be
done: exactly what kind of uncertainty results from using the simple
models, how is it best represented, and how is it best corrected?
Conditional independence assumptions seem like the most relevant type
of inaccuracy; collapsing probabilities down to Boolean truth values
(or collapsing higher-order probabilities down to lower-order
probabilities) and employing max-entropy assumptions are runners-up.

--Abram




Re: [agi] uncertain logic criteria

2008-09-17 Thread Abram Demski
Pei,

You are right, that does sound better than quick-and-dirty. And more
relevant, because my primary interest here is to get a handle on what
normative epistemology should tell us to conclude if we do not have
time to calculate the full set of consequences of (uncertain) facts.

It is unfortunate that I had to use biased language, but probability
is of course what I am familiar with... I suppose, though, that most
of the terms could be roughly translated into NARS? Especially
independence, and I should hope conditional independence as well.
Collapsing probabilities can be restated, more generally, as
collapsing uncertainty.

Thanks for the links. The reason for singling out these three, of
course, is that they have already been discussed on this list. If
anybody wants to point out any others in particular, that would be
great.

--Abram


