Prolog is not fast; it is painfully slow for complex inferences, because
it uses backtracking as its control mechanism

The time-complexity issue that matters for inference engines is
inference-control ... i.e. dampening the combinatorial explosion (which
backtracking does not do)
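
To make that concrete, here is a toy Python sketch (all names made up) of
naive backtracking search: with branching factor b and proof depth d it
does O(b^d) work in the worst case, and nothing in the backtracking
mechanism itself dampens that explosion:

    def prove(goal, rules, facts, depth=0, limit=10):
        """A goal holds if it is a known fact, or if all subgoals in
        some rule body for it hold.  Rules are tried in fixed order;
        on failure we just backtrack to the next rule."""
        if goal in facts:
            return True
        if depth >= limit:
            return False
        for body in rules.get(goal, []):    # blind, fixed-order choice
            if all(prove(g, rules, facts, depth + 1, limit)
                   for g in body):
                return True
        return False                        # backtrack, nothing learned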

Time-complexity issues within a single inference step can always be handled
via mathematical or code optimization, whereas optimizing inference control
is a deep, deep AI problem...

So, actually, the main criterion for the AGI-friendliness of an inference
scheme is whether it lends itself to flexible, adaptive control (a toy
sketch follows this list) via

-- taking long-term, cross-problem inference history into account

-- learning appropriately from noninferential cognitive mechanisms (e.g.
attention allocation...)
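
The toy sketch: rank candidate inference steps by a learned priority
instead of taking them in fixed backtracking order (all names and the
scoring interface below are hypothetical, just to fix ideas):

    import heapq

    def controlled_inference(start, expand, score):
        """Best-first inference control: expand(step) yields successor
        steps; score(step) is a learned priority that can blend
        cross-problem history with attention allocation."""
        seq = 0                                   # heap tie-breaker
        frontier = [(-score(start), seq, start)]
        while frontier:
            _, _, step = heapq.heappop(frontier)
            if step.is_goal():                    # assumed method
                return step
            for nxt in expand(step):
                seq += 1
                heapq.heappush(frontier, (-score(nxt), seq, nxt))
        return None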


-- Ben G

On Wed, Sep 17, 2008 at 3:00 PM, YKY (Yan King Yin) <
[EMAIL PROTECTED]> wrote:

> On Thu, Sep 18, 2008 at 1:46 AM, Abram Demski <[EMAIL PROTECTED]>
> wrote:
>
> Speaking of my BPZ-logic...
>
> > 2. Good at quick-and-dirty reasoning when needed
>
> Right now I'm focusing on quick-and-dirty reasoning *only*.  I want to
> make the logic's speed approach that of Prolog (whose inference
> procedure is fast for binary logic).
>
> > --a. Makes unwarranted independence assumptions
>
> Yes, I think independence should always be assumed "unless otherwise
> stated" -- where "otherwise stated" means there is an explicit Bayesian
> network link between X and Y.
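>
> To illustrate with a minimal sketch (made-up names, not my actual
> implementation): the joint probability factors into a plain product of
> marginals, except where the network records an explicit dependency:
>
>     def joint_prob(vals, marginals, links, cond):
>         """P(X1..Xn) under "independent unless otherwise stated".
>         links[x] lists x's parents in the Bayesian network; cond
>         gives P(x | parents) only where a link actually exists."""
>         p = 1.0
>         for x, v in vals.items():
>             parents = links.get(x, [])
>             if parents:                        # dependence was stated
>                 pv = tuple(vals[y] for y in parents)
>                 p *= cond[x][(v, pv)]
>             else:                              # default: independence
>                 p *= marginals[x][v]
>         return p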
>
> > --b. Collapses probability distributions down to the most probable
> > item when necessary for fast reasoning
>
> Do you mean collapsing to binary values?  Yes, that is done in BPZ-logic.
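>
> Roughly like this (a toy sketch; the 0.5 threshold is hypothetical):
>
>     def collapse(dist, threshold=0.5):
>         """Quick-and-dirty step: keep only the most probable item,
>         and binarize its probability against a threshold."""
>         best = max(dist, key=dist.get)          # most probable item
>         return best, dist[best] >= threshold    # item, binary verdict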
>
> > --c. Uses the maximum entropy distribution when it doesn't have time
> > to calculate the true distribution
>
> Not done yet.  I'm not familiar with max-ent.  Will study that later.
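>
> (If I understand the suggestion: when there is no time to compute the
> true distribution, fall back on the least-committal one consistent
> with what is known -- with no constraints at all, that is the uniform
> distribution.  A toy sketch:)
>
>     def maxent_fallback(values, known=None):
>         """Return the known distribution if we have one; otherwise the
>         uniform distribution, which maximizes entropy when nothing
>         else is constrained."""
>         if known is not None:
>             return known
>         u = 1.0 / len(values)
>         return {v: u for v in values}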
>
> > --d. Learns simple conditional models (like 1st-order markov models)
> > for use later when full models are too complicated to quickly use
>
> I focus on learning 1st-order Bayesian networks.  I think we should
> start with learning 1st-order Bayesian / Markov models; I will explore
> mixing the two when I have time...
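>
> For the Markov side, the kind of simple conditional model I have in
> mind looks like this (a toy sketch):
>
>     from collections import Counter, defaultdict
>
>     def learn_markov1(sequences):
>         """Estimate a 1st-order Markov model P(next | current) by
>         counting transitions and normalizing."""
>         counts = defaultdict(Counter)
>         for seq in sequences:
>             for cur, nxt in zip(seq, seq[1:]):
>                 counts[cur][nxt] += 1
>         return {cur: {nxt: c / sum(cs.values())
>                       for nxt, c in cs.items()}
>                 for cur, cs in counts.items()}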
>
> > 3. Capable of "repairing" initial conclusions based on the bad models
> > through further reasoning
>
> > --a. Should have a good way of representing the special sort of
> > uncertainty that results from the methods above
>
> Yes, this can be done via meta-reasoning, which I'm currently working on.
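>
> Crudely sketched (the structure here is hypothetical): each conclusion
> carries the shortcut assumptions used to derive it, so the meta-level
> can find and redo whatever relied on an assumption that later fails:
>
>     def repair(conclusions, failed_assumption, rederive):
>         """conclusions maps each conclusion to the set of shortcut
>         assumptions (e.g. independence of X and Y) used to derive it;
>         re-derive, more carefully, whatever depended on a bad one."""
>         return {c: (rederive(c) if failed_assumption in asms else c)
>                 for c, asms in conclusions.items()}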
>
> > --b. Should have a "repair" algorithm based on that higher-order
> > uncertainty
>
> Once it is represented at the meta-level, you may do that.  But
> higher-order uncertain reasoning is not high on my priority list...
>
> YKY
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson


