Excellent.

Here's the next step as I see it:

Abductive logic programming, by reasoning backwards from observed facts
(phenomena) to "explanations", is in effect engaged in what ML engineers
call "training": the "generation" of a world model from the data.  In the
forward reasoning phase, aka deduction, aka "inference", the previously
generated world model (the explanations) is executed to, again, "generate"
expectations according to the world model's probability distribution.  When
I say "executed" I am, of course, speaking of executing the algorithmic
information explanation of the "training data".
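
To make that concrete, here is a toy sketch in Python (my own made-up
rules; not abductive logic programming proper, just the backward/forward
pattern):

    # Toy sketch: abduction as "training", deduction as "inference".
    # The world model is a set of conditionals (if P then Q1, Q2, ...).
    rules = {
        "affair_discovered": ["screaming", "high_pitch"],
        "surprise_party":    ["screaming", "laughter"],
    }

    def abduce(observations):
        """Backward pass: hypotheses whose consequences cover the observations."""
        return [p for p, qs in rules.items() if set(observations) <= set(qs)]

    def deduce(hypothesis):
        """Forward pass: execute the explanation to generate expectations."""
        return rules.get(hypothesis, [])

    # "Training": observed phenomena -> candidate explanations
    explanations = abduce(["screaming", "high_pitch"])   # ['affair_discovered']

    # "Inference": run the chosen explanation forward to generate expectations
    expected = deduce(explanations[0])                   # ['screaming', 'high_pitch']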

Matt once provided me this concise way of describing these kinds of logic:

Deduction: P. If P then Q. Therefore Q.
Abduction: Q. If P then Q. Therefore P.
Induction: P. Q. Therefore if P then Q.

The "If" statements -- the conditional probabilities -- appear in all three
modes.  They represent the world model.  Note that in the case of
abduction, a "prior" is imputed.  So it is actually essential to induction.

On Wed, Aug 21, 2024 at 3:19 AM YKY (Yan King Yin, 甄景贤) <
generic.intellige...@gmail.com> wrote:

> On Tue, Aug 13, 2024 at 10:21 PM James Bowery <jabow...@gmail.com> wrote:
>
>> Not being competent to judge the value of your intriguing categorical
>> approach, I'd like to see how it relates to:
>>
>> * abductive logic programming
>>
>
> Yes, abductive logic is a good point.
> Abduction means "finding explanations for..."
> For example, a woman opens the bedroom door, sees a man in bed with
> another woman,
> and then all parties start screaming at each other at a high pitch.
> Explanation:  "wife discovers husband's affair", "she's jealous and
> furious", etc.
> In classical logic-based AI, these can be learned as logic rules,
> and the explanations found by applying the rules backwards (from
> conclusions to premises).
> In the modern paradigm of LLMs, all these inferences can be achieved in
> one fell swoop:
>
> [image: auto-encoder.png]
>
> In our example, the bedroom scene (raw data) appears at the input.
> Then a high-level explanation emerges at the latent layers (i.e. the yellow
> strip, but also distributed among other layers).
> The auto-encoder architecture (also called predictive coding, and a bunch
> of other names...)
> beautifully captures all the operations of a logic-AI system:  rule
> matching, rule application,
> pruning of conclusions according to interestingness, etc.
> All these are mingled together in the "black box" of a deep neural network.
> My big question is whether we can _decompose_ the above process into
> smaller parts,
> i.e. give it some fine structure, so that the whole process could be
> accelerated.
> But this is hard because the current Transformer already has a fixed
> structure
> which is still somewhat mysterious...
>
> YKY
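
For reference, a bare-bones numpy sketch of the auto-encoder shape
described in the quoted message (random made-up data; no pretense of being
predictive coding or a Transformer).  The encoder plays the abduction role
(raw scene -> latent "explanation"), the decoder the deduction role
(explanation -> expected scene):

    # Bare-bones dense auto-encoder (numpy only; illustrative, untuned).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 16))        # stand-in for the "raw scene" data

    d_in, d_lat = 16, 4                   # 4-dim latent = the "yellow strip"
    W_enc = rng.normal(scale=0.1, size=(d_in, d_lat))
    W_dec = rng.normal(scale=0.1, size=(d_lat, d_in))

    def forward(X):
        Z = np.tanh(X @ W_enc)            # encoder: raw data -> latent explanation
        return Z, Z @ W_dec               # decoder: explanation -> reconstruction

    lr = 0.05
    for _ in range(1000):                 # plain gradient descent on squared error
        Z, X_hat = forward(X)
        err = (X_hat - X) / len(X)        # dL/dX_hat for loss 0.5*sum(err**2)/len(X)
        grad_dec = Z.T @ err
        grad_enc = X.T @ ((err @ W_dec.T) * (1 - Z**2))
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc

    Z, X_hat = forward(X)
    print("reconstruction MSE:", float(np.mean((X_hat - X) ** 2)))

Everything YKY lists (rule matching, rule application, pruning) is smeared
across W_enc and W_dec rather than sitting in separate modules, which is
exactly the decomposition problem he raises.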

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T04cca5b54df55d05-Mb6d81bc790f64f3b48f5f4df
Delivery options: https://agi.topicbox.com/groups/agi/subscription
