It would be interesting to compare PLN with NARS on these simple
credit-assignment-ish examples....
---------- Forwarded message ----------
From: "Pei Wang" <mail.peiw...@gmail.com>
Date: Sep 5, 2016 2:59 AM
Subject: [open-nars] credit assignment
To: "open-nars" <open-n...@googlegroups.com>
Cc:

In AI, "credit assignment" is the problem of distributing the overall
credit (or blame) to the steps involved. Back-propagation in ANNs addresses a
similar problem: adjusting the weights along a path to get a desired overall result.
I'm trying to use a simple example to show how it is handled in NARS.

Here is the situation: from <a --> b>, <b --> c>, and <c --> d>, the system
derives <a --> d> (as well as some other conclusions). If now the system is
informed that <a --> d> is false, it will surely change its belief on this
statement. Now the problem is: how much should it change its beliefs on <a
--> b>, <b --> c>, and <c --> d>, and through what process?
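As a quick check of the forward part, here is a sketch in Python, assuming NAL's
standard deduction truth function (f = f1*f2, c = f1*f2*c1*c2) and the default
input truth-value %1.00;0.90%:

```python
# Sketch of the forward deduction chain, assuming NAL's standard
# deduction truth function: f = f1*f2, c = f1*f2*c1*c2.
def deduction(t1, t2):
    (f1, c1), (f2, c2) = t1, t2
    return (f1 * f2, f1 * f2 * c1 * c2)

default = (1.0, 0.9)               # default truth-value of each input
ac = deduction(default, default)   # <a --> c> from <a --> b> and <b --> c>
ad = deduction(ac, default)        # <a --> d> from <a --> c> and <c --> d>

print(f"%{ac[0]:.2f};{ac[1]:.2f}%")  # %1.00;0.81%
print(f"%{ad[0]:.2f};{ad[1]:.2f}%")  # %1.00;0.73%
```

Note how confidence shrinks with each deduction step, which is why the
three-step conclusion <a --> d> only reaches c = 0.73.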

In the attached text file, I worked out the example step by step, using the
default truth-value for the inputs. In the attached spreadsheet, the whole
process is coded, so you can change the input values (in green) to see how
the other values change accordingly. In particular, you should try (1)
giving different confidence values to <a --> b>, <b --> c>, and <c --> d>,
and (2) giving a confirming observation on <a --> d>.

In the spreadsheet, there are two places where the same conclusion can be
derived along two different paths, with possibly different truth-values. I
have listed both results; in the system, the choice rule will pick the one
with the higher confidence.
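As a sketch, the choice rule between two candidate truth-values for the same
statement could look like this (a simplification, assuming confidence alone
decides, as described above):

```python
def choice(t1, t2):
    # Between two truth-values derived for the same statement along
    # different paths, keep the one with the higher confidence.
    return t1 if t1[1] >= t2[1] else t2

print(choice((1.0, 0.73), (1.0, 0.45)))  # (1.0, 0.73)
```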

This example can be extended to more than three steps. One interesting
result is that the beliefs at the ends (<a --> b> and <c --> d>) are
adjusted more than the ones in the middle (<b --> c>), which I think can be
justified.

This result can be used to compare NARS with other models, such as deep
learning or non-classical logic systems (non-monotonic, paraconsistent,
probabilistic, etc.).

Comments, issues, and additions?

Regards,

Pei

-- 
You received this message because you are subscribed to the Google Groups
"open-nars" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to open-nars+unsubscr...@googlegroups.com.
To post to this group, send email to open-n...@googlegroups.com.
Visit this group at https://groups.google.com/group/open-nars.
For more options, visit https://groups.google.com/d/optout.

// 1: input
<a --> b>. {1}

// 2: input
<b --> c>. {2}

// 3: input
<c --> d>. {3}

// 4: from 1+2
<a --> c>. %1.00;0.81% {1,2}

// 5: from 2+3
<b --> d>. %1.00;0.81% {2,3}

// 6: from 3+4
<a --> d>. %1.00;0.73% {1,2,3}

// 7: from 1+5
<a --> d>. %1.00;0.73% {1,2,3}

// 8: input 
<a --> d>. %0% {4}

// 9: from 6+8 or 7+8
<a --> d>. %0.23;0.92% {1,2,3,4}

// 10: from 1+8
<b --> d>. %0.00;0.45% {1,4}

// 11: from 5+10
<b --> d>. %0.84;0.84% {1,2,3,4}

// 12: from 3+8
<a --> c>. %0.00;0.45% {3,4}

// 13: from 4+12
<a --> c>. %0.84;0.84% {1,2,3,4}

// 14: from 4+8
<c --> d>. %0.00;0.42% {1,2,4}

// 15: from 3+14
<c --> d>. %0.92;0.91% {1,2,3,4}

// 16: from 5+8
<a --> b>. %0.00;0.42% {2,3,4}

// 17: from 1+16
<a --> b>. %0.92;0.91% {1,2,3,4}

// 18: from 3+10
<b --> c>. %0.00;0.29% {1,3,4}

// 19: from 2+18
<b --> c>. %0.96;0.904% {1,2,3,4}

// 20: from 1+12
<b --> c>. %0.00;0.29% {1,3,4}

// 21: from 2+20
<b --> c>. %0.96;0.904% {1,2,3,4}
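For readers without the spreadsheet, a few of the backward steps above can be
reproduced in Python. This is a sketch, assuming the standard NAL truth
functions with evidential horizon k = 1: revision pools the evidence weights
w = c/(1-c) behind two beliefs on the same statement, while induction and
abduction produce weak, evidence-counting conclusions.

```python
# Sketch of some revision steps above, assuming the standard NAL
# truth functions with evidential horizon k = 1.

def revision(t1, t2):
    # Pool the evidence behind two truth-values of the same statement.
    (f1, c1), (f2, c2) = t1, t2
    w1, w2 = c1 / (1 - c1), c2 / (1 - c2)    # evidence weights
    w = w1 + w2
    return ((f1 * w1 + f2 * w2) / w, w / (w + 1))

def induction(t1, t2):
    # <M --> P> (t1), <M --> S> (t2) |- <S --> P>
    (f1, c1), (f2, c2) = t1, t2
    w = f2 * c1 * c2                          # total evidence
    wp = f1 * f2 * c1 * c2                    # positive evidence
    return (wp / w if w > 0 else 0.5, w / (w + 1))

def abduction(t1, t2):
    # <P --> M> (t1), <S --> M> (t2) |- <S --> P>
    (f1, c1), (f2, c2) = t1, t2
    w = f1 * c1 * c2
    wp = f1 * f2 * c1 * c2
    return (wp / w if w > 0 else 0.5, w / (w + 1))

def show(label, t):
    print(f"{label} %{t[0]:.3f};{t[1]:.3f}%")

ad_old = (1.0, 0.729)    # steps 6/7: derived <a --> d>
ad_new = (0.0, 0.9)      # step 8: input <a --> d>. %0%

show("9:  <a --> d>.", revision(ad_old, ad_new))    # example: %0.23;0.92%

# Step 10: induction from <a --> d> (step 8) and <a --> b> (step 1)
bd = induction(ad_new, (1.0, 0.9))
show("10: <b --> d>.", bd)                          # example: %0.00;0.45%

# Step 11: revise with the forward result <b --> d>. %1.00;0.81%
show("11: <b --> d>.", revision((1.0, 0.81), bd))   # example: %0.84;0.84%

# Step 16: abduction from <b --> d> (step 5) and <a --> d> (step 8)
ab = abduction((1.0, 0.81), ad_new)
show("16: <a --> b>.", ab)                          # example: %0.00;0.42%

# Step 17: revise with the original input <a --> b>. %1.00;0.90%
show("17: <a --> b>.", revision((1.0, 0.9), ab))    # example: %0.92;0.91%
```

The printed values agree with the example's to two decimals (the example
rounds them for display), including the point that the end beliefs are
revised further from their original %1.00% than the middle one.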

Attachment: revision-example.xlsx
Description: MS-Excel 2007 spreadsheet
