> One question is: Is probabilistic logic an appropriate method for the
> core of an AGI system, given that this AGI system must proceed largely
> on observation-based semantics ...
>
> I think the answer is YES
>
> Another question is: Is the current OpenCog infrastructure fully ready
> to support scalable probabilistic logic on real-time observation
> data...
>
> I think the answer is NOT QUITE

Similarly, we could ask

One question is: Is probabilistic programming an appropriate method for the
core of an AGI system, given that this AGI system must proceed largely
on observation-based semantics ...

I think the answer is YES

Another question is: Is any currently available probabilistic
programming infrastructure fully ready
to support scalable probabilistic programming on real-time observation
data...

I think the answer is NO... or maybe (??) NOT QUITE

...

Regarding the comparison between probabilistic logic and probabilistic
programming, I would note that:

-- dealing with quantifiers and their binding functions in
probabilistic logic is a pain in the ass

-- dealing with execution traces in probabilistic programming is a
pain in the ass

[But of course, to do probabilistic program learning in any AGI-ish sense,
you need to be modeling execution traces
and all the variable state changes and interrelationships in there
(see the sketch below). ]

So there is a copious mess around variables, of different sorts, in both
paradigms...
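For concreteness, here is a minimal Python sketch of what "modeling
execution traces" means in practice (the names and structure are made up
for illustration, not taken from any particular probabilistic programming
framework): each run records its random choices in a trace, and which
variables even appear in a trace depends on the control flow.

```python
import random

def flip(trace, name, p):
    """Sample a Bernoulli choice with probability p, recording it in the trace."""
    value = random.random() < p
    trace[name] = value
    return value

def toy_program(trace):
    """A toy probabilistic program whose control flow depends on earlier choices."""
    raining = flip(trace, "raining", 0.3)
    if raining:
        # This branch introduces a choice ("umbrella") that the other branch
        # lacks, so different traces can contain different sets of variables.
        umbrella = flip(trace, "umbrella", 0.9)
        flip(trace, "wet", 0.1 if umbrella else 0.8)
    else:
        flip(trace, "wet", 0.05)

# Collect a few execution traces; a learner over probabilistic programs
# would have to model the joint structure of records like these.
traces = []
for _ in range(5):
    t = {}
    toy_program(t)
    traces.append(t)
print(traces)
```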

...

Semi-relatedly, it seems to me that if one takes the connector
approach to proofs, then the set of connectors
comprising a proof can be viewed as a set of dependent types -- and
such a proof can then be translated
into a program by following the prescription embodied in the Agda
language, assuming Agda has
at its disposal a library function that carries out unification ...

First-order unification in Agda seems OK:

https://github.com/wenkokke/FirstOrderUnificationInAgda

Higher-order unification also seems to work:

https://github.com/Saizan/miller

but may have bigger scalability issues...

So the mapping from connector proofs to procedures becomes pretty
concrete in this sense
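As a toy illustration of the kind of unification routine such a
translation would rely on, here is a plain first-order unifier in Python
(a Robinson-style sketch with a made-up term representation, nothing
Agda-specific):

```python
def is_var(t):
    """Variables are represented as strings starting with '?' (illustrative convention)."""
    return isinstance(t, str) and t.startswith("?")

def walk(t, subst):
    """Follow substitution bindings until reaching a non-variable or unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t under subst?"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t)
    return False

def unify(a, b, subst=None):
    """Return a most general unifier of a and b extending subst, or None on failure."""
    subst = {} if subst is None else dict(subst)
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        if occurs(a, b, subst):
            return None
        subst[a] = b
        return subst
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Example: unify f(?x, g(?y)) with f(a, g(b))  ->  {'?x': 'a', '?y': 'b'}
print(unify(("f", "?x", ("g", "?y")), ("f", "a", ("g", "b"))))
```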

...

The paper I linked in my previous email shows how to map probabilistic
logic into simple probabilistic programs (for discrete
pdfs over finite domains)....   However, it only deals with first-order
probability distributions....  When we extend these methods to second-
and third-order probability distributions, we run into the
issue that doing probabilistic program learning via MC sampling, or
anything similar, becomes
extremely slow....   One then wants to do inference to bypass the need
for sampling.   But what kind
of inference?  Perhaps PLN-style abductive and inductive inference?
In that case one needs the probabilistic
logic in order to actually do learning over probabilistic programs
without incurring unrealistic overhead...
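To make the first-order case concrete, here is a toy Python sketch in the
spirit of that mapping (the numbers and names are invented for
illustration, not taken from the paper): a couple of first-order
statements, say P(A) = 0.3, P(B|A) = 0.8, P(B|not A) = 0.1, become a
little sampler, and a query like P(A|B) is estimated by Monte Carlo
rejection sampling. It is exactly this sampling step that becomes
hopeless once the distributions themselves are second or third order.

```python
import random

def program():
    """Probabilistic program encoding P(A)=0.3, P(B|A)=0.8, P(B|not A)=0.1."""
    a = random.random() < 0.3
    b = random.random() < (0.8 if a else 0.1)
    return a, b

def estimate_p_a_given_b(n=100_000):
    """Estimate P(A|B) by rejection sampling: keep only runs where B holds."""
    hits = total = 0
    for _ in range(n):
        a, b = program()
        if b:
            total += 1
            hits += a
    return hits / total if total else float("nan")

print(estimate_p_a_given_b())  # exact value: 0.24 / 0.31, roughly 0.774
```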

...

Overall, my feeling is that probabilistic programming will be better
for procedural knowledge, and probabilistic
logic will be better for declarative knowledge.   Converting between
the two will also be valuable.   Exactly
where each formulation will be most useful, we will need to determine
via experiment...

-- Ben
