>
> Hmm, well when I think about the algorithms involved, I do not see why
> the Pattern Miner and Pattern Matcher would be unable to search for
> patterns involving Values... I think they could....  It's true the
> code doesn't do this now though...
>

Yes, it should be quite possible algorithmically. And that's exactly why we
discuss this - because we want to use PM algorithms on Values. However, to
implement this, some architectural and organizational decisions need to be
made: should we generalize existing Values to tensors or introduce a
separate type of Values? Should we overload TimesLink, etc. to work with
both NumberNodes and Values, introduce new types of Links, or introduce
special Links that "atomize" Values? Should this be done in a separate repo,
keeping the core PM algorithms unchanged, or should the core PM be
modified, and by whom? We have a few people who can work on this, but we
need to know the preferable way.
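To make the Atoms-vs-Values distinction and the "atomize" option concrete, here is a toy plain-Python sketch. This is not the real AtomSpace API; all class and function names (including `atomize`) are illustrative assumptions, meant only to show why an unindexed Value must be lifted into an indexed Atom before pattern search can see it:

```python
# Toy sketch (NOT the real AtomSpace API) contrasting globally indexed
# Atoms with per-atom, unindexed Values, plus a hypothetical "atomize" step.

class AtomSpace:
    def __init__(self):
        self.index = {}                    # global index: name -> Atom

    def add(self, name):
        return self.index.setdefault(name, Atom(name))

class Atom:
    def __init__(self, name):
        self.name = name
        self.values = {}                   # key -> Value, NOT globally indexed

    def set_value(self, key, value):
        self.values[key] = value           # fast, index-free update

    def get_value(self, key):
        return self.values.get(key)

def atomize(atomspace, atom, key):
    """Hypothetical: lift a Value into an indexed Atom so search can see it."""
    value = atom.get_value(key)
    return atomspace.add(f"{atom.name}:{key}={value}")

space = AtomSpace()
face = space.add("BensFace")
face.set_value("bbox", (10, 20, 64, 64))   # cheap per-frame update

lifted = atomize(space, face, "bbox")
print(lifted.name in space.index)          # the Value is now searchable -> True
```

The trade-off the sketch tries to surface: Values stay cheap to mutate precisely because they bypass the global index, so any pattern mining over them must either add such a lifting step or teach the PM to traverse Values directly.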


>
> It is true that Values are not indexed globally.   But it seems to me
> that the search algorithms inside the PMs do not need such indexes...
>

Seems so.


>
> Now coordinate values of bounding boxes ... If we are talking about
> something like the bounding box of Ben's face during a conversation,
> which changes frequently, this would be appropriately stored in the
> Atomspace using a StateLink,
>
> https://wiki.opencog.org/w/StateLink
>
>
We considered StateLink as a way to feed OpenCog with observations within
the reinforcement learning direction. But the question remains the same:
should we use NumberNodes or Values?..
Also, DNNs are trained on (mini-)batches. This is not too natural from an
autonomous-agent perspective, but it is efficient.
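For readers unfamiliar with StateLink's semantics, here is a toy plain-Python sketch (not Atomese) of its key property for frequently changing observations like bounding boxes: a state atom holds exactly one current value, and each update displaces the previous one rather than accumulating history:

```python
# Toy illustration (plain Python, not real Atomese) of StateLink-style
# semantics: one current value per state atom, replaced on every update.

class StateSpace:
    def __init__(self):
        self.state = {}

    def set_state(self, atom, value):      # like (StateLink atom value)
        self.state[atom] = value           # old value is displaced, not kept

    def get_state(self, atom):
        return self.state[atom]

s = StateSpace()
s.set_state("bbox-of-BensFace", (10, 20, 64, 64))
s.set_state("bbox-of-BensFace", (12, 21, 64, 64))   # next video frame
print(s.get_state("bbox-of-BensFace"))              # only the latest survives
```

This "latest value only" behavior is exactly what makes StateLink awkward for mini-batch training, which wants a retained window of past observations rather than a single current state.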



>
> In any case I am confused about how these technical OpenCog plumbing
> issues related to the general issues you raise...
>

The difference between Atoms and Values is relevant, but this relevance will
be seen much better when we go from just Atoms vs. Values to the inference
processes over them. (Declarative logic represents computations inversely,
and the inference engine performs the back-inversion to the direct
computations executed by processors; that's why logic deals poorly with
number crunching, i.e. Value manipulation, while it is good for reasoning
over Atoms.) I have not yet discussed this at a technical level, but I
mentioned the problem in my long message, using the example of applying the
PM to VQA. Maybe we should not discuss all these questions simultaneously,
but I can try to elaborate on this if you wish.
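The "logic represents computations inversely" point can be shown with a minimal hedged sketch: a direct function runs in one fixed direction, while a declarative relation can be queried in any direction, but only by search over bindings, which is exactly why logic is poor at number crunching yet good at reasoning:

```python
# Toy contrast between procedural computation and a declarative relation.
# The relation plus(X, Y, Z) is answered by brute-force search over a domain,
# standing in for the search an inference engine would perform.

def add(x, y):
    """Procedural: fixed input/output direction, O(1)."""
    return x + y

def plus(x=None, y=None, z=None, domain=range(100)):
    """Declarative relation: enumerate all bindings satisfying x + y == z."""
    xs = domain if x is None else [x]
    ys = domain if y is None else [y]
    zs = domain if z is None else [z]
    return [(a, b, c) for a in xs for b in ys for c in zs if a + b == c]

print(add(2, 3))           # direct computation: 5
print(plus(x=2, y=3))      # forward query: [(2, 3, 5)]
print(plus(x=2, z=5))      # inverse query, answered by search: [(2, 3, 5)]
```

The same relation answers forward and inverse queries uniformly, but every answer costs a search over the domain; the direct function is cheap but runs only one way.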



>
> One question is: Is probabilistic logic an appropriate method for the
> core of an AGI system, given that this AGI system must proceed largely
> on observation-based semantics ...
>
> I think the answer is YES
>

I think it is necessary but not sufficient.


>
> Another question is: Is the current OpenCog infrastructure fully ready
> to support scalable probabilistic logic on real-time observation
> data...
>
> I think the answer is NOT QUITE
>

True.


> Similarly, we could ask
>
> One question is: Is probabilistic programming an appropriate method for the
> core of an AGI system, given that this AGI system must proceed largely
> on observation-based semantics ...
>
> I think the answer is YES
>

Well, as I have already said once, I don't think that (existing)
probabilistic programming really solves anything. It is a good way to pose
problems uniformly. So, I wouldn't say it's an appropriate *method*, but it
is an appropriate (though again not sufficient) way of framing the AGI
problem.
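The "framing, not method" point can be illustrated with a minimal sketch: the generative program below states a model uniformly, but the inference attached to it is just brute-force rejection sampling, which by itself solves nothing (the model and probabilities are invented for illustration):

```python
# Minimal probabilistic program: the model is stated uniformly as ordinary
# sampling code, but inference is naive rejection sampling over its runs.

import random
random.seed(0)

def model():
    rain = random.random() < 0.2
    sprinkler = random.random() < 0.1
    wet = rain or sprinkler       # deterministic consequence
    return rain, wet

# Query P(rain | wet): keep only runs consistent with the observation.
accepted = [rain for rain, wet in (model() for _ in range(100_000)) if wet]
print(sum(accepted) / len(accepted))   # estimate, true value 0.2/0.28 = 0.714...
```

Writing the model was trivial; all the difficulty moved into inference, where rejection sampling wastes every run inconsistent with the data, which is the gap the thread's later points about probabilistic logic address.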


>
> Another question is: Is any currently available probabilistic
> programming infrastructure fully ready
> to support scalable probabilistic programming on real-time observation
> data...
>
> I think the answer is NO... or maybe (??) NOT QUITE
>

Definitely.


> Regarding the comparison btw probabilistic logic and probabilistic
> programming, I would note that
>
> -- dealing with quantifiers and their binding functions in
> probabilistic logic is a pain in the ass
>
> -- dealing with execution traces in probabilistic programming is a
> pain in the ass
>
> [But ofc, to do probabilistic program learning in any AGI-ish sense,
> you need to be modeling execution traces
> and all the variable state changes and interrelationships in there etc. ]
>
> So there is copious mess about variables, of different sorts, in both
> paradigms..
>

Sure.


> When we extend these methods to 2nd and 3rd order
> probability distros, we run into the
> issue that doing probabilistic program learning via MC sampling or
> anything similar to that becomes
> extremely slow....   One then wants to do inference to bypass the need
> for sampling.   But what kind
> of inference?  Perhaps PLN type abductive and inductive inference?
> In this case one needs the probabilistic
> logic in order to actually do learning over probabilistic programs
> without incurring unrealistic overhead...
>
>
Exactly. Probabilistic logic is a way to make inference over probabilistic
programs much more efficient. I have specific examples for this in mind.
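One such example can at least be gestured at with a toy sketch: for the same discrete model used above, symbolic enumeration of the weighted possible worlds (the kind of shortcut a probabilistic logic can license) gives the exact posterior from four cases, where Monte Carlo needed many thousands of samples:

```python
# Hedged toy sketch: exact inference by enumerating weighted worlds of a
# discrete generative model, instead of MC sampling over its executions.

from itertools import product

P_RAIN, P_SPRINKLER = 0.2, 0.1

def enumerate_posterior():
    """Exact P(rain | wet) from the 4 possible worlds of the model."""
    num = den = 0.0
    for rain, sprinkler in product([True, False], repeat=2):
        w = ((P_RAIN if rain else 1 - P_RAIN)
             * (P_SPRINKLER if sprinkler else 1 - P_SPRINKLER))
        if rain or sprinkler:        # condition on the observation "wet"
            den += w
            if rain:
                num += w
    return num / den

print(enumerate_posterior())         # exactly 0.2 / 0.28 = 0.7142857...
```

This only works because the model is tiny and discrete, but it shows the direction: replacing sampling over execution traces with reasoning over the structure of the program, which becomes essential once higher-order distributions make sampling prohibitively slow.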

> Overall, my feeling is that probabilistic programming will be better
> for procedural knowledge, and probabilistic
> logic will be better for declarative knowledge
>

Hmm... not precisely. In the context of probabilistic inference, purely
procedural knowledge is the result of specializing a general inference
procedure with respect to a specific generative model; that is,
discriminative models are purely procedural. With a generative model, you
can infer (and should infer, using search as in probabilistic logic) truth
values for any conditional expression, but the model doesn't say how
exactly to calculate these values, so it doesn't represent procedural
knowledge in this sense and has some features of declarative knowledge. Yet
I wouldn't call generative models declarative knowledge either. So I'm
slightly confused about how to classify them...
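The specialization claim can be made concrete with a toy sketch (the model and its probabilities are invented for illustration): a generative model P(class)·P(feature | class) answers queries by Bayes-rule inversion at query time, and precomputing those posteriors into a table specializes it into a purely procedural discriminative classifier with no search left:

```python
# Toy sketch: a discriminative classifier as the specialization of general
# inference over a generative model P(class) * P(feature | class).

P_CLASS = {"cat": 0.5, "dog": 0.5}
P_FEATURE = {("whiskers", "cat"): 0.9, ("whiskers", "dog"): 0.2,
             ("plain", "cat"): 0.1, ("plain", "dog"): 0.8}

def infer(feature):
    """Generative route: invert the model by Bayes rule at query time."""
    joint = {c: P_CLASS[c] * P_FEATURE[(feature, c)] for c in P_CLASS}
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

# Specialization: bake every posterior into a table once, ahead of time.
DISCRIMINATIVE = {f: infer(f) for f in ("whiskers", "plain")}

def classify(feature):
    """Discriminative route: purely procedural, just a lookup."""
    return max(DISCRIMINATIVE[feature], key=DISCRIMINATIVE[feature].get)

print(classify("whiskers"))   # cat
```

The `classify` function is pure procedure and answers only the one query it was specialized for, while `infer`, like the generative model itself, supports arbitrary conditional queries but says nothing about how to compute them efficiently, which is exactly the classification difficulty described above.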
