On Sat, May 19, 2018 at 1:00 PM, Alexey Potapov <pota...@aideus.com> wrote:

>
>
> Well... traditional probabilistic programming is a logical probabilistic
> programming. It's definitely not about lambda-calculus.
>

I don't know what to do with this statement. There is a famous result
from the 1930s, due to Church and Turing, that anything Turing-computable
is equivalent to the lambda calculus. There have been many extensions,
refinements, generalizations and clarifications of that result since
then.

If you have a probabilistic programming language running on a modern-day
digital computer, then it's lambda calculus. If you have a theoretical
algebra working on infinite-precision topological spaces, that's
something else. Quantum-computing machines are often understood as
infinite-precision topological vector-space machines (where the space is
complex-projective and the operators are unitary).
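
To make the point concrete, here is a toy sketch in Python (the names and
the geometric-sampler example are my own invention, not taken from any
particular probabilistic-programming system): a probabilistic program is
just ordinary higher-order functions, i.e. lambda calculus, with one
random primitive applied like any other function.

import random

# A coin flip, Church-style: a function returning a boolean thunk.
flip = lambda p: lambda: random.random() < p

# Recursion plus function application yields a geometric-distribution
# sampler; nothing beyond lambda and one random primitive is needed.
geometric = lambda p: 1 if flip(p)() else 1 + geometric(p)

print(geometric(0.5))  # e.g. 3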

Topological computing is ... interesting, but I never got the sense, from
quick skims of the literature, that this is what was being explored.

> I think much of what neural nets and deep learning do also fits into this
> general framework; I want to write a paper on this, but have not had the
> time yet.
>

> You can also map functional programming (with algebraic types, pattern
> matching, etc.) to neural networks. One of my students has written a nice
> diploma thesis on this topic. So, it's cool, but this doesn't give us
> much per se...

Well, one of the problems in the unsupervised natural-language-learning
project is to factor large tensor products into approximately diagonal
components. The factorization can be done slowly, by walking over all
elements, comparing and sorting them. I claim that the factorization can
also be done quickly, using NN algorithms, but discussions about this
have always gotten stuck in various misunderstandings. Thus, having this
explicitly written down is important.
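
To sketch the slow version concretely (the code below is a toy of my own,
not the actual pipeline; the matrix, the threshold, and the helper name
are all invented): flatten the tensor to a row-by-feature matrix, then
group rows whose cosine similarity is high. Each group is one
approximately diagonal block.

import numpy as np

def block_diagonalize(M, threshold=0.5):
    """Greedily group rows whose cosine similarity exceeds the
    threshold; each group is one approximately-diagonal block."""
    unit = M / np.maximum(np.linalg.norm(M, axis=1, keepdims=True), 1e-12)
    sim = unit @ unit.T                  # pairwise cosine similarities
    assigned = np.full(M.shape[0], -1)
    blocks = []
    for i in range(M.shape[0]):
        if assigned[i] >= 0:
            continue
        members = np.where((sim[i] > threshold) & (assigned < 0))[0]
        assigned[members] = len(blocks)
        blocks.append(members)
    return blocks                        # row-index groups, one per block

# Two noisy blocks, shuffled, then recovered:
rng = np.random.default_rng(0)
A = np.vstack([rng.random((4, 8)) * 0.1 + [1, 1, 1, 1, 0, 0, 0, 0],
               rng.random((4, 8)) * 0.1 + [0, 0, 0, 0, 1, 1, 1, 1]])
print(block_diagonalize(A[rng.permutation(8)]))

The fast NN version would replace the explicit all-pairs similarity with
a learned embedding, but the goal (recovering the block structure) is the
same.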

In a very abstract, hand-wavey fashion: there is this general concept of
"integrated information". The unsupervised natural-language-learning
project is all about finding those parts which are least integrated, and
performing explicit cuts there. What remains are the highly-integrated
parts, grouped into classes: nouns, verbs, morphemes, syntactic
relations, semantic similarity, etc. I guess you could say that it's
"discrimination", but the field is not some 2D pixel field; it is a
certain abstract graph.
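
As a cartoon of "cut at the least-integrated places" (the graph, the
weights, and the threshold below are all invented; the weights stand in
for whatever integration measure is actually used, mutual information,
say): delete the weakest edges and read off the connected components that
survive.

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("the", "dog", 3.2), ("dog", "barked", 2.9),   # tightly integrated
    ("a", "cat", 3.0), ("cat", "slept", 2.7),      # tightly integrated
    ("barked", "a", 0.2),                          # weak link: cut here
])

# Cut every edge whose weight falls below the threshold.
weak = [(u, v) for u, v, w in G.edges(data="weight") if w < 1.0]
G.remove_edges_from(weak)

# The surviving components are the highly-integrated classes.
print([sorted(c) for c in nx.connected_components(G)])
# e.g. [['barked', 'dog', 'the'], ['a', 'cat', 'slept']]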

-- Linas

-- 
cassette tapes - analog TV - film cameras - you
