YKY,
When the mental-state length/density is given by a balance of "shades"
spanning the truth range of each proposition, it's called free will.
C
On 20.03.2017 07:45, Ben Goertzel wrote:
YKY,
As we use it in OpenCog, our weighted/labeled hypergraph is not really
best thought of as equivalent to a set of propositions... It's more
flexible than that -- some parts of it are more like predicate
calculus, some are more like probabilistic grammars, etc.
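A weighted/labeled hyperedge of this flavor can be sketched as a minimal Python structure. (The field names and the "EvaluationLink" label below are illustrative assumptions for this sketch, not OpenCog's actual Atom API.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HyperEdge:
    """A weighted, labeled hyperedge: the label says how to read it
    (predicate-calculus link, grammar rule, etc.), and the weight is a
    truth/probability annotation. Names here are illustrative only."""
    label: str
    targets: tuple
    weight: float = 1.0

# One edge read as a predicate-calculus-style link, another as a grammar rule:
e1 = HyperEdge(label="EvaluationLink", targets=("loves", "john", "mary"), weight=0.9)
e2 = HyperEdge(label="GrammarRule", targets=("S", "NP", "VP"), weight=0.7)
```

The point of the label field is exactly the flexibility described above: the same hypergraph container holds edges with different semantics.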
As for simplicial complexes, the complexity (sorry ;p) of the mapping
from hypergraphs to simplicial complexes comes when you move from
topology to geometry. Topologically the mapping is clear.
Geometrically it's not clear to me at the moment, because the area of
the face of a simplicial complex (as one would normally compute it)
seems to have no obvious useful interpretation in terms of the
semantics of the hyper-edge that is the boundary of the face....
Unless one does something new/strange, which I haven't figured out yet
(but bear in mind, this is just some amusing background-thinking I'm
doing, not really needed for the main stream of OpenCog development...
I just like to keep the creative thinking going...)
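The topological part of the mapping above can be sketched concretely: a hyperedge over k nodes becomes a (k-1)-simplex, and taking every face of every such simplex (the downward closure) yields a simplicial complex. A minimal sketch, assuming hyperedges are given as frozensets of node labels:

```python
from itertools import combinations

def simplicial_closure(hyperedges):
    """Map a hypergraph to a simplicial complex by downward closure:
    each hyperedge of k nodes becomes a (k-1)-simplex together with
    all of its faces (every non-empty subset of the edge)."""
    complex_ = set()
    for edge in hyperedges:
        for k in range(1, len(edge) + 1):
            for face in combinations(sorted(edge), k):
                complex_.add(frozenset(face))
    return complex_

# A single 3-node hyperedge yields a filled triangle:
# 3 vertices + 3 edges + 1 two-dimensional face = 7 simplices.
faces = simplicial_closure([frozenset({"a", "b", "c"})])
```

This captures only the topology; as noted above, what geometric quantity (e.g. face area) should mean semantically is the open question.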
ben
On Sun, Mar 19, 2017 at 11:56 PM, YKY (Yan King Yin, 甄景贤)
<[email protected]> wrote:
Thanks for telling me that hypergraphs are simplicial complexes...
I reckon a hypergraph can be broken down into a list of subsets
(drawn from the powerset of its nodes). This could also be viewed as
a list of propositions: 1 proposition = 1 hyper-edge.
So, the hypergraph representation is pretty much equivalent to a
set of propositions.
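That equivalence can be sketched in a few lines of Python: the hypergraph is a set of frozensets over the node set, and each hyperedge reads as one proposition. (The node labels below are hypothetical examples.)

```python
# Hypergraph over a node set, stored as a set of hyperedges.
# Each hyperedge is a frozenset drawn from the powerset of nodes,
# and is read as one proposition relating its member nodes.
hyperedges = {
    frozenset({"john", "loves", "mary"}),  # "john loves mary"
    frozenset({"john", "mary"}),           # "john and mary are related"
}

def as_propositions(edges):
    """View the hypergraph as a set of propositions:
    1 hyper-edge = 1 proposition (here, a sorted tuple of symbols)."""
    return {tuple(sorted(e)) for e in edges}

props = as_propositions(hyperedges)
```

Note this encoding loses argument order inside an edge; a practical scheme would label edge positions, which is one place where the plain set-of-subsets view starts to strain.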
In my new theory I'm still using the set-of-propositions knowledge
representation, although the propositions are mapped into a vector
space, so that they can be acted on by a deep neural net.
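The proposition-to-vector step could be sketched, for instance, with a simple hashed bag-of-symbols embedding. (The dimension and the hashing scheme here are illustrative stand-ins for a learned embedding, not the theory's actual encoding.)

```python
import hashlib
import math

DIM = 16  # illustrative embedding dimension (an assumption)

def embed(proposition, dim=DIM):
    """Map a proposition (a tuple of symbols) to a fixed-size unit vector
    by hashing each symbol to a coordinate -- a crude stand-in for a
    learned embedding that a deep net could consume."""
    vec = [0.0] * dim
    for sym in proposition:
        h = int(hashlib.md5(sym.encode("utf-8")).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

v = embed(("john", "loves", "mary"))
```

Once every proposition lives in the same vector space, a network can process a knowledge base as an unordered set (or sequence) of such vectors.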
I'm wondering if the hypergraph → simplicial complex idea could
lead to a drastically different kind of representation structure,
unlike the set-of-propositions one?
2) Even if you have probability distributions over the
hypergraphs, that doesn't give you much leverage. The bottleneck
of AGI is in the learning algorithm... I think we should focus on
how to make /that/ faster
YKY
*AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
<https://www.listbox.com/member/archive/rss/303/19237892-5029d625>
| Modify <https://www.listbox.com/member/?&> Your Subscription
[Powered by Listbox] <http://www.listbox.com>
--
Ben Goertzel, PhD
http://goertzel.org
“Our first mothers and fathers … were endowed with intelligence; they
saw and instantly they could see far … they succeeded in knowing all
that there is in the world. When they looked, instantly they saw all
around them, and they contemplated in turn the arch of heaven and the
round face of the earth. … Great was their wisdom …. They were able to
know all....
But the Creator and the Maker did not hear this with pleasure. … ‘Are
they not by nature simple creatures of our making? Must they also be
gods? … What if they do not reproduce and multiply?’
Then the Heart of Heaven blew mist into their eyes, which clouded
their sight as when a mirror is breathed upon. Their eyes were covered
and they could see only what was close, only that was clear to them.”
-- Popol Vuh (holy book of the ancient Mayas)