Lukasz,
Thanks!
To me, your "logical semantics" and "linguistic semantics" correspond
to "meaning of concepts" and "meaning of words", respectively, and the
latter is a subset of the former, as far as an individual is
concerned.
Some random comments on the Multinet material:
*. Principal requir
Lukasz, I am very pleased with my implementation of the few Double R Grammar
rules required to incrementally parse "the book is on the table", which is an
example sentence from Jerry Ball's paper. Dr. Ball is a proponent of
cognitively plausible NLP architectures.
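The incremental, no-backtracking style of parsing that Double R Grammar calls for can be illustrated with a toy sketch. This is not Steve's implementation or Ball's formalism; it is a hypothetical minimal example, assuming a tiny fixed lexicon, that consumes the example sentence one word at a time and builds the two structures Double R emphasizes: referring expressions ("the book", "the table") and a relational frame headed by the preposition.

```python
# Hypothetical sketch of incremental parsing in the Double R spirit:
# each word is consumed exactly once, with no backtracking, and the
# result separates referring expressions from the relational frame.
# The lexicon and structures are illustrative assumptions, not Ball's.

DETERMINERS = {"the", "a"}
NOUNS = {"book", "table"}
RELATIONS = {"on", "in", "under"}
COPULAS = {"is"}

def parse_incrementally(sentence):
    """Consume one word at a time, left to right, never backtracking."""
    refs = []           # completed referring expressions: (det, noun)
    relation = None     # head of the relational frame, e.g. "on"
    pending_det = None  # determiner waiting for its head noun
    for word in sentence.lower().split():
        if word in DETERMINERS:
            pending_det = word
        elif word in NOUNS:
            refs.append((pending_det, word))  # close the referring expr
            pending_det = None
        elif word in RELATIONS:
            relation = word
        elif word in COPULAS:
            pass  # the copula's predication is ignored in this toy
    if relation and len(refs) == 2:
        # first referring expression stands in <relation> to the second
        return (relation, refs[0], refs[1])
    return None

print(parse_incrementally("the book is on the table"))
# ('on', ('the', 'book'), ('the', 'table'))
```

The point of the sketch is only the control flow: at every word the parser commits to a partial structure, which is the cognitively plausible property Ball argues for.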
-Steve
Stephen L. Reed
Art
Jiri Jelinek wrote:
Richard,
> http://susaro.com/
1) the "safety" stuff.. - For a while, AGI will IMO be
abuse-vulnerable = as safe/unsafe as those who control it.
2) "[AI] does not devalue us".. - agreed.. My view: Problems for AIs,
work for robots, feelings for us. Qualia - that's where the value is.
3) AI thin
Steve,
I'm only on the 7th page of the Double R Grammar paper so I'm rushing
ideas here, but it is interesting to see how Multinet, while rooted in
Conceptual Dependency Theory / Case Grammar and treating the concepts
it talks about as mental realities, lands quite close to the
philosophy of
>> I want to return to what seems to me the high-school-naive idea of how an
>> AGI's knowledge, or any body of knowledge, can and/or does grow - i.e.
>> linearly, mathematically and logically.
I would argue that the idea that knowledge grows linearly is far worse than
naive. Knowledge is all about collap
Just noticed that last month, a computer program beat a professional Go player
at a 9x9 game, winning one game out of four. First time ever in a non-blitz setting.
http://www.earthtimes.org/articles/show/latest-advance-in-artificial-intelligence,345152.shtml
http://www.computer-go.info/tc/
-
On Wed, Apr 9, 2008 at 6:03 AM, Stephen Reed <[EMAIL PROTECTED]> wrote:
>
> Thanks for the compliment Lukasz. I am reading your slides and here are my
> comments:
>
> (1) I had seven years experience with the Cyc project. Would you agree
> that Cyc aspires to be a KRS as you define it?
Well, a
I want to return to what seems to me the high-school-naive idea of how an AGI's
knowledge, or any body of knowledge, can and/or does grow - i.e. linearly,
mathematically and logically.
Correct me if I'm wrong, but I haven't seen any awareness in AI of the huge
difficulties that result from the problem of: how do you t