----- Forwarded message from Giuseppe Ciaccio [EMAIL PROTECTED] -----
From: Giuseppe Ciaccio [EMAIL PROTECTED]
Date: Thu, 3 Aug 2006 15:26:15 +0200 (CEST)
To: beowulf@beowulf.org
Subject: [Beowulf] new release of GAMMA and MPI/GAMMA
Hello,
this is to inform you that a new release of the Genoa
I tend to agree with Richard's view and I may build an AGI with symbolic, non-numerical inference.
1. As Russell pointed out, if the priors are not known, or are known only to extremely low precision, Bayes' rule is not very applicable. Number crunching with priors of 1-2 bits of precision is garbage in, garbage out.
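The sensitivity being described can be made concrete. The sketch below (all numbers are illustrative, not from the thread) applies Bayes' rule twice: once with a hypothetical "true" prior, and once with the same prior coarsened to roughly 2 bits of precision. The posteriors diverge badly.

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    num = likelihood_h * prior
    return num / (num + likelihood_not_h * (1.0 - prior))

true_prior = 0.03     # the (unknowable) true prior -- illustrative
coarse_prior = 0.25   # the same prior, known to only ~2 bits of precision

p_true = posterior(true_prior, 0.9, 0.1)
p_coarse = posterior(coarse_prior, 0.9, 0.1)
print(p_true, p_coarse)  # roughly 0.22 vs 0.75
```

With the same evidence, the coarsened prior flips the conclusion from "probably false" to "probably true", which is the "garbage in, garbage out" point.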
Richard,
Thanks for taking the time to explain your position. I actually agree
with most of what you wrote, though I don't think it is inconsistent with
my point, namely that beliefs do need numerical truth values.
Let me explain briefly (I have to leave soon). In an AGI system (at
least in mine), a
YKY:
(1) Your worry about the Bayesian approach is reasonable, but it is
not the only possible way to use numerical truth value --- even Ben
will agree with me here. ;-)
(2) Accuracy is not a big problem, but if you do some experiments on
incremental learning, you will soon see that 1-2 digits
Hi,
It's easy enough to write out algebraic rules for manipulating fuzzy
qualifiers like "very likely", "may", and so forth. It may well be
that the human mind uses abstract, intuitive, algebraic-like rules for
manipulating these, instead of or in parallel to more quantitative
methods...
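The algebraic, non-quantitative manipulation Pei describes can be sketched as rules over an ordered scale of qualifiers. The scale and the min/max combination rules below are illustrative assumptions, not a proposal from the thread; the point is that no arithmetic on degrees is involved, only ordering.

```python
# A toy algebra over symbolic likelihood qualifiers -- purely ordinal,
# no numerical truth values.
SCALE = ["impossible", "unlikely", "may", "likely", "very likely", "certain"]
RANK = {q: i for i, q in enumerate(SCALE)}

def both(a, b):
    """'A and B' is at most as plausible as the weaker conjunct."""
    return SCALE[min(RANK[a], RANK[b])]

def either(a, b):
    """'A or B' is at least as plausible as the stronger disjunct."""
    return SCALE[max(RANK[a], RANK[b])]

print(both("very likely", "may"))    # -> may
print(either("unlikely", "likely"))  # -> likely
```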
However,
Pei
I think we are very much in agreement, though perhaps our main
difference is in the emphasis, and the exact role played by the
numerical truth value. I certainly want to emphasize that I think
this *is* calculated sometimes. (And I agree that it is not really
equivalent to a
Our latest news flash: http://adaptiveai.com/news/
News Flash
Our project is progressing well, and we are once again looking to
expand our team.
This is an opportunity for a select few individuals to become
members of our core team, and to significantly contribute to the creation
Let me reply to everyone here...
Pei: You said non-numeric heuristics (such as endorsement theory) may run into problems. Yes, but I believe those problems can be solved using further heuristics (e.g., see the Wikipedia article on the Nixon diamond). If you resolve the Nixon diamond by referring to
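For readers unfamiliar with the Nixon diamond mentioned above: it is the classic default-reasoning conflict in which two default rules both fire and yield contradictory conclusions. A minimal sketch (the representation is mine, not from the thread):

```python
# The Nixon diamond: Quakers are typically pacifists, Republicans are
# typically not, and Nixon is both -- so naive default firing derives
# a contradiction.
defaults = [
    ("quaker", "pacifist"),
    ("republican", "not pacifist"),
]

facts = {"quaker", "republican"}  # Nixon is both

conclusions = {c for premise, c in defaults if premise in facts}
print(sorted(conclusions))  # -> ['not pacifist', 'pacifist']
```

Any resolution heuristic has to break this tie, e.g. by rule priority or by withholding both conclusions.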
Ben: I think the problem of contextuality may be solved like this:
Examples:
John and Mary have many kids. (like, 10)
This Chinese restaurant has many customers. (like 100s)
Many people in Africa have AIDS. (like 10s of millions)
so I propose a rule like this:
IF
n is significantly the
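The proposed rule is cut off in the archive, but one possible reading, consistent with the three examples, is that "many X" holds when the count significantly exceeds what is typical for that context. The sketch below is a hypothetical rendering of that reading; the typical values and the factor of 2 are illustrative assumptions.

```python
def is_many(n, typical, factor=2.0):
    """One reading of the truncated rule: 'many' iff n significantly
    exceeds the typical count for this context."""
    return n >= factor * typical

print(is_many(10, typical=2))    # kids per family: 10 vs ~2 -> True
print(is_many(3, typical=2))     # 3 kids: not 'many' under this rule -> False
print(is_many(400, typical=50))  # restaurant customers -> True
print(is_many(30_000_000, typical=10_000_000))  # continent scale -> True
```

Whatever the exact formulation, the threshold for "many" has to come from the context, which is exactly Ben's contextuality worry.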
On 8/4/06, Yan King Yin [EMAIL PROTECTED] wrote:
Now, figuring out all the heuristical NTV /
symbolic qualifier's update rules, such that an AGI will
always be internally consistent, and provably increasing
in accuracy, is a very non-trivial task.
Well, indeed it is of course impossible, no matter
No. IMO, a simple rule like this does not correctly capture human
usage of qualifiers across contexts, and is not adequate for AI
purposes
Perhaps this rule is a decent high-level approximation, but AGI
requires better...
-- Ben
On 8/4/06, Yan King Yin [EMAIL PROTECTED] wrote:
Ben:
Hi,
Google has announced the release of a trillion-word training corpus
including one billion five-word sequences that appear at least 40 times
in their database of web pages.
More at
http://googleresearch.blogspot.com/2006/08/all-our-n-gram-are-belong-to-you.html
The 6 DVD set will be
On 8/5/06, Russell Wallace [EMAIL PROTECTED] wrote:
Now, figuring out all the heuristical NTV / symbolic qualifier's update rules, such that an AGI will always be internally consistent, and provably increasing in accuracy, is a very non-trivial task.
Well indeed it is of course impossible, no