Aaron,

On Mon, Apr 1, 2013 at 1:13 PM, Aaron Hosford <[email protected]> wrote:

> The BIG question is: What good are a lot of 95% "facts". You can't rely on
>> ANYTHING, and as soon as you start putting those "facts" together in
>> combination, the accuracy falls WAY below 95%. The more "facts" that are
>> strung together, the less accurate the results. For example, the likelihood
>> of just 10 x 95% "facts" all being correct is only 60%, and the likelihood
>> of getting 20 correct is only 36%. Hence, any "AGI" applying limitless
>> computing capability to make sense of Wikipedia is only going to generate a
>> lot of gibberish, possibly including some jewels, but without any capacity
>> to separate the jewels from the broken glass without access to the real
>> world.
>
>
> This only makes sense if the probabilities of the list of "facts" cannot
> be leveraged for consistency-based error correction. If I can take two 95%
> likely facts and use them to identify a third which is inconsistent with
> them, I can often correct that third one based on how it conflicts with the
> first two. It doesn't make for guaranteed 100% accuracy, but it can
> significantly improve the error rate. It's a similar principle to boosting
> in machine learning.
> http://en.wikipedia.org/wiki/Boosting_(machine_learning) This is why I
> think it's important to integrate knowledge.
>
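Aaron's consistency-based correction can be sketched as a toy simulation. The setup below is illustrative (my own construction, not from either post): three independent 95%-accurate copies of the same fact, where a copy that conflicts with the other two is corrected by majority vote — a minimal stand-in for the boosting-style combination he mentions.

```python
import random

random.seed(0)

P_CORRECT = 0.95   # accuracy of each individual "fact" source
TRIALS = 100_000

def noisy_copy(truth):
    """Report the true value with probability P_CORRECT, else its negation."""
    return truth if random.random() < P_CORRECT else not truth

single_ok = 0
voted_ok = 0
for _ in range(TRIALS):
    truth = True
    copies = [noisy_copy(truth) for _ in range(3)]
    single_ok += copies[0] == truth
    # Consistency check: a copy that disagrees with the other two
    # is overruled, i.e. we keep the majority answer.
    voted = sum(copies) >= 2
    voted_ok += voted == truth

print(f"single-source accuracy: {single_ok / TRIALS:.3f}")  # near 0.95
print(f"majority-vote accuracy: {voted_ok / TRIALS:.3f}")   # near 0.993
```

Analytically, three 95% sources under majority vote yield 0.95^3 + 3(0.95^2)(0.05) ≈ 0.993 — not guaranteed correctness, but a large improvement in the error rate, as Aaron says.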

Yes, I describe this in my patent application. However, this is ONLY
interesting for applications that are error-tolerant. Most proposed AGI
applications involve stringing many facts together, which compounds the
error rate, yet those applications are NOT particularly error-tolerant.
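For reference, the compounding arithmetic behind the quoted numbers is just 0.95 raised to the number of independent facts strung together:

```python
# P(all n independent facts correct) = 0.95 ** n,
# assuming each fact is independently 95% likely to be true.
for n in (1, 10, 20):
    print(f"P(all {n:2d} facts correct) = {0.95 ** n:.2f}")
# n = 10 gives 0.60 and n = 20 gives 0.36, matching the quoted post.
```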

Steve



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
