Matt Mahoney wrote:
--- Mike Tintner <[EMAIL PROTECTED]> wrote:

My point was: how do you test the *truth* of items of knowledge? Google tests
the *popularity* of items. Not the same thing at all. And it won't work.

It does work because the truth is popular.  Look at prediction markets.  Look
at Wikipedia.  It is well known that groups as a whole make better decisions
than the individuals within them (e.g. democracies vs. dictatorships).
Combining knowledge from independent sources and testing their reliability is
a well-known machine learning technique, which I use in the PAQ data
compression series.  I understand that the majority can sometimes be wrong,
but the truth eventually comes out in a marketplace that rewards truth.
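The general idea can be sketched roughly as follows: each source emits a
probability, and a mixer combines them with weights that grow for sources
that have been right and shrink for sources that have been wrong. A minimal
illustration in Python (the names and parameters below are hypothetical,
chosen for clarity, and not taken from the actual PAQ sources):

  import math

  def stretch(p):
      # Map a probability in (0,1) to the real line (logit).
      return math.log(p / (1.0 - p))

  def squash(x):
      # Inverse of stretch: map a real value back to a probability.
      return 1.0 / (1.0 + math.exp(-x))

  class Mixer:
      # Combine probability estimates from independent sources.
      # Weights adapt toward sources that have proven reliable,
      # so consistently wrong sources are gradually discounted.
      def __init__(self, n_sources, lr=0.01):
          self.w = [0.0] * n_sources
          self.lr = lr
          self.st = [0.0] * n_sources

      def mix(self, probs):
          # Weighted sum in the logit domain, then squash to a probability.
          self.st = [stretch(p) for p in probs]
          return squash(sum(w * s for w, s in zip(self.w, self.st)))

      def update(self, p_mixed, outcome):
          # Shift weight toward the sources that predicted the actual
          # outcome (0 or 1) more strongly.
          err = outcome - p_mixed
          self.w = [w + self.lr * err * s for w, s in zip(self.w, self.st)]

  m = Mixer(3)
  p = m.mix([0.9, 0.6, 0.2])   # three independent estimates of the same bit
  m.update(p, 1)               # the bit turned out to be 1

Mixing in the logit domain rather than averaging probabilities directly gives
confident, historically reliable sources more influence, while a source that
is repeatedly confident in the wrong direction is driven to a negative weight
and effectively ignored or inverted.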

Perhaps you have not read my proposal at http://www.mattmahoney.net/agi.html
or don't understand it.

Some of us have read it, and it has nothing whatsoever to do with Artificial Intelligence. It is a labor-intensive search engine, nothing more.

I have no idea why you would call it an AI or an AGI. It is not autonomous and contains no thinking mechanisms, nothing. Even as a "labor-intensive search engine" there is no guarantee it would work, because the conflict-resolution issues are all complexity-governed.

I am astonished that you would so blatantly call it something that it is not.



Richard Loosemore
