People have been pursuing structured knowledge representation since the
1950s, but it's a dead end. The Cyc project was the biggest failure: it
lacked both a natural language interface and a learning algorithm. More
recent approaches, like YKY's logic systems, Ben Goertzel's
Webmind/Novamente/OpenCog/Hyperon, and Pei Wang's NARS, use hybrid
probabilistic logic systems but still require expensive hand coding of
knowledge, which didn't seem to be happening before their authors went
quiet on this list years ago.

The Hutter prize is not about representing knowledge. It's about
intelligence as Turing defined it, by the imitation game. It would be nice
if we could look at a model's giant matrices and understand what real-world
knowledge they represent, but we can't, because that would violate
Wolpert's theorem: the better a system can predict your actions, the worse
you can predict its actions. You can have intelligence or you can have
control, but not both.

Control requires predictability. Prediction tests understanding. Prediction
measures intelligence. Compression measures prediction.

-- Matt Mahoney, [email protected]

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-Mc1c54d08cd0b93ec05a41104
Delivery options: https://agi.topicbox.com/groups/agi/subscription
