It strikes me that you are just producing words to obfuscate the issue, Matt. Some insights might apply here. Wolpert's theorem on... basically, it seems to me to be about non-abstraction, or alternatively, contradictory abstraction. That has been my theme for years, and I interpret it as a power of chaos. But you insist on interpreting it in a purely negative way: you can only take it to mean that we have to keep doing whatever we're doing now.
Control might be impossible, yes. But to you that is not creativity; it just means we have to keep doing what we're doing now. It seems to me that the true goals of the Hutter Prize are being re-evaluated to be whatever has turned out to work elsewhere, while the field was working on something else. What matters most is where the goals of a current research agenda are driving us next, and there again the Hutter Prize seems to be failing. Because, inspired by the Hutter Prize, it seems you can only think of continuing to do what is being done now. Why do anything else? You say to Dorian that current language models are correct right now... (correct at doing whatever you've decided the Hutter Prize was always about doing). Where is your forward research agenda amid all this negativity?

On Thu, Sep 4, 2025 at 11:40 PM Matt Mahoney <[email protected]> wrote:

> People have been pursuing structured knowledge representation since the
> 1950s, but it's a dead end. The Cyc project was the biggest failure because
> it lacked a natural language interface and a learning algorithm. More
> recent approaches, like YKY's logic systems, Ben Goertzel's
> Webmind/Novamente/OpenCog/Hyperon, and Pei Wang's NARS, use hybrid systems
> of probabilistic logic but still require expensive hand coding of
> knowledge, which didn't seem to be happening before they went quiet on
> this list years ago.
>
> The Hutter Prize is not about representing knowledge. It's about
> intelligence as defined by Turing. It would be nice if we could look at
> the giant matrices to understand what real-world knowledge they represent,
> but we can't, because that violates Wolpert's theorem: the better a system
> can predict your actions, the worse you are at predicting its actions. You
> can have intelligence or you can have control, but not both.
>
> Control requires predictability. Prediction tests understanding.
> Prediction measures intelligence. Compression measures prediction.
>
> --
> Matt Mahoney, [email protected]
>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> Permalink:
> <https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-Mc1c54d08cd0b93ec05a41104>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-M6eebbd8d472bfe1f56a19a42
Delivery options: https://agi.topicbox.com/groups/agi/subscription
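For readers following the "compression measures prediction" chain in the quoted message: it rests on the standard code-length identity, where a predictor that assigns probability p to the symbol that actually occurs can encode it in about -log2(p) bits with an arithmetic coder, so total compressed size equals the predictor's cumulative log-loss. A minimal sketch, assuming a toy Laplace-smoothed unigram model (a hypothetical illustration, not any actual Hutter Prize entry):

```python
# Sketch of the code-length identity: compressed size = cumulative log-loss.
# The unigram model here is a hypothetical toy, assumed for illustration.
import math
from collections import Counter

def code_length_bits(text, model_counts, alphabet_size=256):
    """Total bits an arithmetic coder would need, given per-symbol
    probabilities from Laplace-smoothed counts."""
    total = sum(model_counts.values())
    bits = 0.0
    for ch in text:
        # Smoothed probability the model assigns to the observed symbol.
        p = (model_counts[ch] + 1) / (total + alphabet_size)
        bits += -math.log2(p)  # ideal code length for this symbol
    return bits

text = "abracadabra"
counts = Counter(text)            # model trained on the same text (toy setup)
bits = code_length_bits(text, counts)
uniform = len(text) * 8           # 8 bits/char with no prediction at all
assert bits < uniform             # better prediction -> shorter code
```

The better the model's probabilities track the data, the smaller the sum of -log2(p) terms, which is exactly why a compression benchmark doubles as a prediction benchmark.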
