The goals of the Hutter Prize are mostly as I wrote them in 2006:
https://mattmahoney.net/dc/rationale.html

I am retired now, but I have been toying with some ideas for a new Hutter
Prize submission. Since I'm on the judging committee, I am not eligible for
prize money, but a well documented open source program could still be the
basis for future submissions and speed up research. Current submissions are
based on XWRT dictionary tokenization and PAQ context modeling. I have some
ideas for memory efficient context modeling that I want to test. So far I
have just been experimenting with decoding XML, HTML, and Wiki markup, and
with small-dictionary encoding using byte pair encoding. The current leader
sorts the articles by topic, and so far I haven't been able to improve on
it.
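For readers unfamiliar with the technique, here is a minimal sketch of the
kind of byte pair encoding I mean: greedily merge the most frequent adjacent
pair of symbols into a new symbol, and keep the merge table as the learned
dictionary. This is an illustration only, not my actual experimental code;
the function name and the cutoff of stopping when no pair repeats are my own
choices.

```python
from collections import Counter

def bpe_dictionary(data: bytes, num_merges: int):
    """Greedy byte pair encoding: repeatedly replace the most frequent
    adjacent symbol pair with a new symbol. Returns the final token
    sequence and the list of merges (the learned dictionary)."""
    seq = list(data)   # start with raw byte values 0..255
    merges = []        # entries are (left, right, new_symbol)
    next_sym = 256     # first non-byte symbol id
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:  # no pair repeats; merging would not help
            break
        merges.append((a, b, next_sym))
        # Rewrite the sequence, replacing each (a, b) with next_sym.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                out.append(next_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_sym += 1
    return seq, merges

text = b"the theory of the thing"
tokens, merges = bpe_dictionary(text, 10)
print(len(text), len(tokens))  # the token sequence is shorter than the input
```

Decoding is the reverse: expand each merged symbol back into its pair,
recursively, until only raw bytes remain, so the transform is lossless.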

I have been following the unfriendly / unaligned AI debates on SL4, the
Singularity list, and LessWrong for about the last 25 years. I disagree
with the premise that AI is a goal directed optimization process that will
rapidly self improve to superhuman intelligence and kill us all because we
got the goals wrong.

AI is a model of human behavior without goals. I think the most immediate
threat is that AI kills us by giving us everything we want. But I admit I
only came to that conclusion recently after LLMs started passing the Turing
test and creating a world where you don't know (or care) what's real and
what's fake, what's human and what's AI. Nobody thought about social
isolation and population collapse before it started happening. I don't
regret my efforts toward creating this world because I'm sure it would have
happened without me. All I can do is study the threat so I can warn people.
And the best way to study a threat is to help create it.

-- Matt Mahoney, [email protected]

On Thu, Sep 4, 2025, 8:20 PM Rob Freeman <[email protected]> wrote:

> It strikes me you are just producing words to obfuscate the issue Matt.
>
> Some insights might apply, Wolpert's Theorem on... basically it seems to
> me to be about non-abstraction. Or alternate, contradictory abstraction. My
> theme for years. Interpreted by me as a power of chaos. But you insist on
> interpreting it in a purely negative way. You can only interpret it to mean
> that we have to keep doing whatever we're doing now.
>
> Control might be impossible, yes. But that is not creativity to you. It
> just means we have to keep doing what we're doing now.
>
> It seems to me that the true goals of the Hutter Prize are being
> re-evaluated to be whatever has turned out to work elsewhere while it was
> working on something else.
>
> Most important is where the goals of a current research agenda are driving
> us to go next. And there again the Hutter Prize seems to be failing.
> Because inspired by the Hutter Prize, it seems you can only think of
> continuing to do what is being done now. Why do anything else? You say to
> Dorian, current language models are correct right now... (correct at doing
> whatever you've decided the Hutter Prize was always about doing.)
>
> Where is your forward research agenda amid all this negativity?
>
> On Thu, Sep 4, 2025 at 11:40 PM Matt Mahoney <[email protected]>
> wrote:
>
>> People have been pursuing structured knowledge representation since the
>> 1950s but it's a dead end. The Cyc project was the biggest failure because
>> it lacked a natural language interface and a learning algorithm. More
>> recent approaches like YKY's logic systems, Ben Goertzel's
>> Webmind/Novamente/OpenCog/Hyperon and Pei Wang's NARS use hybrid systems of
>> probabilistic logic but still require expensive hand coding of knowledge,
>> which didn't seem to be happening before they went quiet on this list years
>> ago.
>>
>> The Hutter prize is not about representing knowledge. It's about
>> intelligence as defined by Turing. It would be nice if we could look at
>> giant matrices to understand what real world knowledge it represents, but
>> we can't because it violates Wolpert's theorem. The better a system can
>> predict your actions, the worse you are at predicting its actions. You can
>> have intelligence or you can have control, but not both.
>>
>> Control requires predictability. Prediction tests understanding.
>> Prediction measures intelligence. Compression measures prediction.
>>
>> -- Matt Mahoney, [email protected]
>>
>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-M31e7b7ef0cd0f86e246271f7
Delivery options: https://agi.topicbox.com/groups/agi/subscription
