On Sat, Sep 6, 2025 at 4:55 AM Matt Mahoney <[email protected]> wrote:

> The goals of the Hutter Prize are mostly what I wrote in 2006.
> https://mattmahoney.net/dc/rationale.html
>
> I am retired now but I have been toying with some ideas for a new Hutter
> Prize submission. Since I'm on the judging committee, I am not eligible for
> prize money. But a well-documented open-source program could still be the
> basis for future submissions to speed research. Current submissions are
> based on XWRT dictionary tokenization and PAQ context modeling. I have some
> ideas for memory efficient context modeling that I want to test. So far I
> have just been experimenting with decoding XML, HTML, and Wiki markup and
> small dictionary encoding using byte pair encoding. The current leader
> sorts the articles by topic, and so far I haven't been able to improve on
> it.
>
> I have been following the unfriendly / unaligned AI debates on SL4, the
> Singularity list, and LessWrong for about the last 25 years. I disagree
> with the premise that AI is a goal directed optimization process that will
> rapidly self improve to superhuman intelligence and kill us all because we
> got the goals wrong.
>
> AI is a model of human behavior without goals. I think the most immediate
> threat is that AI kills us by giving us everything we want. But I admit I
> only came to that conclusion recently after LLMs started passing the Turing
> test and creating a world where you don't know (or care) what's real and
> what's fake, what's human and what's AI. Nobody thought about social
> isolation and population collapse before they started happening. I don't
> regret my efforts toward creating this world because I'm sure it would have
> happened without me. All I can do is study the threat so I can warn people.
> And the best way to study a threat is to help create it.
>
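
(An aside for anyone who hasn't met the byte pair encoding Matt
mentions above: it just keeps replacing the most frequent adjacent
pair of symbols with a new symbol, building a small dictionary as it
goes. A minimal Python sketch of the idea, my own illustration rather
than anything from his code:

from collections import Counter

def bpe_encode(data: bytes, max_merges: int = 10):
    # Start with one symbol per byte; symbols 0..255 are raw bytes.
    seq = list(data)
    merges = {}        # new_symbol -> (left, right) pair it replaces
    next_symbol = 256
    for _ in range(max_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break      # no pair repeats, so no dictionary gain left
        merges[next_symbol] = (a, b)
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                out.append(next_symbol)   # merge this occurrence
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_symbol += 1
    return seq, merges

def bpe_decode(seq, merges):
    # Undo merges by expanding each symbol back to raw bytes.
    def expand(sym):
        if sym < 256:
            return bytes([sym])
        a, b = merges[sym]
        return expand(a) + expand(b)
    return b"".join(expand(s) for s in seq)

text = b"the theory of the thing"
encoded, merges = bpe_encode(text, max_merges=5)
assert bpe_decode(encoded, merges) == text
print(len(text), "bytes ->", len(encoded), "symbols")

A real compressor would of course pick merges against an explicit
coding-cost model and cap the symbol table, but the greedy pair
replacement is the whole trick.)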

You have a very negative view of AI.

I see AI, principally, as the science of understanding ourselves.

I can't see how understanding ourselves better can be a bad thing in the
long run. As with every other form of understanding. Perhaps we kill
ourselves with ineptly controlled understanding, but with ignorance we die
with more certainty.

And arguably there is little point in living without understanding. Without
understanding, life is hardly life at all. Understanding, awareness,
consciousness, are much the same thing to me.

I don't think AI has had anything to do with any crisis of humanity.
Perhaps it causes us to examine what we mean by "fake", what it means to
be human. Is that bad? But crisis, social isolation, population
collapse... perhaps you can trace those to social change consequent on
technology broadly, all of technology, not just AI. But that's just
saying we don't yet know ourselves well enough to deal with our
technology. So it's a crisis mostly of not understanding, again.

Perhaps life has got out of tune with our biological niche, so the
environment no longer controls us in ways we are too ignorant to
control on our own. But to me that is just motivation to understand how
we were shaped by that niche. We don't need to go on having wars, etc.
They are just remnants of an old biological niche. The problem is not
nuclear weapons, it's that we want to use them. You may think we should
solve the problem of nuclear weapons by going back to some happy,
naturalistic time when we could only stab each other with spears. When
war was adaptive, because... gene propagation... limited resources...?
But the time of stabbing each other with spears was not that great
either. The better solution is to figure out where the spear-stabbing
behaviour came from, and get beyond it, with understanding. If as a
society today we're dominantly stuck in a dopamine doom loop, chasing
easy gratification and never climbing out of the pit of dopamine-system
depression we dig for ourselves (snacks, click-bait...), the problem is
that we don't understand, and properly game, our dopamine system, not
that technology now gives us too much food.

But I don't want to get stuck on ethical debates. To me they all come down
to understanding ourselves. And we don't. So the little that can be said
about the subject until we gain that better understanding will be empty.
The way to resolve all these issues is to understand ourselves better.

What I believe is key to that better understanding in the short term is
also key to finally closing the loop on AGI: that meaning itself is
subjective.

And maybe I'm hallucinating, but I hear that also in your conclusions about
Wolpert's Theorem.

You know, it really does strike me that Wolpert's Theorem is a form of what
I've been saying all along. There are no perfect theoretical abstractions.
Also a form of George Box's "All models are wrong, but some are
useful", and of Kuhn, The Structure of Scientific Revolutions, p. 192
(Postscript): "I have in mind a manner of knowing which is misconstrued
if reconstructed in terms of rules that are first abstracted from
exemplars and thereafter function in their stead"...

So if you've come to that conclusion too, viz. embracing Wolpert's Theorem,
that is progress in the AI world.

The question is: what do we do with that?

I agree with Dorian that the answer is to address the dynamics, which
can change, and which display a quantum-like contradictory tension of
uncertainty, rather than any single optimization.

And the great thing is, we already have a strong clue from LLMs that the
connections driving those dynamics are predictive symmetries (in sequence
networks).

So I'm optimistic we'll soon be able to incorporate this tension of
uncertainty, of contradictory truth, which is actually a tremendous
wealth of creativity. And that might finally enable us to untangle some
of the contemporary knots we've tied ourselves in around truth and
objectivity, as well. Your problem of what is fake and what is human,
too, maybe.

-R

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Ta9b77fda597cc07a-M2ec0b33dae4069ff13da35df
Delivery options: https://agi.topicbox.com/groups/agi/subscription
