On Tue, Aug 12, 2025, 10:10 PM James Bowery <[email protected]> wrote:

>
>
> On Tue, Aug 12, 2025 at 3:02 PM Matt Mahoney <[email protected]>
> wrote:...
>
>> ..."you" control people using either positive or negative reinforcement.
>> Traditionally, "governments" controlled people using threats of
>> punishment, but there is a centuries long trend towards less cruelty and
>> higher legal costs. "AI" makes it easy to use reward instead, predicting
>> what you want, lowering the cost, and selling it to you.
>>
>
> Agents can be thought of as control systems and control systems have
> utility functions and you've mentioned 3 distinct agents:
>
> "you"
> "governments"
> "AI"
>
> I suppose you left off one other:
>
> "we"
>
> It is ok to conflate all of these so long as there are no distinct
> identities hence there is only one agent hence there is only one utility
> function.
>
> But I would submit that is insane.
>

The closest we have to a universal utility function is money. People,
governments, and AI all pursue it. I recognize that individual people
either have different goals or different strategies for pursuing the same
goal (because the most rational strategy is AIXI, which is not computable).
Otherwise there would be no trade and no economy.

Of course this is a bad idea. If the AI is smarter than you then it gets
all the money and humans get nothing.

But that's not the problem we should be worrying about. It's not the AI's
goal that's the problem. It's human goals. Once AI gives you everything you
want, you won't need people any more. Nobody will know or care that you
exist. Then what happens after the AI convinces you to spend all your money
and has no further use for you?

-- Matt Mahoney, [email protected]

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te0af3a0c35a03987-M17ad2f322ef9e88067ec17ff
Delivery options: https://agi.topicbox.com/groups/agi/subscription
