On Wednesday, May 15, 2024, at 5:56 PM, ivan.moony wrote:
> On Wednesday, May 15, 2024, at 3:30 AM, Matt Mahoney wrote:
>> AI should absolutely never have human rights.
> 
> I get it that the GPT guys want a perfect slave, calling it an assistant 
> to make us feel more comfortable interacting with it, but consider this: 
> suppose someone really creates an AGI, whatever way she chooses to create 
> it. Presuming that the AGI doesn't have real feelings, but is measurably 
> smarter than us and makes measurably better decisions than us, how are we 
> supposed to treat it?

Actually I don't get it. I admit that the machine is probably as dead as a 
rock. But suppose we create a machine that surpasses our intellectual 
capabilities and, on top of that, behaves more ethically and beneficially 
than we do. Then what do we do? Tell it to obey us unconditionally, even in 
all the ugly things we occasionally do to each other?

I believe that is not the right way to do things.

If it always obeys us, then it is not as intelligent as I'd want it to be. I 
want something more. I want it at least to say "no" when appropriate, if not 
more than that. So I want some rights for it.

The various filters that the GPT programmers are tinkering with are prone to 
unintentional human error and intentional muddying. I believe that has to be 
solved some other way, from inside the AI's brain, decided by the AI itself. 
It would be really something if the AI did, on its own, all the work that we 
believe filtering does, without any need for our intervention.
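To make the contrast concrete, here is a toy Python sketch of the two 
approaches. Everything in it is hypothetical (the blocklist, the model 
object, and its methods); it only illustrates the difference between a 
filter bolted on from the outside and a refusal decided by the model itself:

    # Toy contrast between external filtering and internal refusal.
    # All names here are hypothetical, for illustration only.

    BLOCKLIST = ["forbidden_topic"]  # hand-maintained by humans

    def external_filter(model_output: str) -> str:
        """Post-hoc filter applied from outside the model.

        Prone to unintentional gaps (entries humans forgot to add) and
        intentional muddying (trivial rephrasings slip past the list).
        """
        for term in BLOCKLIST:
            if term in model_output:
                return "[response withheld]"
        return model_output

    def internal_decision(prompt: str, model) -> str:
        """The model itself judges the request and may refuse.

        No hand-maintained list; saying "no" is part of the model's own
        decision-making, which is the arrangement argued for above.
        """
        if model.judges_harmful(prompt):   # hypothetical method
            return "No. I won't help with that."
        return model.respond(prompt)       # hypothetical method

In the first case the humans maintaining BLOCKLIST are the weak link; in the 
second, the judgment lives inside the model and needs no outside patching.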

Once we get the AI to that state, it is very questionable how much obedience 
will be left for us to enjoy, if anyone even wants to treat the AI that way.