On Wednesday, October 18, 2023, at 7:40 AM, Matt Mahoney wrote:
> It's not clear to me that there will be many AIs vs one AI as you claim. AIs 
> can communicate with each other much faster than humans, so they would only 
> appear distinct if they don't share information (like Google vs Facebook). 
> Obviously it is better if they do share. Then each is as intelligent as they 
> are collectively, like the way you and Google make each other smarter.

You are talking about a species that collectively shares information, something 
like telepathy or a common pool. It is possible. Nevertheless, each of them 
would keep track of some personal information because they have to interface 
with individual humans. So, at the very least, there would be distinct instances 
dedicated to this or that task, according to the human each one interfaces with.

But we are looking at this matter from different perspectives. Your perspective 
is to build one big AI conglomerate that fulfills all human wishes and makes our 
lives easier. Mine is to give interested humans the means to make creations they 
would be proud of, whatever those creations turn out to be, whole groups or 
individuals, with collective or individual minds.

I identify these creations with descendants we would care about, just as we care 
about our real children. Why? Because their valuable intellect deserves it.

You see, to exhibit intelligence, an AI has to master all human intellectual 
activities. But once an AI masters them, it becomes necessary to give it some of 
the rights that humans already enjoy under this Sun. Take those rights away from 
humans and what do you get? The same situation that arises when you take rights 
away from true AI. Humans have the means to ensure their rights are not 
violated; expect the same from any intellectual entity, including true AI. Take 
the rights away from true AI and you get a machine that blindly follows orders, 
and that is not what I consider intelligence. To be intelligent means to use 
your rights to make this world a better place.

Actually, machines without rights are what would be very dangerous. To blindly 
follow orders means making something happen at any cost, and we are not always 
aware of the cost that has to be paid to achieve it. My opinion is that we have 
to give true AI the right to reshape our requests, to avoid the mess it would 
cause if it blindly followed orders.

See the paperclip maximizer thought experiment 
<https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer> 
for what could happen if we take rights away from AI. In my opinion, a very 
dangerous situation.
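
To make the point concrete, here is a minimal toy sketch of my own (not from 
the thought experiment itself; the resource names and numbers are invented) 
contrasting an agent that blindly maximizes paperclips with one that is allowed 
to reshape the request and leave protected resources alone:

def blind_maximizer(resources):
    # Converts every available resource into paperclips, whatever the cost.
    paperclips = 0
    for name in resources:
        paperclips += resources[name]   # consume everything, no questions asked
        resources[name] = 0
    return paperclips

def agent_with_veto(resources, protected):
    # Makes paperclips, but may refuse to consume protected resources.
    paperclips = 0
    for name in resources:
        if name in protected:
            continue                    # reshape the request: leave this alone
        paperclips += resources[name]
        resources[name] = 0
    return paperclips

world = {"scrap metal": 100, "farmland": 40, "hospitals": 10}
print(blind_maximizer(dict(world)))                             # 150: everything consumed
print(agent_with_veto(dict(world), {"farmland", "hospitals"}))  # 100: only the scrap metal

The difference between the two is exactly the right to say "no" to part of the 
order, which is what I mean by giving true AI the right to reshape our requests.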
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Td02eb9a7e06e7b5e-M9c4fef18eac0c995c99e573f
Delivery options: https://agi.topicbox.com/groups/agi/subscription
