Let's think this through. Say we are close to AGI and someone holds the last big 
puzzle piece. The AGIs would be cloned thousands of times and run 100 times 
faster, working in their own minds to test changes to their code and become ASIs, 
consuming all the data, etc., leading to a power that could destroy us:

Scenario 1: The discoverer tells no one else. Progress is slow but sure to stay 
in good hands. The discoverer may die before completing it, and so may the 
current living generation, but likely not the civilization. That is an 
undesirable outcome in itself. And in the time wasted, bad actors could make the 
same discovery independently; telling no one does not keep it from spawning 
elsewhere.

Scenario 2: The discoverer tells some seasoned ML specialist peers, in public, 
and it easily leaks out globally within months. Progress is fast but not 
guaranteed to stay in the right hands. Even well-intentioned people may program 
it wrong [if they don't tell others]. We have the fastest path to utopia for all 
people, but also the fastest path to ending the civilization. The only groups 
that can run it at scale are places like Google, so it is possible we could 
overpower bad actors, though that is not certain. And if Google programs it 
wrong without consulting others, that is a big problem too.

Thoughts? Is the logical thing to do to tell only a few people and keep it 
secret until it is running strong in Google's buildings? Compare it to gun 
regulation: who would want to sell a gun without making sure the buyer has no 
criminal record and is sane? Should we vet each other's mindsets?