Candice Schuster wrote:
Ok Richard, let's talk about your scenario... define 'Mad', 'Irrational' and 'Improbable scenario'? Let's look at some of Charles Darwin's and Richard Dawkins's theories as examples in this improbable scenario. Ever heard of a Meme? Or the Memetic Theory? I would say that, based on these and the very probable scenario of AI combined with cognitive thought processes, the chances of 'survival of the fittest' start to look very rosy indeed. Remember, it's one Superpower against another, whether that be man against man or country against country. Furthermore, you need to look at the bigger picture... the ultimate goal here is, as A. Yost mentioned, dollar signs and Super Power for the organisations involved. Candice

Yup, know all about memes.

Suppose that the first AGI is completely friendly (grant me that assumption), and that it is encouraged to quickly self-improve until it can think at a thousand times human speed.

It says: "In order to ensure that the world stays safe, I need to take action now to modify all the other AGI projects on the planet, to make sure that they all have exactly the same motivation system that I have, and to ensure that no further developments in the future will lead to any unfriendly, malevolent systems. I have the technology to do this quietly, and peacefully, without harming the knowledge of these other systems (to the extent that they have knowledge or self-awareness at all). Would you like me to go ahead and do this?"

If you say "yes", nothing much will appear to happen, but from that point on all the rest of the AGI systems in the world will act, in effect, as one large friendly system.

Now, here is the response to your specific question: from that point on, there is not one single aspect of evolutionary systems that applies any more. There are no Darwinian pressures, no gene reassortment, no meme propagation, no commercial pressures, no genotype/phenotype distinctions, nothing. Every single aspect of the huge mechanism that we call "evolution" or "survival of the fittest" ceases to apply. All of that stuff that has been going on with bits of DNA competing with one another to make higher and higher organisms to serve their needs -- all of it is completely and utterly irrelevant to the further development of AGI systems.

So, you can cite evolutionary pressures, but you have to be very precise about what context you are imagining them to be operating in: after the first AGI is created, it all becomes irrelevant.

Before the first AGI is created, there are still some pressures, but I have given some reasons (in a separate post) why we could still be in a situation where most of those pressures are either nullified or simply will not have any time to operate.

Granted, there are assumptions in this scenario... but we should be talking about those assumptions explicitly, and in enormous detail, rather than trying to shoot down ideas about the future by simply citing the pressures of evolution or commercial competition. Only when we discuss the underlying details can we determine whether or not any of those "evolutionary" considerations even have a chance of playing a role, so we cannot shoot down the arguments by using the idea of evolution as a weapon.



Richard Loosemore.


P.S. Sorry that I seemed so testy last night: I should have been more diplomatic to both you and A. Yost. I had just spent a few days going over basic arguments that are decades old, for the Nth time, for the benefit of people who had not seen them before, and the sheer pointlessness of the effort just started to get to me.
