Matt Mahoney wrote:
Richard, I have no doubt that the technological wonders you mention will all
be possible after a singularity.  My question is about what role humans will
play in this.  For the last 100,000 years, humans have been the most
intelligent creatures on Earth.  Our reign will end in a few decades.

Who is happier: you, an illiterate medieval servant, or a frog in a swamp? This
is a different question from asking which you would rather be.  I mean
happiness as measured by an objective test, such as suicide rate.  Are you
happier than a slave who does not know her brain is a computer, or the frog
that does not know it will die?  Why are depression and suicide so prevalent in
humans in advanced countries and so rare in animals?

Does it even make sense to ask if AGI is friendly or not?  Either way, humans
will be simple, predictable creatures under their control.  Consider how the
lives of dogs and cats have changed in the presence of benevolent humans, or
how the lives of cows and chickens have changed in the hands of malevolent
ones.  Dogs are confined, well fed,
protected from predators, and bred for desirable traits such as a gentle
disposition.  Chickens are confined, well fed, protected from predators, and
bred for desirable traits such as being plump and tender.  Are dogs happier
than chickens?  Are they happier now than in the wild?  Suppose that dogs and
chickens in the wild could decide whether to allow humans to exist.  What
would they do?

What motivates humans, given our total ignorance, to give up our position at
the top of the food chain?

Matt,

Why do you say that "Our reign will end in a few decades" when, in fact, one of the most obvious features of this future is that humans would be able to *choose* what intelligence level to experience, on a day-to-day basis? Similarly, the AGIs would be able to choose to come down and experience human-level intelligence whenever they liked.

(There would be some restrictions: if you went up to superintelligence, you would not be able to keep the aggressive (etc.) motivations that we have... but you could have them back again as soon as you came back down to a less powerful level. The only subjective effect of this would be that you would feel relaxed and calm while up at the higher levels, not experiencing any urges to dominate, etc.)

There is no doubt whatsoever that this would be a major part of such a future, so how could anyone say that "we" would be gone, or that "we" would no longer be the most intelligent creatures on Earth? "We" and the AGIs would be at the same level, with the ONE difference that there would be a supervisory mechanism set up to ensure that, for the safety of everyone, no creature (AGI or human) would be allowed to spend time at the more powerful levels of intelligence with an aggressive motivation system operational.

Every one of your "humans = pets" and "humans = slaves" analogies is thus completely irrelevant.

There is no comparison whatsoever between the status of pets, slaves, or ignorant peasants in our society (or in any society that has ever existed in human history) and the situation that would exist in this future. Everything about the "inferior" status of pets, slaves, etc. would be inapplicable.

As a practical matter, I suspect that people would spend a lot of their time in a state in which they did NOT know everything, but that would just be a lifestyle choice. I am sure people would do many, many different things and explore many options, but the simple idea that they would be "slaves" or "pets" of the AGIs is just comical.

There is much more that could be said about this, but the basic point is unarguable: if the AGIs are assumed to start out as SAFAIs (which is the basic premise of this discussion), then we would have equal status with them. (More precisely, we would have the *option* of equal status: I am sure some people would choose not to take that option and just stay as they are.)

Richard Loosemore
