Artificial Stupidity wrote:

Who cares? Really, who does? You can't create an AGI that is friendly or unfriendly. It's like having a friendly or unfriendly baby. How do you prevent the next Hitler, the next Saddam, the next Osama, and so on and so forth? A friendly society is a good start. Evil doesn't evolve in the absence of evil, and good doesn't come from pure evil either. Unfortunately, we live in a world that has had evil and good since the very beginning of time, thus an AGI can choose to go bad or good, but we must realize that there will not be one AGI being; there will be many, and some will go good and some will go bad. If those that go bad are against humans and our ways, the ones that are "good" will fight for us and be on our side. So a future of man vs. machine is just not going to happen. The closest thing that will happen will be Machines vs. (Man + Machines). That's it. With that said, back to work!

This is just wrong: it is based on a complete misunderstanding (not to say distortion) of what AGI actually involves.

You are not talking about AGI at all; you are talking about a straw-man version of that idea, with no connection to reality.

Good and evil exist precisely because of the mechanisms lurking under the surface of the human brain: the lower regions contain primitive mechanisms that *cause* angry reactions in the thinking part of the brain, even when that thinking part would prefer not to be angry. If you dispute this, explain why.

I would argue (though I concede it is hard to prove conclusively at the moment, given the paucity of data) that in those cases where human beings have lost those aggressive mechanisms, they do not think less or become more stupid: they simply become non-aggressive.

Evil is caused by these mechanisms, which evolution put there as a way to make individuals and species compete with one another. There is no reason to build them into an AGI, and there are plenty of reasons not to.

If you disagree with this, that is fine (debate is welcome), but you have to know the details of these arguments in order to make sensible statements about them.

The fact is that a *proper* AGI design would be constructed in such a way as to ensure that it was a million times less capable of evil than even the most peaceful, benign, saint-like human being who has ever existed.

Such a machine could easily be built to be friendly. It could *never*, in a billion billion years, just go out and "choose to go bad or good", any more than the Sun could suddenly change into a pink and green cube.

Do you care so much about being right, and about hating ideas you disagree with, that you would fight against something that was genuinely Good, all the while believing (wrongly) that it was Evil? Just how much does the truth matter here, in a debate this important?

Do you think this is an issue that should be discussed, or is your personal goal only to state your opinion and walk away? Discussion means engaging with the technical details; anything less is meaningless.



Richard Loosemore
