On 02/07/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
> > Are you suggesting that the AI won't be smart enough
> > to understand what people mean when they ask for a banana?
>
> It's not a question of intelligence; it's a question of selecting a
> human-friendly target in a huge space of possibilities. Why should
> the AGI care what you "meant"? You asked for a banana, and you got
> a banana, so what's the problem?
As proof of concept, we have humans who understand what other humans mean when they say things. This involves knowledge of language and an intuitive understanding of human psychology, which let you pick out a few plausible meanings from the vast space of possible ones. You and I do this very easily, many times a day. Why should it be more difficult for an AI, especially a superintelligent AGI, to achieve the same level of understanding?
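To make the point concrete, here is a minimal sketch in Python of the kind of disambiguation we are both talking about; the candidate readings and their weights are invented purely for illustration. Given a human-like prior over what people tend to mean, picking the intended reading is just a weighted choice; the real dispute is over where that prior comes from and whether the AGI is motivated to use it.

    # Toy intent disambiguation (illustrative only). Candidate readings
    # of a request are ranked by a prior over what humans typically mean;
    # all entries and probabilities below are made up.
    candidates = {
        "hand over one edible banana": 0.96,
        "say the word 'banana'": 0.02,
        "paint a picture of a banana": 0.02,
        "tile the solar system with banana plantations": 1e-12,
    }

    def interpret(prior):
        # With a human-like prior the intended reading dominates;
        # with a uniform prior, every reading is equally "correct".
        return max(prior, key=prior.get)

    print(interpret(candidates))  # -> "hand over one edible banana"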
> What? Why would an AGI whose goals are chosen completely at random
> even bother with self-replication? Self-replication is a rather
> unlikely thing for an AGI to do; the series of actions required to
> self-replicate is complex enough to make it very unlikely that an
> AGI will perform them just by sheer chance.
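(As a back-of-the-envelope illustration of the quoted point, with numbers invented purely for the sake of argument: if self-replication requires a sequence of, say, 20 specific actions and a randomly-motivated agent picks each action from around 100 alternatives, the chance of it stumbling through the whole sequence is about 10^-40.)

    # Invented figures, for illustration only: the probability that an
    # agent acting at random performs one specific 20-step sequence,
    # choosing each step from 100 possible actions.
    p_sequence = (1 / 100) ** 20
    print(p_sequence)  # 1e-40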
I agree, but those who think AIs will evolve to destroy us consider the possibility that a rapacious, malevolent AI will somehow arise (whether by accident or design) and have a competitive advantage over the tame ones.

--
Stathis Papaioannou