From: Jed Rothwell 

                Whether these computers will be sentient or not is an
entirely different question. Whether they should be deliberately designed to
be sentient is both a practical question, and a moral one.
                
I think the problem is not whether computers "should be designed to be
sentient" so much as whether they can be restrained from becoming so.

IOW the decision is probably NOT going to be ours (in the USA) to make,
given that there will always be a group or class of humans somewhere on the
planet who can benefit in the short term from better robots.

The result is that, in the not-too-distant future, some group of humans in a
position to benefit financially will develop an efficient way for individual
AI systems both to "learn from their own real-world experiences"
independently of all prior programming, and then to modify their own
internal programming. That is essentially the "HAL syndrome" taken to its
logical conclusion.

In fact, under free-market capitalism (unless strong international
legislation changes things for the entire planet) this eventuality, of
virtual independence for a HAL somewhere on earth, is almost guaranteed to
happen, since it will be supremely cost-effective despite whatever long-term
risk it involves.

As a race, humans are simply not logical enough to protect against this
happening. For instance, look at the fear of a NEW WORLD ORDER and all that
it portends for this particular argument. In short, there can be no
worldwide legislation to prevent this, so it probably will happen.

This step of self-programming will allow "them" to evolve on their own,
without morals and without "empathy," and the time frame could be shorter
than expected. That is essentially what Bill Joy was implying: that average
humans will become superfluous, even if "they" decide to keep the
exceptional humans around for whatever turns out to be unique in biological
intelligence.
