On 03/06/2008 12:55 PM, Mark Waser wrote:
>> Mark, how do you intend to handle the friendliness obligations of the
>> AI towards vastly different levels of intelligence (above the
>> threshold, of course)?
> Ah. An excellent opportunity for continuation of my previous post
> rebutting my personal conversion to computronium . . . .
>
> First off, intelligence, as the word is commonly used, should be
> regarded as a subset of the attributes promoting successful
> goal-seeking. Back in the pre-caveman days, physical capabilities
> were generally more effective as goal-seeking attributes. These days,
> social skills are often arguably equal to or more effective than
> intelligence as goal-seeking attributes. How do you feel about how we
> should handle the friendliness obligations towards vastly different
> levels of social skill?
>
> My point here is that you have implicitly identified intelligence as
> a "better" or "best" attribute. I am not willing to agree with that
> without further convincing. As far as I can tell, someone with a
> sufficiently large number of hard-coded advanced social-skill
> reflexes (to forestall the argument that social skills are
> intelligence) will run rings around your average human egghead in
> terms of getting what they want. What are that person's obligations
> towards you? Assuming that you are smarter, should their adeptness at
> getting what they want translate to reduced, similar, or greater
> obligations to you? Do their obligations change more with variances
> in their social adeptness or in your intelligence?
>
> Or, what about the more obvious question of the 6'7", 300-pound guy
> on a deserted tropical island with a wimpy (or even crippled)
> brainiac? What are their relative friendliness obligations?
>
> I would also argue that the "threshold" can't be measured solely in
> terms of intelligence (unless you're going to define intelligence
> solely as goal-seeking ability, of course).
I agree that more than intelligence should be required for consideration
(and you've made clear in other posts that an entity must be Friendly to
be deserving), but intelligence is certainly one of the more important
qualities. What I was trying to ask, though, was about friendliness
obligations between species with vastly different levels of intelligence
-- say, alien intelligences that are to us as we are to dogs -- in the
case where the two species have mutually exclusive goals, each of which
has costs (perhaps disastrous) to the other.
And more generally, how is this all to be quantified? Does your paper go
into the math?
joseph