Kaj Sotala wrote:
Over the past week, Tom McCabe and I have been collecting
all sorts of objections that have been raised against the concepts of
AGI, the Singularity, Friendliness, and anything else relating to
SIAI's work. We've managed to gather a fair number of them, so the
next step seemed to be to publicly ask people for any objections we
may have missed.

...
Well, it could be that in any environment there is an optimal level of intelligence, and that possessing more doesn't yield dramatically improved results but does incur higher costs. This presumes, of course, that intelligence is a unitary kind of thing, which I doubt; a more sophisticated argument along the same lines could instead posit an optimum in each dimension of intelligence.

This argument *could* even be correct. It is, however, worth noting that an AI would live in a drastically different environment than a human does, so its benefits and costs can be expected to be quite different. That doesn't invalidate the argument: if it holds, some sort of bound still exists, it just need not sit anywhere near the human one. A toy model below makes the shape of this concrete.
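
To illustrate, here is a quick sketch in Python. Every functional form and number in it is my own assumption, chosen purely for illustration: benefit from intelligence saturates in a fixed environment while cost keeps growing, so net utility peaks at some finite level, and changing the environment's parameters moves that peak without removing it.

import math

# Toy model of the "optimal intelligence level" argument. All functional
# forms and parameters are illustrative assumptions, not measured values.

def benefit(i, saturation):
    # Diminishing returns: extra intelligence helps less and less once
    # the environment's problems are mostly solved (assumed form).
    return saturation * (1 - math.exp(-i / saturation))

def cost(i, unit_cost):
    # Superlinear cost of building and running intelligence (assumed form).
    return unit_cost * i ** 1.5

def optimal_intelligence(saturation, unit_cost, levels=range(1, 200)):
    # Brute-force the intelligence level maximizing net utility
    # (benefit minus cost) over a grid of candidate levels.
    return max(levels, key=lambda i: benefit(i, saturation) - cost(i, unit_cost))

# A "human-like" environment: payoffs saturate early, costs are high.
print(optimal_intelligence(saturation=20, unit_cost=0.5))    # -> 2
# An "AI-like" environment: richer payoffs, cheaper computation.
print(optimal_intelligence(saturation=200, unit_cost=0.05))  # -> 80

Under these made-up numbers the optimum comes out at 2 in the first environment and 80 in the second: a bound exists in both cases, it just sits in a very different place.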

