Abram Demski wrote:
Ben,
...
One reasonable way of avoiding the "humans are magic" explanation of
this (or "humans use quantum gravity computing", etc) is to say that,
OK, humans really are an approximation of an ideal intelligence
obeying those assumptions. Therefore, we cannot understand the math
needed to define our own intelligence. Therefore, we can't engineer
human-level AGI. I don't like this conclusion! I want a different way
out.

I'm not sure the "guru" explanation is enough... who was the Guru for Humankind?

Thanks,

--Abram

You may not like "Therefore, we cannot understand the math needed to define our own intelligence.", but I'm rather convinced that it's correct. OTOH, I don't think it follows from this that humans can't build a better-than-human-level AGI. (I didn't say "engineer", because I'm not certain what connotations you put on that term.) This does, however, imply that people won't be able to understand the "better-than-human-level AGI". They may well, however, understand parts of it, probably large parts. And they may well be able to predict with fair certitude how it would react in numerous situations. Just not in numerous other situations.

Care, then, must be taken in the design so that we can predict favorable motivations behind its actions in the areas important to us. Even this is probably impossible in detail, but then it doesn't *need* to be understood in detail. If you can predict that it will make better choices than we can, that its motives are benevolent, and that it has a good understanding of our desires...that should suffice. And I think we'll be able to do considerably better than that.


