On 8/29/08, David Hart <[EMAIL PROTECTED]> wrote:
>
>
> The best we can hope for is that we participate in the construction and
> guidance of future AGIs such that they are able to, eventually, invent,
> perform and carefully guide RSI (and, of course, do so safely every single
> step of the way without exception).
>

I'm surprised that no one jumped on this statement, because it raises the
question 'what is the granularity of a step?' (an action)

The lower limit for the granularity of an action could conceivably be a
single instruction in a quantum molecular assembly language, while the upper
limit could be 'throwing the switch' on an AGI that is known to contain
modifications outside of safety parameters.

If I grok Ben's PreservationOfGoals paper, one implication is that it's
desirable to figure out how to determine the maximum safe limit for the size
(granularity) of all actions, such that no single action is likely to break
maintenance of the system's goals (where, presumably,
friendliness/helpfulness is one of potentially many goals under
maintenance). An AGI working within such a safety framework would experience
self-imposed constraints on its actions, to the degree that many of the
god-like AGI powers imagined in popular fiction may be provably
unconscionable.
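
To make the idea concrete, here's a minimal sketch (Python, purely
illustrative) of what such a self-imposed 'action gate' might look like.
Every name and number in it -- Action, estimate_goal_drift, MAX_SAFE_DRIFT --
is a hypothetical stand-in of my own, not anything from Ben's paper:

    from dataclasses import dataclass

    # Assumed tolerance: the maximum acceptable expected goal drift
    # that any single action may introduce. Picking this number is
    # exactly the "maximum safe limit" problem discussed above.
    MAX_SAFE_DRIFT = 0.01

    @dataclass
    class Action:
        description: str
        size: float  # granularity of the action, in some agreed-upon units

    def estimate_goal_drift(action: Action) -> float:
        # Placeholder: estimate how much this action could perturb the
        # system's maintained goals. A real system would need a
        # principled model; here we just assume drift grows with size.
        return 0.001 * action.size

    def gate(action: Action) -> bool:
        # Permit an action only if its estimated goal drift stays
        # within the maximum safe limit; otherwise reject it.
        return estimate_goal_drift(action) <= MAX_SAFE_DRIFT

    if __name__ == "__main__":
        small = Action("tweak one parameter", size=1.0)
        large = Action("rewrite goal-system module", size=500.0)
        print(gate(small))  # True: fine-grained step, drift within tolerance
        print(gate(large))  # False: too coarse a step; the constraint blocks it

The hard part, of course, is the drift estimator, not the gate; the gate is
trivial once you have a trustworthy estimate.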

-dave


