Even now, with a relatively primitive system like the current
Novamente, it is not pragmatically possible to understand why the
system does each thing it does.

It is possible in principle, but even given the probabilistic-logic
semantics of the system's knowledge it's not practical, because
judgments are sometimes made by combining a very large number of weak
pieces of evidence, and evaluating all of them would take too much
time....
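
To make that concrete, here's a throwaway Python toy (nothing to do with
Novamente's actual code or its probabilistic logic machinery; every name and
number in it is invented) showing a belief formed from tens of thousands of
weak likelihood-ratio updates:

import math
import random

random.seed(0)

# 50,000 weak evidence items, each a likelihood ratio barely different from 1.0
weak_evidence = [random.uniform(0.95, 1.10) for _ in range(50_000)]

# the judgment is just the accumulated log-odds over all of them
log_odds = sum(math.log(lr) for lr in weak_evidence)
belief = 1.0 / (1.0 + math.exp(-log_odds))
print("final belief:", round(belief, 3))

# no single item shifts the log-odds by more than about 0.1, so "the reason"
# for the judgment is the whole 50,000-term sum; there is no short,
# human-auditable explanation to pull out of it
strongest = max(abs(math.log(lr)) for lr in weak_evidence)
print("largest single contribution (log-odds):", round(strongest, 4))

The point isn't the particular numbers, just that when the answer is a sum
over that many small terms, "why did you conclude that?" has no compact
answer.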

Sooo... understanding the reasons underlying a single decision is hard
even for this AI system, **with its heavy reliance on transparent
knowledge representations**.  And it's hard not because aspects of the
KR are opaque, but because even simple decisions may incorporate masses
of internally generated judgments...

-- Ben G

On 11/14/06, BillK <[EMAIL PROTECTED]> wrote:
On 11/14/06, James Ratcliff wrote:
> If the "contents of a knowledge base for AGI will be beyond our ability to
> comprehend"  then it is probably not human level AGI, it is something
> entirely new, and it will be alien and completely foreign and unable to
> interact with us at all, correct?
>   If you mean it will have more knowledge than we do, and do things somewhat
> differently, I agree on the point.
>   "You can't look inside the box because it's 10^9 bits."
> Size is not an acceptable barrier to looking inside.  Wiki is huge and will
> get infinitely huge, yet I can look inside it, and see that "poison ivy
> causes rashes" or whatnot.
> The AGI will have enormous complexity, I agree, but you should ALWAYS be
> able to look inside it.  Maybe not in the traditional sense of pages of code
> or a simple set of rules, but the AGI itself HAS to be able to generalize and
> tell what it is doing.
>   So something like "I see these leaves that look like this (supply picture);
> can I pick them up safely?" will generate a human-readable output that can
> itself be debugged. Or asking about the process of doing something will
> generate a possible plan that the AI would follow, and a human could say,
> "no, that's not right", and cause the AI to go back and reconsider with new
> possible information.
>   We can always look inside the 'logic' of what the AGI is doing, though we
> may not be able to change it directly ourselves very easily.
>
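
To pin down what James is proposing, here's a rough Python sketch of that
inspect-and-correct loop (a hypothetical interface; explain_plan, add_evidence
and the rest are made-up names, not any existing system's API):

def review_loop(agi, query, human_review, max_rounds=3):
    """Ask the AGI to explain its plan, let a human accept it or object."""
    for _ in range(max_rounds):
        plan = agi.explain_plan(query)           # e.g. "1. put on gloves  2. pick the leaves ..."
        verdict, objection = human_review(plan)  # ("ok", None) or ("no", "those leaves are poison ivy")
        if verdict == "ok":
            return plan
        agi.add_evidence(objection)              # the AGI reconsiders with the new information
    return None                                  # no plan the human would accept within the round limit

The human never edits the AGI's internals directly; all the correction goes in
as new evidence, which matches James's point that we may not be able to change
the internals ourselves.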


Doesn't that statement cease to apply as soon as the AGI starts
optimizing its own code?
If the AGI is redesigning itself, it will be changing before our eyes,
faster than we can inspect it.

You must be assuming a strictly controlled development system where
the AGI proposes a change, humans inspect it for a week, and then tell
the AGI to proceed with that change.
I suspect you will only be able to do that in the very early development stages.
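
Roughly, the only workable version of that looks like a gate of this sort
(a toy Python sketch; every name in it is hypothetical):

def gated_self_modification(agi, reviewer, codebase):
    # the AGI may only *propose* changes to its own code...
    patch = agi.propose_patch(codebase)
    rationale = agi.explain_patch(patch)          # human-readable reason for the change
    # ...and nothing is applied until a human signs off, which may take a week per patch
    if reviewer.approves(patch, rationale):
        codebase.apply(patch)
    else:
        agi.add_constraint(reviewer.objection())  # rejected: propose something different next time
    return codebase

It only works while patches arrive slowly enough for a human to actually read
them; once the AGI generates proposals faster than we can review them, the
gate itself becomes the bottleneck.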


BillK
