Richard:> Suppose, further, that the only AGI systems that really do work are ones
in which the symbols never use "truth values" but use other internal state (for which there is no fixed interpretation), and that the thing we call a "truth value" is actually the result of an operator applied to a bunch of connected symbols. This [truth-value = external operator] idea is fundamentally different from the [truth-value = internal parameter] idea, obviously.
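To make the contrast concrete, here is a minimal sketch of the two designs. Everything in it (the class names, the coherence measure used as the operator) is my own illustrative assumption, not a description of any actual system:

```python
from dataclasses import dataclass, field

# Design A: truth stored as an internal parameter of each symbol.
@dataclass
class SymbolA:
    name: str
    truth: float  # internal parameter with a fixed interpretation

# Design B: symbols carry uninterpreted state; "truth" only exists as
# the output of an external operator over a connected cluster of symbols.
@dataclass
class SymbolB:
    name: str
    state: float                      # uninterpreted internal "stuff"
    links: list = field(default_factory=list)

def truth_operator(cluster):
    """External operator: derives a 'truth value' from the joint state of
    a connected set of symbols (here, a toy coherence measure)."""
    if not cluster:
        return 0.0
    mean = sum(s.state for s in cluster) / len(cluster)
    # agreement of each symbol's state with the cluster mean
    return 1.0 - sum(abs(s.state - mean) for s in cluster) / len(cluster)

a = SymbolB("cat-on-mat", 0.9)
b = SymbolB("mat-exists", 0.8, links=[a])
print(truth_operator([a, b]))
```

The point of the sketch is only that in Design B no individual symbol "has" a truth value; the value appears when the operator is applied from outside, so the interpretation lives in the operator, not in the symbols.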

I almost added to my last post that another reason the brain never seizes up is that its concepts (and its entire repertoire of representational operations) are open-ended trees, relatively ill-defined and ill-structured, and therefore endlessly open to reinterpretation. Supergeneral concepts like "go away", "come here", "put this over there", or indeed "is that true?" let it stay flexible and creatively adaptive, especially when it gets stuck, by finding other ways, for example, to "go", "come", "put", or deem something "true".
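One way to picture that open-endedness, purely as a hypothetical sketch (the names and fall-through strategy are my assumptions, not a claim about how the brain or any AGI does it):

```python
# A concept as an open-ended, extensible list of candidate
# interpretations, tried in turn when the current one gets stuck.
class Concept:
    def __init__(self, name, interpretations):
        self.name = name
        self.interpretations = interpretations  # ordered, can grow later

    def apply(self, situation):
        # Fall through to the next reading whenever one fails, so the
        # system never seizes up on a single fixed interpretation.
        for interp in self.interpretations:
            result = interp(situation)
            if result is not None:
                return result
        return None  # still open: a new interpretation can be appended

go = Concept("go", [
    lambda s: "walk" if s.get("path_clear") else None,
    lambda s: "climb" if s.get("obstacle") == "wall" else None,
    lambda s: "ask for help",  # last-resort reinterpretation
])

print(go.apply({"path_clear": False, "obstacle": "wall"}))
```

Here "go" has no single fixed meaning; blocking one reading just pushes the concept into another, which is the flexibility the paragraph above is pointing at.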

Is this something like what you are on about?

-----
This list is sponsored by AGIRI: http://www.agiri.org/email