Mike Tintner wrote:
Richard:> Suppose, further, that the only AGI systems that really do work are ones in which the symbols never use "truth values" but use other stuff (for which there is no interpretation), and that the thing we call a "truth value" is actually the result of an operator that can be applied to a bunch of connected symbols. This [truth-value = external operator] idea is fundamentally different from the [truth-value = internal parameter] idea, obviously.
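The contrast might be sketched roughly like this (a minimal toy in Python; every class, function, and threshold here is my own illustrative invention, not a description of Richard's actual design):

```python
# Hypothetical sketch of the two ideas; all names and numbers are illustrative.

# [truth-value = internal parameter]: each symbol stores its own truth value.
class ParamSymbol:
    def __init__(self, name, truth):
        self.name = name
        self.truth = truth  # truth lives inside the symbol itself

# [truth-value = external operator]: symbols carry only uninterpreted state;
# "truth" is computed by an operator applied to a cluster of connected symbols.
class RawSymbol:
    def __init__(self, name, state, links=()):
        self.name = name
        self.state = state        # "other stuff" with no fixed interpretation
        self.links = list(links)  # connections to other symbols

def truth_operator(cluster, threshold=1.5):
    """Toy operator: 'truth' emerges from the aggregate state of a cluster."""
    return sum(s.state for s in cluster) > threshold

a = RawSymbol("a", 0.9)
b = RawSymbol("b", 0.8, links=[a])
print(truth_operator([a, b]))  # True, since 0.9 + 0.8 > 1.5
```

The point of the toy is only that in the second scheme no individual symbol "has" a truth value; the value appears only when the operator is applied from outside.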
I almost added to my last post that another reason the brain never seizes up is that its concepts (and indeed its entire representational operations) are open-ended trees, relatively ill-defined and ill-structured, and therefore endlessly open to reinterpretation. Super-general concepts like "go away," "come here," "put this over there," or indeed "is that true?" enable it to be flexible and creatively adaptive, especially when it gets stuck, by finding other ways, for example, to "go," "come," "put," or deem something "true."
Is this something like what you are on about?
Well, I agree that a true AGI will need this kind of flexibility.
That wasn't the issue I was addressing in the quote above, but it is
true in its own right that you need this. It is easy to get this
flexibility in an AGI: it is just that AGI developers tend not to make
it a priority, for a variety of reasons.
Richard Loosemore
-----
This list is sponsored by AGIRI: http://www.agiri.org/email