On 02/19/2013 11:03 AM, Piaget Modeler wrote:
I'm sure this topic has been discussed before. Sorry for rehashing it
if so. I have a specific question I'd like to answer.
In designing a cognitive system, someone made a criticism that utterly
confounded me. And got me thinking.
The system receives sensory data sets from the world and transforms
them into percept propositions which it asserts to
its memory. Each percept proposition is activated when it is
asserted. Inferences are made from these percepts.
These initial percepts and their inferences are called "Observables".
All observables can be activated; activation is the only
status the system tracks.
Next, the system can predict that these observables will recur at some
point, but a prediction refers only to the re-activation of
observables.
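The design above could be sketched roughly like this (a minimal illustration only; all names here — Observable, Memory, assert_percept, predict_recurrence — are my own assumptions, not the actual system's identifiers):

```python
# Illustrative sketch of the percept/observable design described above.
# Activation is the only status an observable carries; there is no
# truth value anywhere in this model.

class Observable:
    """A percept proposition, or an inference made from one."""
    def __init__(self, proposition):
        self.proposition = proposition
        self.active = False


class Memory:
    def __init__(self):
        self.observables = {}

    def assert_percept(self, proposition):
        """Assert a percept into memory; asserting activates it."""
        obs = self.observables.setdefault(proposition,
                                          Observable(proposition))
        obs.active = True
        return obs

    def predict_recurrence(self, proposition):
        """A prediction refers only to the re-activation of a known
        observable, not to whether it is 'true'."""
        return proposition in self.observables
```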
Then someone asked: where is the notion of TRUTH in your system? I
was flabbergasted. Speechless. Then I asked, well, what is truth? I
checked Wikipedia (
http://en.wikipedia.org/wiki/Truth )
It turns out that when someone says something is true, it can mean any
of several things:
a) It means that the statement is logically consistent (validity),
b) that the statement corresponds, concurs, or conforms to reality
(verity),
c) that one is sure of the statement (certainty / confidence),
d) that the statement is likely to occur rather than unlikely
(likelihood), and
e) that we agree with the statement (agreement).
So my questions are:
(1) Is truth necessary or important to a cognitive system?
(2) Which notion of truth should a cognitive system model?
(3) How do we ascribe truth (values) to sensory input or inferences
derived from sensory input?
Your thoughts?
~PM.
------------------------------------------------------------------------------------------------------------------------------------------------
*AGI* | Archives <https://www.listbox.com/member/archive/303/=now> |
RSS <https://www.listbox.com/member/archive/rss/303/232072-58998042> |
Modify Your Subscription <https://www.listbox.com/member/?&> |
Powered by Listbox <http://www.listbox.com>
Truth is an illusion. It is the belief that what you believe to be most
likely is, in fact, inevitable.
An AI doesn't need the concept of truth...except to communicate with
people. Internally it can operate off of graded degrees of probability,
cost, benefit, etc. When communicating with people it needs to condense
that: when something is sufficiently probable, the benefit of asserting
it is sufficiently large, and the cost of being wrong is sufficiently
small, it synopsizes all of this by proclaiming
"truth". It's my belief that people operate in the same
way, though this is disguised because different people use different
constraints on things like "What is probable enough?". Also note that
the cost and the benefit are figured on the basis of the cost/benefit to
the entity proclaiming a truth rather than on those accepting it.
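That condensation rule could be sketched something like this (the threshold value and the expected-value weighting are my own illustrative assumptions, not anything specified above):

```python
# Sketch: condensing a graded internal belief into a binary "truth"
# claim. The agent proclaims "true" only when the belief is strong
# enough AND the expected value of asserting it is positive.

def proclaim_truth(probability, benefit_if_right, cost_if_wrong,
                   min_probability=0.95):
    """Return True if the agent should assert the claim as 'true'.

    Note that benefit and cost are figured from the speaker's own
    perspective, as described above.
    """
    if probability < min_probability:
        return False
    expected_value = (probability * benefit_if_right
                      - (1 - probability) * cost_if_wrong)
    return expected_value > 0

# Two speakers with the same belief differ only in their constraints:
cautious = proclaim_truth(0.96, benefit_if_right=1.0, cost_if_wrong=50.0)
# -> False: being wrong is too costly for the cautious speaker
bold = proclaim_truth(0.96, benefit_if_right=1.0, cost_if_wrong=1.0)
# -> True: same belief, but the bold speaker's constraints are looser
```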
So perhaps we would want a sufficiently capable AI to avoid talking
about truth, and instead talk about what the probabilities are, and what
costs and benefits can be expected. It's a bit harder to understand,
but it strikes me as much safer.
--
Charles Hixson