I don't think a betting situation would be different in this case. To
really avoid intensional considerations, the experiment instructions
would have to be so explicit that the human subjects know exactly what
they are being asked to do.

I believe that, in the betting versions of the Linda-fallacy experiment,
the instructions have been extremely explicit and subjects have
understood what they were asked to do.

Even in that case, they will have difficulty
following such an unnatural process.

This would appear to be the case, yes.

Furthermore, as far as AGI is
concerned, it is simply a bad idea to define concepts by extension
only.

I agree that inside the mind of an AGI, concepts should be defined
via both extension (members) and intension (properties).

However, keeping track of both members and properties does not
intrinsically imply making wrong bets in gambling scenarios (or, generally,
making wrong probabilistic judgments).

Rather, when a betting scenario is encountered, a combination of
intensional and extensional knowledge may be used by an intelligent
agent to estimate the odds.
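To make the extensional side of this concrete, here is a minimal sketch of why the conjunctive bet in the Linda scenario is always the worse one. The probability values are hypothetical, purely for illustration; the point is the product rule, under which a conjunction can never be more probable than either conjunct:

```python
# Hypothetical subjective probabilities (illustrative only):
p_teller = 0.05                 # P(Linda is a bank teller)
p_feminist_given_teller = 0.80  # P(Linda is a feminist | bank teller)

# Product rule: P(teller AND feminist) = P(teller) * P(feminist | teller).
# Since the conditional probability is at most 1, the conjunction
# can never exceed P(teller) alone -- so betting on the conjunction
# at equal odds is never the better bet.
p_conjunction = p_teller * p_feminist_given_teller

assert p_conjunction <= p_teller
```

An intensional reasoner might judge the conjunction more "representative" of Linda, but any agent that also tracks extensions should recognize that the set of feminist bank tellers is a subset of the set of bank tellers, and bet accordingly.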

I do agree that betting scenarios are odd from an everyday-human-experience
perspective, and not what our internal reasoning systems are tuned for.

-- Ben

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303