Dear Alan,

Thank you for making my point much more clearly than I managed. I'm a little wary of probabilities in situations like the one you describe, as it always seems hard to pin down exactly what is meant by them. At least with the symbolic approach, you can give a short paragraph saying what you mean.

Tomorrow I'll try to find a paper on the "p-modals" (possible, probable, etc.) and ways of combining them, and put a paragraph on the wiki.
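
In the meantime, a rough sketch (in Python) of one way such modalities are often combined: treat the labels as an ordered scale and take the weaker of the two as a cautious conjunction. The scale and the rule below are placeholders until I've found the paper, not a settled scheme:

# Rough sketch only: treat the p-modals as an ordered scale (weakest
# first) and combine two judgements cautiously by taking the weaker one.
SCALE = ["implausible", "doubtful", "possible", "probable"]

def combine(a, b):
    # Cautious conjunction: a claim supported through a chain of
    # judgements is only as strong as the weakest of them.
    return min(a, b, key=SCALE.index)

print(combine("probable", "doubtful"))  # -> doubtful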

Matt

Alan Ruttenberg wrote:
I'm personally fond of the symbolic approach - I think it is more direct and easier to explain what is meant. It's harder to align people to a numerical system, I would think, and a numerical system also gives a false sense of precision. Explanations are easier to understand as well: "2 sources thought this probable, and 1 thought it doubtful" can be grokked more easily than "score: 70%".

-Alan

On Feb 12, 2008, at 4:03 PM, Matt Williams wrote:


Just a quick note that the 'trust' we place in an agent /could/ be described probabilistically, but could also be described logically. I'm assuming that the probabilities in the trust annotations are likely to be subjective probabilities (as we're unlikely to have enough data to generate objective probabilities for the degree of trust).

If you ask people to annotate with probabilities, the next thing you might want to do is define a set of common probabilities (10% to 90%, in 10% increments, for example).
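
Concretely, that vocabulary is just a short fixed list; as an illustration only (the values being the 10% steps suggested above):

# Illustration only: the suggested fixed probability vocabulary.
COMMON_PROBABILITIES = [p / 100.0 for p in range(10, 100, 10)]
print(COMMON_PROBABILITIES)  # [0.1, 0.2, ..., 0.9]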

Alternatively, one could annotate a source, or agent, with a degree of belief chosen from some dictionary of options (probable, possible, doubtful, implausible, etc.).

Although there are some formal differences, the two approaches end up somewhere very similar. There is, of course, a great deal of work in the literature on managing conflicting annotations and levels of belief.
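
One way to see the similarity, as a sketch (the particular cut-points below are illustrative guesses, not anything agreed): map each label onto a band of subjective probability, so an annotation in either style can be read back in the other.

# Sketch only: assumed bands lining the dictionary labels up against
# subjective probabilities; the cut-points are illustrative guesses.
BANDS = [
    ("implausible", 0.0, 0.1),
    ("doubtful",    0.1, 0.4),
    ("possible",    0.4, 0.7),
    ("probable",    0.7, 1.0),
]

def label_for(p):
    # Read a numeric annotation back as a qualitative one.
    for label, lo, hi in BANDS:
        if lo <= p <= hi:
            return label
    raise ValueError("probability must lie in [0, 1]")

print(label_for(0.8))  # -> probable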

Matt

--
http://acl.icnet.uk/~mw
http://adhominem.blogsome.com/
+44 (0)7834 899570
