RE: Trust in statements (was BioRDF Brainstorming)

2008-02-14 Thread Colin Batchelor
Matt Williams writes:
> I'll try and find a paper on the "p-modals" (possible, probable, etc.)
> and ways of combining them tomorrow and put a paragraph on the wiki.

The SEP has something here: http://plato.stanford.edu/entries/logic-modal/

Colin.
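For concreteness, a minimal sketch of one way such p-modals could be combined, assuming a simple ordinal scale and a weakest-link rule for chained inference. The scale, the combination rules, and all names below are illustrative assumptions, not a proposal from this thread or from the SEP entry.

    # Sketch: combining qualitative "p-modal" qualifiers on an assumed ordinal scale.
    from enum import IntEnum

    class Modal(IntEnum):
        """Epistemic qualifiers, weakest to strongest (assumed ordering)."""
        DOUBTFUL = 1
        POSSIBLE = 2
        PROBABLE = 3
        CERTAIN = 4

    def combine_chain(*quals: Modal) -> Modal:
        """Chained inference: a conclusion is no better supported than the
        weakest statement it depends on."""
        return min(quals)

    def combine_independent(*quals: Modal) -> Modal:
        """Independent sources asserting the same statement: take the strongest
        qualifier on offer (an optimistic convention, chosen for illustration)."""
        return max(quals)

    # A statement inferred from a 'certain' premise and a 'possible' premise
    # is itself only 'possible'.
    print(combine_chain(Modal.CERTAIN, Modal.POSSIBLE))        # Modal.POSSIBLE
    print(combine_independent(Modal.PROBABLE, Modal.POSSIBLE)) # Modal.PROBABLE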

Re: Trust in statements (was BioRDF Brainstorming)

2008-02-13 Thread Matt Williams
Dear Alan,

Thank you for making my point much more clearly than I managed. I'm a little wary of probabilities in situations like the one you describe, as it always seems a little hard to pin down what is meant by them. At least with the symbolic approach, you can give a short paragraph saying

Re: Trust in statements (was BioRDF Brainstorming)

2008-02-12 Thread Alan Ruttenberg
I'm personally fond of the symbolic approach - I think it is more direct and easier to explain what is meant. It's harder to align people to a numerical system, I would think, and such a system also gives a false sense of precision. Explanations are easier to understand as well: "2 sources thought t
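As a rough illustration of the kind of source-counting explanation gestured at above, a minimal sketch assuming a plain statement-to-sources map; the statement and source names are made up.

    # Sketch: tally which sources assert a statement and explain support by counting them.
    from collections import defaultdict

    assertions = defaultdict(set)   # statement -> set of sources asserting it

    def assert_statement(statement: str, source: str) -> None:
        assertions[statement].add(source)

    def explain(statement: str) -> str:
        sources = assertions.get(statement, set())
        if not sources:
            return f"No source asserts: {statement}"
        return (f"{len(sources)} source(s) thought '{statement}' was true: "
                + ", ".join(sorted(sources)))

    assert_statement("gene X is associated with disease Y", "PubMed:123")
    assert_statement("gene X is associated with disease Y", "CuratorA")
    print(explain("gene X is associated with disease Y"))
    # -> 2 source(s) thought 'gene X is associated with disease Y' was true: CuratorA, PubMed:123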

Re: Trust in statements (was BioRDF Brainstorming)

2008-02-12 Thread Peter Ansell
On 13/02/2008, Matt Williams <[EMAIL PROTECTED]> wrote:
> Just a quick note that the 'trust' we place in an agent /could/ be
> described probabilistically, but could also be described logically. I'm
> assuming that the probabilities in the trust annotations are likely to
> be subjective probabiliti

Re: Trust in statements (was BioRDF Brainstorming)

2008-02-12 Thread Adrian Walker
Hi Matt --

Another way of increasing the amount of trust is to provide explanations, in English, automatically derived from the proofs that an agent carries out. A serendipitous feature is that the explanations start out with headlines, and then go progressively into finer details. This aspect i
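A minimal sketch of the general idea, assuming a simple proof tree rendered as an indented English explanation, headline first and finer detail below; the tree structure and wording are illustrative, not the actual system described above.

    # Sketch: render a proof tree as an English explanation with progressive detail.
    from dataclasses import dataclass, field

    @dataclass
    class ProofNode:
        conclusion: str
        premises: list = field(default_factory=list)  # child ProofNodes

    def explain(node: ProofNode, depth: int = 0) -> str:
        indent = "  " * depth
        lines = [f"{indent}{node.conclusion}"]
        if node.premises:
            lines.append(f"{indent}because:")
            for p in node.premises:
                lines.append(explain(p, depth + 1))
        return "\n".join(lines)

    proof = ProofNode(
        "Statement S is trusted",
        [ProofNode("Source A asserts S",
                   [ProofNode("Source A is a curated database")]),
         ProofNode("Source B asserts S")])

    print(explain(proof))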

Trust in statements (was BioRDF Brainstorming)

2008-02-12 Thread Matt Williams
Just a quick note that the 'trust' we place in an agent /could/ be described probabilistically, but could also be described logically. I'm assuming that the probabilities in the trust annotations are likely to be subjective probabilities (as we're unlikely to have enough data to generate objective probabilities).
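As a rough sketch of the two options being contrasted, assuming a simple annotation record that carries either a numeric degree of belief or a qualitative qualifier; the class and field names are illustrative only.

    # Sketch: a trust annotation as either a subjective probability or a symbolic qualifier.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TrustAnnotation:
        statement: str                        # the statement being annotated
        agent: str                            # who supplied the annotation
        probability: Optional[float] = None   # subjective degree of belief, 0..1
        qualifier: Optional[str] = None       # e.g. "possible", "probable"

    # Probabilistic reading: a subjective degree of belief, since we rarely
    # have enough data for objective (frequency-based) probabilities.
    a1 = TrustAnnotation("gene X is associated with disease Y",
                         agent="CuratorA", probability=0.7)

    # Logical/symbolic reading: a qualitative qualifier instead of a number.
    a2 = TrustAnnotation("gene X is associated with disease Y",
                         agent="CuratorA", qualifier="probable")

    print(a1, a2, sep="\n")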