On a broader brainstorming note, it would be nice to have a way of
specifying that a certain dc:Agent considers one annotation better
than another, with the user deciding to trust certain Agents to give
them useful knowledge, or, conversely, to distrust specific Agents
whose annotations they find unhelpful.

I am not sure we will see fine-grained trust metrics in the immediate future. Rather, I would expect 'trustworthiness' and 'annotation source' to be conflated, similar to how we currently judge the trustworthiness of a web page by the domain it resides on (nature.com > wikipedia.org > scientology.org), or an article by the journal it was published in (Nature Neuroscience > some sub-standard journal).

Each article will be annotated in multiple locations, with differing trustworthiness and accuracy. If Elsevier decided to allow users to submit structured digital abstracts, the annotations created by authors and residing on the journals' websites would probably be very trustworthy. If Nature Connotea let users tag scientific articles in the same form, the trustworthiness of these reader-created annotations would be somewhat lower. If an NLP group at the European Bioinformatics Institute or at Science Commons made text-mining results available, the trustworthiness would be lower still (because NLP techniques have lower accuracy). The number of trustworthy annotation sources will probably remain relatively manageable in the near future, and we may find that we do not actually need to represent trust metrics in a more fine-grained way (e.g. based on RDF representations of 'annotation agents'). This has the huge advantage that we do not need to worry about authentication mechanisms for such 'annotation agents'.
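
Just to make the contrast concrete, here is a minimal sketch (in Python, using rdflib) of what such a fine-grained representation could look like: an annotation pointing to the dc:Agent that created it, plus a per-agent trust value assigned by the user. The ex: namespace, the ex:annotates property, and the ex:trustScore property are purely hypothetical, invented for illustration; no existing vocabulary mandates anything like this.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF, XSD

# Hypothetical example namespace, only for this sketch.
EX = Namespace("http://example.org/annotation#")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("ex", EX)

article = URIRef("http://dx.doi.org/10.1000/example")
annotation = URIRef("http://example.org/annotations/123")
agent = URIRef("http://example.org/agents/connotea-readers")

# The annotation and the article it annotates.
g.add((annotation, RDF.type, EX.Annotation))
g.add((annotation, EX.annotates, article))

# Provenance: the agent (a dcterms:Agent) that produced the annotation.
g.add((annotation, DCTERMS.creator, agent))
g.add((agent, RDF.type, DCTERMS.Agent))

# A per-agent, user-assigned trust value; clients could weight or filter
# annotations by this score.
g.add((agent, EX.trustScore, Literal(0.7, datatype=XSD.float)))

print(g.serialize(format="turtle"))

My point above is that, with only a handful of well-known annotation sources, the trust value per agent may never need to be stated explicitly like this at all.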

Cheers,
Matthias Samwald
