On 30.09.2015 at 23:38, Peter F. Patel-Schneider wrote:
> I would argue that inference-making bots should be considered only as a
> stop-gap measure, and that a different mechanism should be considered for
> making inferences in Wikidata.  I am not arguing for Inference done Just Right
> (tm).  It is not necessary to get inference perfect the first time around.
> All that is required is an inference mechanism that is examinable and maybe
> overridable.

To do that, you would have to bake the inference rules into the backend
software, out of community control, maintained by a small group of people.
That is contrary to the idea of letting the community define and maintain
the ontology and semantics.
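
For illustration, a rough sketch of what a rule baked into backend software
looks like (Python for brevity, though Wikibase itself is PHP; the rule and
all names here are hypothetical). To change such a rule, someone has to
deploy new code rather than edit a wiki page:

    # Hypothetical hard-coded inference rule: treat P131 ("located in
    # the administrative territorial entity") as transitive. The rule's
    # semantics live in code, so only developers can change them.
    def infer_located_in(entity_id, get_values):
        """Compute the transitive closure of P131 for entity_id.

        get_values(entity_id, property_id) -> list of target entity ids;
        a stand-in for a statement lookup against the backend.
        """
        inferred = set()
        frontier = [entity_id]
        while frontier:
            current = frontier.pop()
            for target in get_values(current, "P131"):
                if target not in inferred:
                    inferred.add(target)
                    frontier.append(target)
        return inferred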

We are actually experimenting with something in that direction: checking
constraints that the community has defined on-wiki, with the checking logic
itself hard-coded into backend software. It's conceivable that we might end
up doing something like that for inference, too, but inference is a lot
harder, and the slippery slope away from the community model seems much
steeper to me.
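
To make that split concrete, here is a loose Python sketch of the constraint
experiment (again, all names are hypothetical): which constraints apply to a
property is community-editable data, but how each constraint type is checked
is fixed in code.

    # Which constraints apply to which property is community-editable
    # data (declared on wiki pages); how each constraint type is
    # checked is fixed in code. All names below are hypothetical.
    CHECKERS = {}

    def checker(constraint_type):
        """Register the hard-coded check for one constraint type."""
        def register(fn):
            CHECKERS[constraint_type] = fn
            return fn
        return register

    @checker("single value")
    def check_single_value(values):
        # Hard-coded semantics: at most one statement for the property.
        return len(values) <= 1

    def check_entity(statements, constraint_definitions):
        """statements: {property_id: [values]} for one entity;
        constraint_definitions: {property_id: [constraint_type, ...]}
        as declared by the community on-wiki."""
        violations = []
        for prop, types in constraint_definitions.items():
            for ctype in types:
                check = CHECKERS.get(ctype)
                if check and not check(statements.get(prop, [])):
                    violations.append((prop, ctype))
        return violations

The point of the split: editing the data side stays in community hands, but
adding or changing a constraint type means a code change and a deployment.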

When I started to think about, and work on, wikidata/wikibase, I believed
doing inference on the server would be very useful. The longer I work on the
project, the more convinced I become that we have to be very careful with
this. Wikidata is a "social machine"; cutting the community out of the loop
is detrimental in the long run, even if it would make some processes more
efficient.


-- 
Daniel Kinzler
Senior Software Developer

Wikimedia Deutschland
Gesellschaft zur Förderung Freien Wissens e.V.
