I'm confused by this from today's Wikidata weekly summary:

   - New request for comments: Semi-automatic Addition of References to
   Wikidata Statements - feedback on the Primary Sources Tool
   
<https://www.wikidata.org/wiki/Wikidata:Requests_for_comment/Semi-automatic_Addition_of_References_to_Wikidata_Statements>

First of all, the title makes no sense because "semi-automatic addition of
references to Wikidata statements" is one of the main things the tool
can't currently do. If a statement already exists, approving a suggestion
almost always creates a duplicate statement, rather than the desired
behavior of simply adding the reference to the existing statement.
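
(To be concrete about what I mean by the desired behavior, here's a rough
sketch in Python against the standard Wikibase API. The item and property
IDs are just placeholders, and the actual write step would need an
authenticated session and edit token, which I've left out: check whether
an equivalent statement already exists, and if so attach the reference to
it instead of creating a new claim.)

import requests

API = "https://www.wikidata.org/w/api.php"

def find_matching_claim(item_id, prop_id, target_item_id):
    """Return the GUID of an existing claim on item_id whose main snak
    points at target_item_id, or None if no such claim exists."""
    resp = requests.get(API, params={
        "action": "wbgetclaims",
        "entity": item_id,
        "property": prop_id,
        "format": "json",
    }).json()
    for claim in resp.get("claims", {}).get(prop_id, []):
        snak = claim.get("mainsnak", {})
        if snak.get("snaktype") != "value":
            continue
        value = snak.get("datavalue", {}).get("value", {})
        if isinstance(value, dict) and value.get("id") == target_item_id:
            return claim["id"]
    return None

# Placeholder example: a suggested statement <Q42, P69, Q691283>.
guid = find_matching_claim("Q42", "P69", "Q691283")

if guid is not None:
    # Desired behavior: add the source to the statement that's already there
    # (action=wbsetreference, which needs a POST with an edit token).
    print("attach reference to existing statement", guid)
else:
    # Only create a new statement when nothing equivalent exists
    # (action=wbcreateclaim, again with an edit token).
    print("create a new statement, then add the reference")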

Second, I'm not sure who "Hjfocs" is (why does everyone have to make up
fake wikinames?), but why are they asking for more feedback when there's
been *ample* feedback already? There hasn't been an issue with getting
people to test the tool or provide feedback based on the testing. The issue
has been getting anyone to *act* on the feedback. Every suggestion is
either a) "too hard," b) "beyond our resources," dependent on something in
category a or b, incompatible with the arbitrary implementation scheme
that was chosen, or met with some other excuse.

We're 12-18+ months into the project, depending on how you measure, and not
only is the tool not usable yet, but it's no longer improving, so I think
it's time to take a step back and ask some fundamental questions.

- Are the current data pipeline and front-end gadget the right approach and
the right technology for this task? Can they be fixed to be suitable for
users?
- If so, should Google continue to have sole responsibility for it or
should it be transferred to the Wikidata team or someone else who'll
actually work on it?
- If not, what should the data pipeline and tooling look like to make
maximum use of the Freebase data?

The whole project needs a reboot.

Tom