Hi folks,

One item that comes up pretty frequently in our regular conversations
with the Wikidata folks is the question of how change propagation
should work.  This email is largely directed at the relevant folks in
WMF's Ops and Platform Eng groups (and obviously, also the Wikidata
team), but I'm erring on the side of distributing too widely rather
than too narrowly.  I originally asked Daniel to send this (earlier
today my time, which was late in his day), but decided that even
though I won't be as good at describing the technical details (and
I'm hoping he chimes in), I know far better what I was asking for, so
I should just write it myself.

The spec is here:
https://meta.wikimedia.org/wiki/Wikidata/Notes/Change_propagation#Dispatching_Changes

The thing that isn't covered there is how it works today, which I'll
try to quickly sum up.  Basically, it's a single cron job, running on
hume[1].  So, that means that when a change is made on wikidata.org,
one has to wait for this job to get around to running before the
change shows up on the client wikis.  It'd be good for someone from
the Wikidata team to correct me if I've gotten any of this wrong.
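
To make that concrete, here's a toy sketch of the polling model in
Python.  To be clear: the real dispatcher is a PHP maintenance
script, and the table, column, and function names below (wb_changes,
notify_client, etc.) are my assumptions for illustration, not the
actual code.

    # Toy sketch of the single-poller model; all names illustrative.
    import sqlite3

    CLIENT_WIKIS = ["test2wiki", "huwiki"]  # short list... for now

    def fetch_new_changes(db, last_seen_id):
        """Read changes recorded on the repo since the last pass."""
        cur = db.execute(
            "SELECT change_id, entity_id FROM wb_changes"
            " WHERE change_id > ? ORDER BY change_id",
            (last_seen_id,),
        )
        return cur.fetchall()

    def notify_client(wiki, change_id, entity_id):
        """Stand-in for purging/re-rendering affected client pages."""
        print(f"{wiki}: entity {entity_id} changed ({change_id})")

    def run_pass(db, state):
        """One cron-style pass: push each new change to each client."""
        for change_id, entity_id in fetch_new_changes(
                db, state["last_seen"]):
            for wiki in CLIENT_WIKIS:
                notify_client(wiki, change_id, entity_id)
            state["last_seen"] = change_id

    if __name__ == "__main__":
        db = sqlite3.connect(":memory:")  # stand-in for the repo DB
        db.execute("CREATE TABLE wb_changes"
                   " (change_id INTEGER PRIMARY KEY, entity_id TEXT)")
        db.execute("INSERT INTO wb_changes (entity_id) VALUES ('Q64')")
        state = {"last_seen": 0}
        # In production, cron re-runs the script periodically;
        # clients stay stale until the next pass happens to run.
        run_pass(db, state)

The key property (and the problem) is the single poller: every
client's freshness depends on how often that one loop gets to run.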

We've declared that Good Enough(tm) for now, where "now" is the period
of time where we'll be running the Wikidata client on a small number
of wikis (currently test2, soon Hungarian Wikipedia).

The problem is that we haven't nailed down a plan for a permanent
solution.  It feels like we should make this work with the job
queue, but the worry is that once Wikidata clients are on every
single wiki, we'll generate hundreds of jobs (one per wiki) for
every change made on the central wikidata.org wiki.
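
To frame that fan-out, and one shape an alternative could take: the
toy Python sketch below keeps a per-wiki cursor into a shared change
log and enqueues one batch job per wiki, instead of one job per
(change, wiki) pair.  Everything here is hypothetical and just for
discussion; it's not how our job queue works today.

    from collections import defaultdict

    CHANGE_LOG = []             # append-only log of repo change ids
    cursors = defaultdict(int)  # wiki -> index of first unseen change

    def record_change(change_id):
        """Called once per repo edit; O(1), not O(number of wikis)."""
        CHANGE_LOG.append(change_id)

    def make_batch_job(wiki):
        """One job per wiki drains everything since its cursor."""
        start = cursors[wiki]
        batch = CHANGE_LOG[start:]
        cursors[wiki] = len(CHANGE_LOG)
        return (wiki, batch)

    # 1000 repo edits still yield only one job per client wiki:
    for i in range(1000):
        record_change(i)
    for wiki in ("huwiki", "test2wiki"):
        name, batch = make_batch_job(wiki)
        print(name, "handles", len(batch), "changes in one job")

The tradeoff is that something still has to decide when to enqueue
those batch jobs and track the per-wiki cursors, which is more or
less what the dispatching section of the spec is about.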

Does anyone have guidance on what a permanent solution should look
like?  If you'd like to wait for Daniel to clarify some of the tech
details before answering, that's fine.

Rob

[1]  http://wikitech.wikimedia.org/view/Hume
