The problem simply wasn't visible before, so nothing has suddenly broken.

Last Sunday I counted, at one point, about 600 edits/min, and many of those
fanned out across several client sites. If I remember correctly, Daniel said
a week or two ago that we could handle about 7K changes per minute. If our
present edits are multiplied out across individual client sites, then it is
quite possible we are simply well above what we can handle.

The simplest way to solve this is to use a queue order that works for the bots.

Implement a maxlag for change dispatching and set it to something like
5 minutes by default for bots. All editing modules should check the maxlag.
A bot that hits the maxlag should incrementally add some delay to its edits,
slowing down; a successful edit then allows the bot to decrease the delay
again. This is slightly different from how other modules do it. All bots
will then adapt to the maximum throughput that keeps the lag acceptable.
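The back-off described above could be sketched roughly like this (a
hypothetical client-side throttle, not existing bot-framework code; the
class name, step and decay values are my own assumptions):

```python
import time

class AdaptiveDelay:
    """Grow the per-edit delay when the server reports that dispatch
    lag exceeds maxlag, and shrink it again after successful edits,
    so each bot converges on a sustainable rate."""

    def __init__(self, step=1.0, decay=0.9, max_delay=60.0):
        self.delay = 0.0          # current wait before each edit, seconds
        self.step = step          # seconds added per maxlag hit
        self.decay = decay        # shrink factor after a success
        self.max_delay = max_delay

    def on_maxlag_hit(self):
        # Server rejected the edit because lag exceeded maxlag: back off.
        self.delay = min(self.delay + self.step, self.max_delay)

    def on_success(self):
        # Edit went through: cautiously speed up again.
        self.delay *= self.decay

    def wait(self):
        if self.delay > 0:
            time.sleep(self.delay)
```

The bot calls `wait()` before each edit, then `on_maxlag_hit()` or
`on_success()` depending on the response. Additive increase with
multiplicative decrease is the same idea TCP uses for congestion control,
which is why a fleet of bots doing this settles near maximum throughput.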

An alternative is to not service an edit request until the delay (or some
delay) has been imposed server-side. There are several variations on this.

One way to enforce the delay is to serve back a special token, a waiting
ticket, which the bot can then use to get its request handled. Without the
token the bot will wait forever.
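A minimal sketch of that server-side ticket scheme (entirely hypothetical;
the class, the single-use rule, and the 5-minute default are my assumptions,
not an existing API):

```python
import time
import uuid

class TicketIssuer:
    """When the dispatch queue is lagged, answer an edit request with a
    waiting ticket instead of performing the edit. The ticket carries an
    earliest-service time; a request presented with a ripe ticket is
    handled, one without a valid ticket is never serviced."""

    def __init__(self, delay_seconds=300.0):
        self.delay = delay_seconds
        self.tickets = {}  # ticket id -> earliest service time

    def issue(self, now=None):
        now = time.time() if now is None else now
        ticket = uuid.uuid4().hex
        self.tickets[ticket] = now + self.delay
        return ticket

    def may_service(self, ticket, now=None):
        now = time.time() if now is None else now
        ready_at = self.tickets.get(ticket)
        if ready_at is None:
            return False          # no valid ticket: wait forever
        if now >= ready_at:
            del self.tickets[ticket]  # single use
            return True
        return False              # ticket not ripe yet: keep waiting
```

The advantage over a purely client-side back-off is that the server, which
knows the actual lag, controls the pacing, and a misbehaving bot gains
nothing by retrying early.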

If more iron is added later, throughput will increase and the maxlag will
be hit less and less often.
_______________________________________________
Wikidata-l mailing list
Wikidata-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-l