https://bugzilla.wikimedia.org/show_bug.cgi?id=54406

--- Comment #20 from Nemo <federicol...@tiscali.it> ---
I'm not sure how familiar you are with the actual writing and enacting of those
social rules, but nothing here was done against them. The social rules in the
various local variants of the global [[m:Bot policy]] are not about
performance; the speed limit is typically driven by users not wanting their
Special:RecentChanges and Special:Watchlist to be flooded, which in this case
obviously didn't happen (so let's not even start debating what counts as an
"edit").

Besides, I still don't have an answer to my question, though we're getting
closer; it would probably be faster if we avoided off-topic discussions of
alleged abuse and similar matters.

(In reply to comment #19)
> (In reply to comment #15)
> > I still see no answer about what the spike on July 29 was: are you saying it
> > wasn't about parsoid, but just a coincidence? Or that it was normal to queue
> > over a million jobs in a couple of days (initial caching of all pages or
> > something?) but the second million was too much?
> 
> Just editing a handful of really popular templates (some are used in >7
> million articles) can enqueue a lot of jobs (10 titles per job, so ~700k
> jobs). As can editing content at high rates. Core happens to cap the number
> of titles to re-render at 200k, while Parsoid re-renders all of them, albeit
> with a delay.
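
To make that arithmetic concrete, here is a minimal sketch (Python; the batch
size and cap are the numbers quoted above, the function name is mine) of how
batching titles into jobs of 10 produces those counts:

    import math

    TITLES_PER_JOB = 10          # batch size mentioned in the comment
    CORE_RERENDER_CAP = 200_000  # core's cap on titles to re-render

    def jobs_for(titles, cap=None):
        # Number of jobs enqueued to re-render `titles` pages,
        # optionally capping the title count (as core does) before batching.
        if cap is not None:
            titles = min(titles, cap)
        return math.ceil(titles / TITLES_PER_JOB)

    # A template transcluded in ~7 million articles:
    print(jobs_for(7_000_000))                     # 700000 jobs (Parsoid: no cap)
    print(jobs_for(7_000_000, CORE_RERENDER_CAP))  # 20000 jobs (core: capped at 200k)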

Thanks, so I guess the answer is the second. Can you then explain why "the
second million was too much", i.e. why reaching a 2M job queue is in your
opinion perfectly fine and normal while 3M is something absolutely horrible
and criminal? Thanks.
