Michael Rosenthal wrote:
> I suggest keeping the bug on Wikimedia's servers and using a tool which
> relies on SQL databases. These could be shared with the toolserver,
> where the "official" version of the analysis tool runs and users are
> able to run their own queries (so picking a tool with a good
> database structure would be nice). That way, toolserver users
> could build their own cool tools on that data.

Well, the original problem is that Wikipedia gets so many page views that
writing each one to a database would simply melt that database: we are talking
about 50,000 hits per second. This is of course also true for the toolserver.

I was thinking about a solution that uses sampling, or one that would only be
applied to specific pages or small projects. We had something similar for the
old wikicharts.
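To illustrate the sampling idea: instead of writing every hit, record only a
random 1-in-N fraction and scale the stored counts back up. This is a minimal
sketch, not the actual wikicharts implementation; the names `record_hit`,
`db_write`, and `SAMPLE_RATE` are all hypothetical.

```python
import random

SAMPLE_RATE = 1000  # hypothetical: keep roughly 1 in every 1000 hits


def record_hit(page, db_write):
    """Record a page view with probability 1/SAMPLE_RATE.

    Each stored record carries count=SAMPLE_RATE, so summing the
    stored counts yields an unbiased estimate of the true hit total
    while the database sees only ~0.1% of the write load.
    """
    if random.random() < 1.0 / SAMPLE_RATE:
        db_write(page, count=SAMPLE_RATE)
```

At 50,000 hits per second, a 1/1000 sample reduces the write rate to about 50
rows per second, which an ordinary SQL database can absorb; the trade-off is
that counts for rarely viewed pages become noisy or zero, which is why
restricting exact counting to specific pages or small projects is the other
option mentioned above.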

-- daniel

_______________________________________________
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l