These are all good points, and we certainly do need better tooling for
individual developers.

There is a lot a developer can do on just a laptop in terms of profiling
code that, if done consistently, could go a long way, even without the
environment looking anything like production.  Things like understanding
whether algorithms or queries are O(n) or O(2^n), and thinking about the
potential size of the relevant production data set, might be more useful
at that stage than raw numbers.  When it comes to gathering numbers in
such an environment, it would be helpful if either the MediaWiki profiler
gained an easy visualization interface appropriate for such environments,
or we standardized around something like Xdebug.
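
As a rough illustration (made-up sizes and a placeholder doWork()
function, nothing MediaWiki-specific), timing a code path at a few input
sizes is often enough to spot super-linear behavior on a laptop:

<?php
// Time a stand-in code path at increasing input sizes to see whether it
// scales linearly or worse.  doWork() is a placeholder for whatever is
// actually being profiled.
function doWork( array $items ) {
    $sum = 0;
    foreach ( $items as $a ) {
        foreach ( $items as $b ) { // accidentally quadratic
            $sum += $a + $b;
        }
    }
    return $sum;
}

foreach ( array( 1000, 2000, 4000 ) as $n ) {
    $start = microtime( true );
    doWork( range( 1, $n ) );
    printf( "n=%d: %.3fs\n", $n, microtime( true ) - $start );
}
// If doubling n roughly quadruples the time, that's O(n^2) behavior,
// no matter how unlike production the laptop is.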

The beta cluster has some potential as a performance test bed, if only it
could gain a guarantee that the compute nodes it runs on aren't
oversubscribed, or that the beta virts are otherwise consistently
resourced.  By running the same set of performance benchmarks against
beta and production, we may be able to gain insight into how new features
are likely to perform.
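
To make that concrete, here is a rough sketch of the kind of comparison I
have in mind; the URLs, sample count, and percentile helper are all
placeholders rather than a proposal for real tooling:

<?php
// Fetch the same page repeatedly from beta and production and compare
// latency percentiles.  URLs and sample size are placeholders.
function sampleLatencies( $url, $n = 50 ) {
    $times = array();
    for ( $i = 0; $i < $n; $i++ ) {
        $ch = curl_init( $url );
        curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
        curl_exec( $ch );
        $times[] = curl_getinfo( $ch, CURLINFO_TOTAL_TIME );
        curl_close( $ch );
    }
    sort( $times );
    return $times;
}

function percentile( array $sorted, $p ) {
    return $sorted[ (int)floor( $p / 100 * ( count( $sorted ) - 1 ) ) ];
}

$targets = array(
    'beta' => 'https://en.wikipedia.beta.wmflabs.org/wiki/Main_Page',
    'prod' => 'https://en.wikipedia.org/wiki/Main_Page',
);
foreach ( $targets as $env => $url ) {
    $t = sampleLatencies( $url );
    printf( "%s: p50=%.0fms p90=%.0fms p99=%.0fms\n", $env,
        percentile( $t, 50 ) * 1000, percentile( $t, 90 ) * 1000,
        percentile( $t, 99 ) * 1000 );
}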

Beyond due diligence while architecting and implementing a feature, I'm
actually a proponent of testing in production, albeit in limited ways.
Not as with test.wikipedia.org, which ran on the production cluster, but
by deploying a feature to 5% of enwiki users, or 10% of pages, or 20% of
editors.  Once something is deployed like that, we do indeed have tooling
available to gather hard performance metrics of the sort I proposed,
though it can always be improved upon.
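
The bucketing behind "5% of users" can be as simple as hashing a stable
identifier; a minimal sketch, with the feature name, threshold, and user
ID all invented for illustration:

<?php
// Deterministic percentage bucketing: hashing a stable identifier keeps
// each user (or page, or editor) in the same bucket across requests, so
// a feature can start at 5% and be widened later without reshuffling
// who sees it.
function inRollout( $featureName, $stableId, $percent ) {
    $bucket = abs( crc32( $featureName . ':' . $stableId ) ) % 100;
    return $bucket < $percent;
}

$userId = 12345; // example stable identifier
$enabled = inRollout( 'some-new-feature', (string)$userId, 5 );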

It became apparent that ArticleFeedbackV5 had severe scaling issues after
being enabled on 10% of the articles on enwiki.  In that case, I think an
architecture review, or local testing by the developers, could have
caught that issuing 17 database write statements per submission of an
anonymous text box destined for the bottom of every Wikipedia article was
a bad idea.  But it's really great that it was incrementally deployed and
that we could halt its progress before the resulting issues got too
serious.
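
For what it's worth, the kind of fix such a review tends to suggest is
often straightforward, e.g. collapsing per-field INSERTs into one batched
statement.  A generic PDO sketch with an invented table, not the actual
AFTv5 schema:

<?php
// Instead of one INSERT per field (many round trips per submission),
// batch the rows into a single multi-row INSERT.  Table and column
// names are invented for illustration.
function saveFeedback( PDO $db, $pageId, array $fields ) {
    $placeholders = array();
    $params = array();
    foreach ( $fields as $name => $value ) {
        $placeholders[] = '(?, ?, ?)';
        array_push( $params, $pageId, $name, $value );
    }
    $sql = 'INSERT INTO feedback_field (page_id, field_name, field_value) '
        . 'VALUES ' . implode( ', ', $placeholders );
    $db->prepare( $sql )->execute( $params );
}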

That rollout methodology should be considered a great success.  If it
becomes the norm, perhaps it won't be difficult to get to the point where
we can have actionable performance standards for new features, via a
process that actually encourages getting features into production instead
of acting as a complicated roadblock.

On Fri, Mar 22, 2013 at 1:20 PM, Arthur Richards <aricha...@wikimedia.org>
wrote:

> Right now, I think many of us profile locally or in VMs, which can be
> useful for relative metrics or quickly identifying bottlenecks, but doesn't
> really get us the kind of information you're talking about from any sort of
> real-world setting, or in any way that would be consistent from engineer to
> engineer, or even necessarily from day to day. From network topology to
> article counts/sizes/etc and everything in between, there's a lot we can't
> really replicate or accurately profile against. Are there plans to put
> together and support infrastructure for this? It seems to me that this
> proposal is contingent upon a consistent environment accessible by
> engineers for performance testing.
>
>
> On Thu, Mar 21, 2013 at 10:55 PM, Yuri Astrakhan
> <yastrak...@wikimedia.org> wrote:
>
> > The API is fairly complex to measure and set performance targets for.
> > If a bot requests 5000 pages in one call, together with all links &
> > categories, it might take a very long time (seconds, if not tens of
> > seconds).  Comparing that to another API request that gets an HTML
> > section of a page, which takes a fraction of a second (especially when
> > coming from cache), is not very useful.
> >
> >
> > On Fri, Mar 22, 2013 at 1:32 AM, Peter Gehres <li...@pgehres.com> wrote:
> >
> > > From where would you propose measuring these data points?  Obviously
> > > network latency will have a great impact on some of the metrics, and
> > > a consistent location would help to define the pass/fail of each
> > > test.  I do think another benchmark for Ops to "feature" would be a
> > > set of latency-to-datacenter values, but I know that is a much harder
> > > task.  Thanks for putting this together.
> > >
> > >
> > > On Thu, Mar 21, 2013 at 6:40 PM, Asher Feldman
> > > <afeld...@wikimedia.org> wrote:
> > >
> > > > I'd like to push for a codified set of minimum performance standards
> > > > that new MediaWiki features must meet before they can be deployed to
> > > > larger Wikimedia sites such as English Wikipedia, or be considered
> > > > complete.
> > > >
> > > > These would look like (numbers pulled out of a hat, not actual
> > > > suggestions):
> > > >
> > > > - p999 (long tail) full page request latency of 2000ms
> > > > - p99 page request latency of 800ms
> > > > - p90 page request latency of 150ms
> > > > - p99 banner request latency of 80ms
> > > > - p90 banner request latency of 40ms
> > > > - p99 db query latency of 250ms
> > > > - p90 db query latency of 50ms
> > > > - 1000 write requests/sec (if applicable; write operations must be
> > > >   free from concurrency issues)
> > > > - guidelines about degrading gracefully
> > > > - specific limits on total resource consumption across the stack per
> > > > request
> > > > - etc..
> > > >
> > > > Right now, varying amounts of effort are made to highlight potential
> > > > performance bottlenecks in code review, and engineers are encouraged
> > > > to profile and optimize their own code.  But beyond "is the site
> > > > still up for everyone / are users complaining on the village pump /
> > > > am I ranting in irc", we've offered no guidelines as to what sort of
> > > > request latency is reasonable or acceptable.  If a new feature (like
> > > > aftv5, or flow) turns out not to meet perf standards after
> > > > deployment, that would be a high priority bug and the feature may be
> > > > disabled depending on the impact, or if not addressed in a
> > > > reasonable time frame.  Obviously standards like this can't be
> > > > applied to certain existing parts of MediaWiki, but systems other
> > > > than the parser or preprocessor that don't meet new standards should
> > > > at least be prioritized for improvement.
> > > >
> > > > Thoughts?
> > > >
> > > > Asher
>
>
>
> --
> Arthur Richards
> Software Engineer, Mobile
> [[User:Awjrichards]]
> IRC: awjr
> +1-415-839-6885 x6687
_______________________________________________
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
