One other thing that sysadmins need to understand is big O notation,

and also the fact that sometimes an O(N^2) algorithm may be preferable to an O(log N) algorithm: if N is small, the fixed overhead of the 'more efficient' algorithm can be higher than that of the dumb one.
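
A minimal sketch of that trade-off in Python (the sort implementations, input size, and repetition count are illustrative assumptions, not something from this thread): a pure-Python O(N^2) insertion sort against a pure-Python O(N log N) merge sort on a tiny input, where the simple loop's lack of bookkeeping usually wins.

import random
import timeit

def insertion_sort(a):
    # "dumb" O(N^2) sort: almost no per-call overhead
    a = list(a)
    for i in range(1, len(a)):
        x = a[i]
        j = i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def merge_sort(a):
    # "smart" O(N log N) sort: recursion, slicing, and merging all cost something
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

small = [random.random() for _ in range(8)]   # tiny N
print("insertion:", timeit.timeit(lambda: insertion_sort(small), number=100_000))
print("merge    :", timeit.timeit(lambda: merge_sort(small), number=100_000))

Rerun it with a few thousand elements instead of eight and the merge sort pulls ahead, which is exactly what the big O terms predict.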

David Lang

 On Tue, 17 Sep 2013, Dana Quinn wrote:

On Tue, Sep 17, 2013 at 1:20 PM, Derek Balling <[email protected]> wrote:


On Sep 17, 2013, at 4:18 PM, [email protected] wrote:

On Tue, Sep 17, 2013 at 11:44:22AM -0400, Doug Hughes wrote:
Oh, and P95/P99 distributions (commonly used for billing by network
carriers on MPLS)

If you're only paying attention to p95/p99 in relation to the network, you're
likely to be missing interesting and useful information.
E.g., knowing that on average a user gets a page returned in 2 seconds is
great, but if your p95 is out in the 30-second region, that's a number of
potentially unhappy users.
I'd see the ideal goal as getting your mean as low as possible *and*
your p95/p99 as close to your mean as possible :)
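
A minimal sketch of that mean-vs-tail point (the latency numbers are made up to echo the 2-second/30-second example above):

import statistics

# 90 fast requests and 10 slow ones (response times in seconds)
latencies = [2.0] * 90 + [30.0] * 10

cuts = statistics.quantiles(latencies, n=100)   # 99 percentile cut points
print("mean:", statistics.mean(latencies))      # 4.8  -- looks tolerable
print("p95 :", cuts[94])                        # 30.0 -- the unhappy users
print("p99 :", cuts[98])                        # 30.0

The mean looks fine while p95/p99 sit right on the slow tail, which is what the percentile view is for.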

In my experience, p95/p99 data is about consumption, not
latency/performance metrics.


There are a number of people who look at web performance in terms of
percentile latency metrics. Amazon is the best-known big company that does
this, but I know that Google definitely does this as well. Amazon talks
about it as TP50, TP90, TP99, and so on; TP stands for transaction percentile.

Here's a nice blog post from 37signals on the "problem with averages" -
http://37signals.com/svn/posts/1836-the-problem-with-averages

Dana

_______________________________________________
Discuss mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/discuss
This list provided by the League of Professional System Administrators
 http://lopsa.org/