>
> Although there are already 30+ companies and open-source projects with
> sFlow collectors, I fully expect most memcached users will write their
> own collection-and-analysis tools once they can get this data! Don't
> you agree? So it's not about any one collector, it's about
> defining a useful, scalable measurement that everyone can feel
> comfortable using, even in production, even on the largest clusters.
>
> On a positive note, it does seem like there is some consensus on the
> value of random-transaction-sampling here. But do we have agreement
> that this feed should be made available for external consumption (i.e.
> the whole cluster sends to one place that is not itself a memcached
> node), and that UDP should be used as the transport? I'd like to
> understand if we are on the same page when it comes to these broader
> architectural questions.

Don't forget the original thread as well. I'm trying to solve two issues:

1) Sampling useful data out of a cluster.

2) Providing something useful for application developers.

The second case is an OS X user who fires up memcached locally, writes
some Rails code, then wonders what's going on under the hood. 1-in-1000
sampling is counterproductive there, and headers alone are often useless.
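A configurable rate makes the mismatch concrete: at 1-in-1000, a developer's few hundred local requests yield essentially no samples, while rate=1 (keep everything) is what the local-development case needs. A hypothetical sketch of random 1-in-N sampling, not memcached's actual interface:

```python
import random

def sampler(rate):
    """1-in-`rate` random transaction sampling.

    rate=1 keeps every transaction (the local-dev case);
    rate=1000 is the large-production-cluster case.
    """
    def should_sample():
        # Each transaction independently has a 1/rate chance of
        # being sampled, so the feed stays unbiased at any load.
        return rate <= 1 or random.randrange(rate) == 0
    return should_sample
```

At rate=1000, the expected sample count from 200 local requests is 0.2, i.e. the developer almost certainly sees nothing.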

stats cachedump is most often used for the latter, and everyone needs to
remember that users never get to 1) if they can't figure out 2). Maybe I
should flip those priorities around?

-Dormando
