On Mon, 2012-07-30 at 15:16 +0300, Tamar Fraenkel wrote:
> How do you make this calculation?

Working it through again, the 2.8% figure from before holds up - my
first re-derivation gave 2.7% because it dropped the case where all
three servers are slow.


You're sending each read request to RF servers and waiting for the
first CL responses - so the request's latency is effectively the CL-th
fastest of the RF replies.

For N=2, CL=1 - the read is only slow if both servers hit the worst 10%
latency, so the probability is 0.1 * 0.1 = 1%

For N=3, CL=2 - the read is slow if at least two of the three servers
hit the worst 10% latency. There are three ways to choose the two slow
servers, plus the case where all three are slow:

3 * (0.1 * 0.1 * 0.9) + (0.1 * 0.1 * 0.1) = 2.7% + 0.1% = 2.8%
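
If it's useful, here's a quick Python sketch of the general case. (The
function name is mine, and it assumes each server independently has
probability p of a slow response, with the read completing on the CL-th
fastest reply.)

    from math import comb

    def p_slow_read(rf, cl, p=0.1):
        # The coordinator returns once cl of the rf replicas have
        # replied, so the read is slow iff at least (rf - cl + 1)
        # replicas respond slowly.
        k_min = rf - cl + 1
        return sum(comb(rf, k) * (p ** k) * ((1 - p) ** (rf - k))
                   for k in range(k_min, rf + 1))

    print(p_slow_read(2, 1))  # ~0.010 -> 1%
    print(p_slow_read(3, 2))  # ~0.028 -> 2.8%

(It's just the upper tail of a binomial distribution over the RF
servers.)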


Tim




> 
> Thanks,
> Tamar Fraenkel 
> Senior Software Engineer, TOK Media 
> 
> ta...@tok-media.com
> Tel:   +972 2 6409736 
> Mob:  +972 54 8356490 
> Fax:   +972 2 5612956 

> 
> On Mon, Jul 30, 2012 at 3:14 PM, Tim Wintle <timwin...@gmail.com>
> wrote:
>         On Mon, 2012-07-30 at 14:40 +0300, Tamar Fraenkel wrote:
>         > Hi!
>         > To clarify it a bit more,
>         > Let's assume the setup is changed to
>         > RF=3
>         > W_CL=QUORUM (or two for that matter)
>         > R_CL=ONE
>         
>         > The setup will now work for both read and write in case of
>         > one node failure.
>         > What are the disadvantages, other than the disk space needed
>         > to replicate everything thrice instead of twice? Will it
>         > also affect performance?
>         
>         
>         (I'm also running RF2, W_CL1, R_CL1 atm - so this is
>         theoretical)
>         
>         As I understand it, the most significant performance hit will
>         be to the variation in response time.
>         
>         For example, with R_CL1, roughly 1% of requests will take
>         longer than the worst 10% of single-server response times.
>         With R_CL=QUORUM, 2.8% of requests will be that slow
>         (assuming I've calculated that right).
>         
>         Tim

