…client side and we see the 95th percentile response time averages at 40ms,
which is a bit high. Our 50th percentile was great, under 3ms.

Any suggestion is very much appreciated.

Thanks.
-Wei

----- Original Message -----
From: "aaron morton"
To: "Cassandra User"
Sent: Thursday, February 21, 2013 9:20:49 AM
Subject: Re: Mutation dropped

> What does rpc_timeout …
> …number, say 20ms?
>
> Thanks.
> -Wei
>
> From: aaron morton
> To: user@cassandra.apache.org
> Sent: Tuesday, February 19, 2013 7:32 PM
> Subject: Re: Mutation dropped
>
>> Does the rpc_timeout not control the client timeout ?
> No, it is how long a node will wait for a response from other nodes before …
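For reference, the timeout under discussion is set in cassandra.yaml. A minimal
sketch, assuming the 1.1-era single setting; 1.2 splits it into per-operation
timeouts such as read_request_timeout_in_ms and write_request_timeout_in_ms:

    # cassandra.yaml -- illustrative value; the shipped default is 10000 (10s)
    rpc_timeout_in_ms: 10000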
…which is configurable to control the replication timeout between nodes ? Or is
the same param used to control that, since the other node is also like a
client ?

From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: 17 February 2013 11:26
To: user@cassandra.apache.org
Subject: Re: Mutation dropped

You are hitting the maximum throughput on the cluster.
The messages are dropped because the node fails to start processing them before
rpc_timeout.
However the request is still a success because the client requested CL was
achieved.
Testing with RF 2 and CL 1 really just tests the disks on …
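The drop counters described here are visible per node with standard nodetool;
a quick illustration (host and numbers are made up, output abbreviated):

    $ nodetool -h 10.x.x.x tpstats
    Pool Name        Active  Pending  Completed
    MutationStage         2       15    8925367
    ...
    Message type   Dropped
    MUTATION          1234
    READ                 0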
Hi - Is there a parameter which can be tuned to prevent the mutations from
being dropped ? Is this logic correct ?

Node A and B with RF=2, CL=1, load balanced between the two.

[nodetool status output truncated; only the header row survives:]
--  Address   Load  Tokens  Owns (effective)  Host ID  Rack
UN  10.x.x.x  …
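For anyone reproducing this setup: the consistency level is a per-request,
client-side setting, not a cluster property. In cqlsh, for example (client
APIs differ):

    cqlsh> CONSISTENCY ONE;
    Consistency level set to ONE.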
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Monday, March 05, 2012 11:07 PM
To: user@cassandra.apache.org
Subject: Re: Mutation Dropped Messages

> I increased the size of the cluster and also the concurrent_writes parameter.
> Still there is a node which keeps on dropping the mutation messages.
Ensure all the nodes have the same spec, and the nodes have the same config. In
a virtual environment consider moving the …
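A cheap way to act on the "same config" advice: compare checksums of
cassandra.yaml across the fleet (hostnames and path are assumptions; adjust
for your install):

    $ for h in node1 node2 node3; do
    >   ssh "$h" md5sum /etc/cassandra/cassandra.yaml
    > done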
> Thanks,
> Dushyant
>
> From: aaron morton [mailto:aa...@thelastpickle.com]
> Sent: Monday, March 05, 2012 4:15 PM
> To: user@cassandra.apache.org
> Subject: Re: Mutation Dropped Messages
>
> 1. Which parameters to tune in the config files? - Especially looking
> for heavy writes

The node is overloaded. It may be because there are not enough nodes, or the
node is under temporary stress such as GC or repair.
If you have spare IO / CPU capacity you could increase the concurrent_writes …
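For what it's worth, a minimal cassandra.yaml sketch of that knob; the value is
illustrative, and the shipped config's own comment suggests roughly 8 x number
of CPU cores for write-heavy workloads:

    # cassandra.yaml -- illustrative; default config suggests ~8 x CPU cores
    concurrent_writes: 32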