On Sun, Jul 20, 2014 at 6:12 PM, Diane Griffith <dfgriff...@gmail.com>
wrote:

> I am running tests again across different numbers of client threads and
> numbers of nodes, but this time I tweaked some of the timeouts configured for
> the nodes in the cluster.  I was able to get better performance on the
> nodes at 10 client threads by upping 4 timeout values in cassandra.yaml to
> 240000:
>

If you have to tune these timeout values, you have probably modeled data in
such a way that each of your requests is "quite large" or "quite slow".

This is usually, but not always, an indicator that you are Doing It Wrong.
Massively multithreaded things don't generally like their threads to be
long-lived, for what should hopefully be obvious reasons.
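
For readers following along, cassandra.yaml in the 2.0.x line exposes a
handful of client-request timeouts, all in milliseconds. The original message
does not say which four were raised, so the sketch below only illustrates the
kind of change being described, not the poster's actual edit:

    # cassandra.yaml (2.0.x) client request timeouts, values in milliseconds.
    # Which four were actually raised to 240000 is not stated in the message;
    # these names and values are illustrative only.
    read_request_timeout_in_ms: 240000     # single-partition reads
    range_request_timeout_in_ms: 240000    # range scans
    write_request_timeout_in_ms: 240000    # writes
    request_timeout_in_ms: 240000          # catch-all for other request types
    # Related knobs, normally left at their defaults:
    # cas_contention_timeout_in_ms, truncate_request_timeout_in_ms

The shipped defaults for these are measured in seconds, so 240000 ms (four
minutes) per request is a very large window for a thread to stay tied up,
which is what the long-lived-threads concern above is getting at.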


> I did this because of my interpretation of the cfhistograms output on one
> of the nodes.
>

Could you be more specific?


> So, 3 questions come to mind:
>
>
>    1. Did I interpret the histogram information correctly in the Cassandra
>    2.0.6 nodetool output?  That is, in the two-column read latency output, the
>    left column is the offset (the latency value in milliseconds) and the right
>    column is the number of requests that fell into that bucket range.
>    2. Was it reasonable for me to boost those 4 timeouts and just those?
>

Not really. In 5 years of operating Cassandra, I've never had a problem
whose solution was to increase these timeouts from their defaults.

>
>    3. What are reasonable timeout values for smaller VM sizes (e.g. 8GB
>    RAM, 4 CPUs)?
>

As above, I question the premise of this question.

=Rob
