Thank you all for the response. I figured out the root cause.
I thought all my data was in the memtable only, but it was actually being
flushed to disk. That's why I was noticing the drop in throughput.
On Wed, May 24, 2017 at 9:42 AM, daemeon reiydelle wrote:
You speak of an increase. Please provide your results with specific
examples, e.g. a 25% increase in heap size results in an n% change in
throughput. Also please include the number of nodes, the total keyspace
size, the replication factor, etc.
Hopefully this is a 6-node cluster with several hundred gigabytes per
keyspace, not a single-node free-tier box.
A larger memtable means more time spent in flushes, and a larger heap means
longer GC pauses. You can see both in the system log.
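For reference, both events leave traces in Cassandra's system.log: GC pauses
are reported by GCInspector, and memtable flushes by the flush writer. A quick
way to spot them (assuming the default log location; adjust the path for your
install) is something like:

```shell
# Default log location on package installs; adjust for your setup
LOG=/var/log/cassandra/system.log

# GCInspector reports GC pauses longer than the configured threshold
grep 'GCInspector' "$LOG"

# Flush activity: memtables being enqueued and written out as SSTables
grep -E 'Enqueuing flush|Completed flushing' "$LOG"
```

Comparing the frequency and duration of these entries before and after a heap
change is usually enough to see whether flushes or GC pauses explain a
throughput drop.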
> On May 24, 2017, at 11:31 AM, preetika tyagi wrote:
Hi,
I'm experimenting with memtable/heap size on my Cassandra server to
understand how it impacts the latency/throughput for read requests.
I vary the heap size (-Xms and -Xmx) in jvm.options, so the memtable will be
1/4 of it. When I increase the heap size, and hence the memtable, I notice a
drop in throughput.
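For context, these are the knobs involved (names as in Cassandra 3.x; the 8G
heap below is just an illustrative value, and the 1/4-of-heap memtable size is
the documented default when memtable_heap_space_in_mb is left unset):

```
# jvm.options -- pin min and max heap to the same value
-Xms8G
-Xmx8G

# cassandra.yaml -- memtable size on heap;
# defaults to 1/4 of the heap when commented out
# memtable_heap_space_in_mb: 2048
```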