[ 
https://issues.apache.org/jira/browse/CASSANDRA-13241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15887602#comment-15887602
 ] 

Benjamin Roth commented on CASSANDRA-13241:
-------------------------------------------

Just thinking about Jeff's and Ben's comments:

Even if you have 4 TB of data and 32 GB RAM, 4 KB chunks might help. In that 
(extreme) case, you'd steal ~8 GB from the page cache for chunk-offset tables. 
Given the RAM-to-data ratio, those 8 GB would have contributed next to nothing 
as page cache anyway: unless you have a very, very low percentage of hot data, 
the page cache is almost totally ineffective, so the probability that nearly 
every read results in a physical IO is very high.
So in that case, lowering the chunk size to 4 KB would at least save you from 
immense overread and help the SSDs survive that situation.
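
Back-of-the-envelope for that ~8 GB figure (assuming the in-memory compression 
metadata keeps one 8-byte offset per chunk):

    4 TB / 4 KB per chunk        = 2^42 / 2^12 = 2^30 ~ 1.07e9 chunks
    2^30 chunks * 8 bytes/offset = 8 GB of chunk-offset data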

That said, I see only one REAL problem:
If you have more chunk-offset data than fits in your memory. But in that case 
my answer would simply be: get more RAM. There are certain minimum requirements 
you MUST fulfill. The idea of running a node with many TBs of data on less 
than, say, 16-32 GB is simply insane from every perspective.

Nevertheless, optimizing the memory usage of the chunk-offset lookup would 
still be a big win.
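
For anyone who wants to try this before any default changes, the chunk size is 
a per-table compression option; a minimal sketch (keyspace/table names are 
placeholders):

    -- 4 KB chunks instead of the 64 KB default (ks.tbl is a placeholder)
    ALTER TABLE ks.tbl
    WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};

Existing SSTables keep their old chunk size until rewritten, e.g. via 
nodetool upgradesstables -a.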

> Lower default chunk_length_in_kb from 64kb to 4kb
> -------------------------------------------------
>
>                 Key: CASSANDRA-13241
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13241
>             Project: Cassandra
>          Issue Type: Wish
>          Components: Core
>            Reporter: Benjamin Roth
>
> Too low a chunk size may result in some wasted disk space. Too high a chunk 
> size may lead to massive overreads and may have a critical impact on overall 
> system performance.
> In my case, the default chunk size led to peak read IO of up to 1 GB/s and 
> average reads of 200 MB/s. After lowering the chunk size (aligned with read 
> ahead, of course), the average read IO went below 20 MB/s, more like 
> 10-15 MB/s.
> The risk of (physical) overreads increases as the (page cache size) / 
> (total data size) ratio decreases.
> High chunk sizes are mostly appropriate for bigger payloads per request, but 
> if the model consists mostly of small rows or small result sets, the read 
> overhead with a 64 KB chunk size is insanely high. This applies, for example, 
> to (small) skinny rows.
> Please also see here:
> https://groups.google.com/forum/#!topic/scylladb-dev/j_qXSP-6-gY
> To give you some insights what a difference it can make (460GB data, 128GB 
> RAM):
> - Latency of a quite large CF: https://cl.ly/1r3e0W0S393L
> - Disk throughput: https://cl.ly/2a0Z250S1M3c
> - This shows that the request distribution remained the same, so no "dynamic 
> snitch magic": https://cl.ly/3E0t1T1z2c0J



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)