[ https://issues.apache.org/jira/browse/CASSANDRA-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Alan Boudreault updated CASSANDRA-8329:
---------------------------------------
    Attachment: test_with_patch_2.0.jpg
                test_no_patch_2.0.jpg

Devs, here are my test results.

h4. Test

* 12 disks of 2G in size
* Cassandra uses default values for concurrent_compactors and compaction_throughput_mb_per_sec.
* Goal: stress the server with many concurrent writes for 45-50 minutes.

h5. Results - No Patch

We can see the peak during the LCS compaction of big sstables.

!test_no_patch_2.0.jpg|thumbnail!

h5. Results - With Patch

Success. There is no longer a peak during the compaction.

!test_with_patch_2.0.jpg|thumbnail!

Let me know if I can do anything else.

> LeveledCompactionStrategy should split large files across data directories
> when compacting
> ------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-8329
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8329
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: J.B. Langston
>            Assignee: Marcus Eriksson
>             Fix For: 2.0.12
>
>         Attachments: 0001-get-new-sstable-directory-for-every-new-file-during-.patch, test_no_patch_2.0.jpg, test_with_patch_2.0.jpg
>
>
> Because we fall back to STCS for L0 when LCS gets behind, the sstables in L0
> can get quite large during sustained periods of heavy writes. This can
> result in large imbalances between data volumes when using JBOD support.
> Eventually these large files get broken up as L0 sstables are moved up into
> higher levels; however, because LCS only chooses a single volume on which to
> write all of the sstables created during a single compaction, the imbalance
> persists.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
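For readers following along, a minimal sketch of the idea behind the attached patch: instead of choosing one data directory up front for a whole compaction's output, pick a (possibly different) directory for every new sstable file, so a large compaction is spread across the JBOD volumes. The class and method names below are hypothetical illustrations, not Cassandra's actual API; the real code also weighs free disk space per volume rather than blindly round-robining.

{code:java}
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// Hypothetical sketch of per-sstable directory selection across JBOD volumes.
class CompactionDirectoryPicker {
    private final List<Path> dataDirectories;
    private int next = 0;

    CompactionDirectoryPicker(List<Path> dataDirectories) {
        this.dataDirectories = dataDirectories;
    }

    // Called once per new sstable file during a compaction, so the output
    // rotates across all configured data directories instead of landing
    // entirely on the single volume chosen at the start of the compaction.
    Path directoryForNewSSTable() {
        Path dir = dataDirectories.get(next);
        next = (next + 1) % dataDirectories.size();
        return dir;
    }
}

public class Main {
    public static void main(String[] args) {
        CompactionDirectoryPicker picker = new CompactionDirectoryPicker(
                List.of(Paths.get("/data1"), Paths.get("/data2"), Paths.get("/data3")));
        // Four output sstables from one compaction land on rotating volumes.
        for (int i = 0; i < 4; i++) {
            System.out.println("sstable-" + i + " -> " + picker.directoryForNewSSTable());
        }
    }
}
{code}

With the pre-patch behavior, all four files above would go to the directory picked when the compaction started, which is what produces the volume imbalance described in the issue.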