Avoid large array allocation for compressed chunk offsets
---------------------------------------------------------

                 Key: CASSANDRA-3432
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3432
             Project: Cassandra
          Issue Type: Improvement
          Components: Core
    Affects Versions: 1.0.0
            Reporter: Sylvain Lebresne
            Assignee: Sylvain Lebresne
            Priority: Minor
             Fix For: 1.0.2


For each compressed file we keep the chunk offsets in memory (a long[]). The 
size of this array is proportional to the sstable file size and inversely 
proportional to chunk_length_kb; for a 64GB sstable with the default 
chunk_length_kb of 64, that's ~1M chunks and thus ~8MB of offsets in memory.
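
As a rough back-of-the-envelope check (figures below are only illustrative, 
using the default 64KB chunk size from above):

    // Footprint of the current layout: one long offset per compressed chunk.
    long sstableBytes = 64L * 1024 * 1024 * 1024;  // 64GB sstable
    long chunkBytes   = 64L * 1024;                // chunk_length_kb = 64
    long chunkCount   = sstableBytes / chunkBytes; // 1,048,576 chunks
    long offsetsBytes = chunkCount * 8;            // 8 bytes per long => 8MB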

Without being absolutely huge, this probably makes the life of the GC harder 
than necessary, for the same reasons as CASSANDRA-2466, and this ticket 
proposes the same solution, i.e. breaking those big arrays down into smaller 
ones to ease heap fragmentation.
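
Something along these lines would do it (just a sketch to illustrate the 
idea; the class name and page size are made up, not taken from an actual 
patch):

    // Store chunk offsets in fixed-size pages instead of one contiguous
    // long[], so no single allocation is larger than one page.
    public class PagedLongArray
    {
        private static final int PAGE_SIZE = 4096; // longs per page (32KB)

        private final long[][] pages;
        private final long size;

        public PagedLongArray(long size)
        {
            this.size = size;
            int pageCount = (int) ((size + PAGE_SIZE - 1) / PAGE_SIZE);
            pages = new long[pageCount][];
            for (int i = 0; i < pageCount; i++)
            {
                // Last page may be shorter than PAGE_SIZE.
                long remaining = size - (long) i * PAGE_SIZE;
                pages[i] = new long[(int) Math.min(PAGE_SIZE, remaining)];
            }
        }

        public long get(long idx)
        {
            return pages[(int) (idx / PAGE_SIZE)][(int) (idx % PAGE_SIZE)];
        }

        public void set(long idx, long value)
        {
            pages[(int) (idx / PAGE_SIZE)][(int) (idx % PAGE_SIZE)] = value;
        }
    }

With 32KB pages, the 64GB sstable example above goes from one 8MB long[] to 
~256 small arrays that the GC can place independently.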

Note that this is only a concern for size-tiered compaction. But until leveled 
compaction is battle tested, made the default, and we know nobody uses 
size-tiered anymore, it's probably worth making this optimization.
