[ https://issues.apache.org/jira/browse/CASSANDRA-6609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Benedict updated CASSANDRA-6609:
--------------------------------

    Attachment: tmp3.patch

That is, it would be, were it not for an awful fat-finger error. Fixed, and I have 
also reintroduced a deoptimised public getHashBuckets for use by the unit tests 
(deoptimised because it permits far more hashes than we ever use in practice, so I 
skip the ThreadLocal rather than allocate a thread-local array that large).
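For context, a minimal sketch of the pattern described above; the class, method and
constant names (and the MAX_HASH_COUNT value) are illustrative assumptions, not the
actual patch:

// Illustrative sketch only -- not the actual CASSANDRA-6609 patch.
// The hot path reuses a per-thread long[] so hashing a key allocates nothing;
// the public "deoptimised" variant for unit tests allocates a fresh array,
// since tests may request far more hashes than the fixed-size buffer holds.
public class BloomFilterHashing
{
    // Sized for the maximum hash count used on the hot path (assumed value).
    private static final int MAX_HASH_COUNT = 20;

    private static final ThreadLocal<long[]> reusableBuckets = new ThreadLocal<long[]>()
    {
        @Override
        protected long[] initialValue()
        {
            return new long[MAX_HASH_COUNT];
        }
    };

    // Hot path: fills and returns the shared per-thread buffer, no allocation.
    static long[] hashBuckets(byte[] key, int hashCount, long numBuckets)
    {
        long[] indexes = reusableBuckets.get();
        fillBuckets(key, hashCount, numBuckets, indexes);
        return indexes;
    }

    // "Deoptimised" public variant for unit tests: skips the ThreadLocal and
    // allocates, so arbitrary hash counts are permitted.
    public static long[] getHashBuckets(byte[] key, int hashCount, long numBuckets)
    {
        long[] indexes = new long[hashCount];
        fillBuckets(key, hashCount, numBuckets, indexes);
        return indexes;
    }

    private static void fillBuckets(byte[] key, int hashCount, long numBuckets, long[] indexes)
    {
        // Double hashing: derive hashCount bucket indexes from two base hashes.
        long h1 = simpleHash(key, 0);
        long h2 = simpleHash(key, (int) h1);
        for (int i = 0; i < hashCount; i++)
            indexes[i] = ((h1 + i * h2) & Long.MAX_VALUE) % numBuckets;
    }

    // Placeholder hash for the sketch; the real code uses MurmurHash.
    private static long simpleHash(byte[] key, int seed)
    {
        long h = seed ^ 0x9E3779B97F4A7C15L;
        for (byte b : key)
            h = (h ^ b) * 0x100000001B3L;
        return h;
    }
}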

> Reduce Bloom Filter Garbage Allocation
> --------------------------------------
>
>                 Key: CASSANDRA-6609
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6609
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Benedict
>         Attachments: tmp.diff, tmp2.patch, tmp3.patch
>
>
> Just spotted that we allocate potentially large amounts of garbage on bloom 
> filter lookups, since we allocate a new long[] for each hash() call and another 
> to store the bucket indexes we visit, in a manner that guarantees they are 
> allocated on heap. With a lot of sstables and many requests, this could easily 
> add up to hundreds of megabytes of young-gen churn per second.
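To illustrate the allocation pattern the description refers to, here is a hedged
sketch (field and method names are made up for the example, not the real Cassandra
code): every lookup allocates a fresh long[] for its bucket indexes, on every
sstable's filter, so the garbage scales with request rate times sstable count.

// Hedged illustration of the per-lookup allocation, not the actual code.
import java.util.BitSet;

class AllocatingBloomFilter
{
    private final BitSet bitset;
    private final int hashCount;

    AllocatingBloomFilter(int numBits, int hashCount)
    {
        this.bitset = new BitSet(numBits);
        this.hashCount = hashCount;
    }

    boolean isPresent(byte[] key)
    {
        // A new long[] is allocated for every lookup; at a high request rate
        // these short-lived arrays become significant young-gen churn.
        long[] indexes = new long[hashCount];
        hash(key, indexes);
        for (long index : indexes)
            if (!bitset.get((int) index))
                return false;
        return true;
    }

    private void hash(byte[] key, long[] indexes)
    {
        // Placeholder double-hash for the sketch; the real code uses MurmurHash.
        long h = 0x9E3779B97F4A7C15L;
        for (byte b : key)
            h = (h ^ b) * 0x100000001B3L;
        for (int i = 0; i < indexes.length; i++)
            indexes[i] = ((h + i * (h >>> 17 | 1)) & Long.MAX_VALUE) % bitset.size();
    }
}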



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)