[ https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15260127#comment-15260127 ]

Branimir Lambov commented on CASSANDRA-5863:
--------------------------------------------

In the latest couple of updates I did some renaming:
- {{BufferlessRebufferer}} to {{ChunkReader}}, with {{rebuffer}} renamed to {{readChunk}}
- {{BaseRebufferer}} to {{ReaderFileProxy}}
- {{SharedRebufferer}} to {{RebuffererFactory}}, now with a factory method
- {{ReaderCache}} to {{ChunkCache}}

and updated some of the documentation. Hopefully this reads better now?
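For reference, the renamed pieces might fit together roughly like this. This is a simplified sketch for illustration only: the exact method signatures, the {{ArrayChunkReader}} class, and the {{readAsString}} helper are assumptions, not the actual patch.

```java
import java.nio.ByteBuffer;

public class ChunkReaderSketch {
    // ChunkReader (was BufferlessRebufferer): fills a caller-supplied buffer
    // with the chunk at the given position via readChunk (was rebuffer).
    interface ChunkReader {
        void readChunk(long position, ByteBuffer buffer);
        int chunkSize();
    }

    // RebuffererFactory (was SharedRebufferer): creates Rebufferers; this is
    // the seam where a ChunkCache can be layered over the raw ChunkReader.
    interface RebuffererFactory {
        Rebufferer instantiateRebufferer();
    }

    interface Rebufferer {
        ByteBuffer rebuffer(long position);
    }

    // Toy in-memory ChunkReader so the flow can be exercised.
    static class ArrayChunkReader implements ChunkReader {
        private final byte[] data;
        private final int chunkSize;

        ArrayChunkReader(byte[] data, int chunkSize) {
            this.data = data;
            this.chunkSize = chunkSize;
        }

        public int chunkSize() { return chunkSize; }

        public void readChunk(long position, ByteBuffer buffer) {
            int start = (int) position;
            buffer.clear();
            buffer.put(data, start, Math.min(chunkSize, data.length - start));
            buffer.flip();
        }
    }

    // Convenience for the demo: read the chunk at a position as a String.
    static String readAsString(ChunkReader reader, long position) {
        ByteBuffer buffer = ByteBuffer.allocate(reader.chunkSize());
        reader.readChunk(position, buffer);
        byte[] out = new byte[buffer.remaining()];
        buffer.get(out);
        return new String(out);
    }

    public static void main(String[] args) {
        ChunkReader reader = new ArrayChunkReader("hello world, chunked".getBytes(), 8);
        System.out.println(readAsString(reader, 8)); // prints "rld, chu"
    }
}
```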

Switched to Caffeine as planned in CASSANDRA-11452:
- [better cache efficiency|https://docs.google.com/spreadsheets/d/11VcYh8wiCbpVmeix10onalAS4phfREWcxE-RMPTM7cc/edit#gid=0] on CachingBench, which includes compaction, scans, and collation from multiple sstables
- [cstar_perf with everything served off cache|http://cstar.datastax.com/tests/id/b5963866-0b9a-11e6-a761-0256e416528f] shows equivalent performance, i.e. it does not degrade under heavy load
- [cstar_perf on a smaller cache|http://cstar.datastax.com/tests/id/41b4c650-0c6d-11e6-bf41-0256e416528f] shows a better hit rate even with uniformly random access patterns (48.8% vs 45.4%, as reported by nodetool info)
- unlike LIRS, memory overheads are tightly controlled and documented [here|https://github.com/ben-manes/caffeine/wiki/Memory-overhead]: at most 112 bytes per chunk including the key, i.e. from 0.2% for 64k chunks to 3% for 4k chunks.
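A quick back-of-envelope check of those percentages (112 bytes of cache metadata per chunk, including the key):

```java
// Sanity-check of the quoted Caffeine overhead figures: at most 112 bytes
// of per-entry metadata, expressed as a fraction of the chunk size.
public class CacheOverhead {
    static final int OVERHEAD_BYTES = 112;

    static double overheadPercent(int chunkBytes) {
        return 100.0 * OVERHEAD_BYTES / chunkBytes;
    }

    public static void main(String[] args) {
        // 112 / 65536 ~= 0.17%, quoted above as 0.2%
        System.out.printf("64k chunks: %.2f%%%n", overheadPercent(64 * 1024));
        // 112 / 4096 ~= 2.73%, quoted above as 3%
        System.out.printf(" 4k chunks: %.2f%%%n", overheadPercent(4 * 1024));
    }
}
```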

And finally rebased to get dtest in sync:
|[code|https://github.com/blambov/cassandra/tree/5863-page-cache-caffeine-rebased]|[utest|http://cassci.datastax.com/job/blambov-5863-page-cache-caffeine-rebased-testall/]|[dtest|http://cassci.datastax.com/job/blambov-5863-page-cache-caffeine-rebased-dtest/]|

> In process (uncompressed) page cache
> ------------------------------------
>
>                 Key: CASSANDRA-5863
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
>             Project: Cassandra
>          Issue Type: Sub-task
>            Reporter: T Jake Luciani
>            Assignee: Branimir Lambov
>              Labels: performance
>             Fix For: 3.x
>
>
> Currently, for every read, the CRAR (CompressedRandomAccessReader) reads 
> each compressed chunk into a byte[], sends it to ICompressor, gets back 
> another byte[], and verifies a checksum.  
> This process is where the majority of the time in a read request is spent.  
> Before compression, we would have zero-copy of the data and could respond 
> directly from the page cache.
> It would be useful to have some kind of chunk cache that could speed up 
> this process for hot data, possibly off-heap.
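The read path described above, with a cache of uncompressed chunks in front of it, could be sketched roughly like this. All names here are illustrative stand-ins: the plain HashMap replaces Caffeine, and identity "compression" replaces ICompressor, purely to show why caching the uncompressed chunk skips both decompression and checksum verification on hot reads.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.zip.CRC32;

public class ChunkCacheSketch {
    // Toy "disk": chunk position -> stored bytes (identity-"compressed" here).
    private final Map<Long, byte[]> disk = new HashMap<>();
    private final Map<Long, Long> checksums = new HashMap<>();
    // The proposed cache: position -> uncompressed chunk, so hot reads skip
    // the disk read, decompression, and checksum verification entirely.
    private final Map<Long, byte[]> cache = new HashMap<>();
    int diskReads = 0;

    void write(long position, byte[] chunk) {
        disk.put(position, chunk.clone());
        checksums.put(position, crc(chunk));
    }

    byte[] read(long position) {
        byte[] cached = cache.get(position);
        if (cached != null)
            return cached;                      // hot path: no I/O, no checksum
        diskReads++;
        byte[] compressed = disk.get(position); // CRAR-style chunk read
        if (crc(compressed) != checksums.get(position))
            throw new IllegalStateException("checksum mismatch");
        byte[] chunk = compressed;              // ICompressor stand-in (identity)
        cache.put(position, chunk);
        return chunk;
    }

    private static long crc(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue();
    }

    public static void main(String[] args) {
        ChunkCacheSketch c = new ChunkCacheSketch();
        c.write(0, "chunk-0".getBytes());
        c.read(0);
        c.read(0);
        System.out.println(c.diskReads); // prints 1: second read came from cache
    }
}
```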



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
