[ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13704045#comment-13704045 ]

Pavel Yaskevich edited comment on CASSANDRA-5661 at 7/10/13 12:54 AM:
----------------------------------------------------------------------

I have started working on integrating the multiway pool, and it looks like we
have two problems:

#1. Since each SegmentedFile has to return a unique instance via
"createReader(String)", LoadingCache won't do for us; we need get(K,
Callable) per SSTableReader.
#2. Since MultiWay returns a handle, I changed RAR to have a setHandle method
instead of passing SegmentedFile into the constructor, which seems a bit hacky
to me, as we need to be careful to maintain that relationship...
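To illustrate problem #1, here is a rough sketch of the lookup shape we need. This is not the actual patch: the class and names are hypothetical, and a stdlib ConcurrentHashMap stands in for the cache. The point is that the caller supplies the loader at lookup time, so each SSTableReader can pass its own createReader call, whereas a LoadingCache is built around one fixed loader for the whole cache.

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the get(K, Callable) shape described above:
// the loader travels with the call, not with the cache.
class ReaderCache<K, V>
{
    private final Map<K, V> cache = new ConcurrentHashMap<>();

    // Analogue of Cache.get(K, Callable): each SSTableReader can pass
    // its own createReader(path) as the loader for its keys.
    V get(K key, Callable<V> loader)
    {
        return cache.computeIfAbsent(key, k -> {
            try
            {
                return loader.call();
            }
            catch (Exception e)
            {
                throw new RuntimeException(e);
            }
        });
    }
}
```

With a single global LoadingCache we would instead have to fold the owning SSTableReader into the key and route every load through one shared CacheLoader, which is exactly what doesn't fit here.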

I did some performance testing (with the attached patch) where a MultiwayPool
is allocated per instance, because we can't specify a loader in borrow(...)
yet. That should be the best case for it, not in terms of memory usage but of
contention. I loaded 5,000,000 keys with the following stress command
(./tools/bin/cassandra-stress -n 5000000 -S 512 -C 20 -Z
LeveledCompactionStrategy) for the initial data, then ran it in a loop while
doing reads in parallel.

With writes:

Average read latency for MultiwayPool (ms): median 6.2, 95th 11.4, 99.9th 78.8
Average read latency for FileCacheService (ms): median 5.3, 95th 9.6, 99.9th 73.1

No writes, no compaction:

Average read latency for MultiwayPool (ms): median 2.3, 95th 3.2, 99.9th 21.3
Average read latency for FileCacheService (ms): median 1.7, 95th 2.9, 99.9th 19.2

I tried doing range_slice, but due to timeouts I couldn't really complete the
test on either implementation; median latencies on average differed by 3-4 ms.

Edit: I forgot to mention that I hardcoded maxSize per MultiwayPool instance,
which was fine for this test, but we really need a way to weight items if we
are going to use it globally.
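The weighting mentioned in the edit above would look roughly like the following. This is a minimal stdlib sketch, not the actual patch: the class, the weigher function, and the numbers are all hypothetical. The idea is to evict in LRU order once the summed weights cross a single global cap, instead of a hardcoded per-pool maxSize.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.ToLongFunction;

// Hypothetical sketch: a pool bounded by total item weight rather than
// by entry count, evicting least-recently-used entries past the cap.
class WeightedPool<K, V>
{
    private final long maxWeight;
    private final ToLongFunction<V> weigher;
    private long totalWeight = 0;
    // accessOrder = true gives LRU iteration order (eldest first)
    private final LinkedHashMap<K, V> entries = new LinkedHashMap<>(16, 0.75f, true);

    WeightedPool(long maxWeight, ToLongFunction<V> weigher)
    {
        this.maxWeight = maxWeight;
        this.weigher = weigher;
    }

    synchronized void put(K key, V value)
    {
        V old = entries.remove(key);
        if (old != null)
            totalWeight -= weigher.applyAsLong(old);
        entries.put(key, value);
        totalWeight += weigher.applyAsLong(value);
        // evict least-recently-used entries until back under the global cap
        Iterator<Map.Entry<K, V>> it = entries.entrySet().iterator();
        while (totalWeight > maxWeight && it.hasNext())
        {
            Map.Entry<K, V> eldest = it.next();
            totalWeight -= weigher.applyAsLong(eldest.getValue());
            it.remove();
        }
    }

    synchronized V get(K key)
    {
        return entries.get(key);
    }

    synchronized long weight()
    {
        return totalWeight;
    }
}
```

A weigher like this would let buffers of different sizes share one global budget, which is what a per-instance maxSize can't express.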

> Discard pooled readers for cold data
> ------------------------------------
>
>                 Key: CASSANDRA-5661
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5661
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.2.1
>            Reporter: Jonathan Ellis
>            Assignee: Pavel Yaskevich
>             Fix For: 2.0
>
>         Attachments: CASSANDRA-5661-multiway-per-sstable.patch, 
> CASSANDRA-5661.patch, DominatorTree.png, Histogram.png
>
>
> Reader pooling was introduced in CASSANDRA-4942 but pooled 
> RandomAccessReaders are never cleaned up until the SSTableReader is closed.  
> So memory use is "the worst case simultaneous RAR we had open for this file, 
> forever."
> We should introduce a global limit on how much memory to use for RAR, and 
> evict old ones.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
