[ https://issues.apache.org/jira/browse/CASSANDRA-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13435532#comment-13435532 ]

Yuki Morishita commented on CASSANDRA-4292:
-------------------------------------------

I ran tests against patched and trunk with a modified stress tool writing to 3 
CFs with leveled compaction.
The node consists of 6 spinning disks, and C* uses them all as data directories.
Although I see a difference in disk usage (the patched version distributes load 
evenly among the disks), there is still no difference in either write or 
compaction performance.
It seems that memtable flushing is sometimes blocked once a long-running 
compaction has already started, which causes GC pressure on the patched node.
Looks like I need to find a way to avoid queuing up memtable flush tasks.
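
For illustration only (this is not the attached patch, and the class and 
method names below are hypothetical), a minimal Java sketch of the kind of 
thread -> disk affinity being tested here, with flushes kept on a separate 
per-disk executor so they cannot queue behind a long-running compaction:

import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DiskBoundExecutors
{
    // One single-threaded executor per data directory for compaction, and a
    // separate one per directory for flushes, so the two task types never
    // queue behind each other on the same disk.
    private final Map<File, ExecutorService> compactionExecutors = new HashMap<>();
    private final Map<File, ExecutorService> flushExecutors = new HashMap<>();

    public DiskBoundExecutors(File[] dataDirectories)
    {
        for (File dir : dataDirectories)
        {
            compactionExecutors.put(dir, Executors.newSingleThreadExecutor());
            flushExecutors.put(dir, Executors.newSingleThreadExecutor());
        }
    }

    // A long compaction only ties up its own directory's compaction executor.
    public void submitCompaction(File dataDirectory, Runnable compactionTask)
    {
        compactionExecutors.get(dataDirectory).submit(compactionTask);
    }

    // Flushes run on their own executor, so memtables drain even while a
    // compaction is in progress on the same disk.
    public void submitFlush(File dataDirectory, Runnable flushTask)
    {
        flushExecutors.get(dataDirectory).submit(flushTask);
    }
}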
                
> Per-disk I/O queues
> -------------------
>
>                 Key: CASSANDRA-4292
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4292
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: Jonathan Ellis
>            Assignee: Yuki Morishita
>             Fix For: 1.2
>
>         Attachments: 4292.txt, 4292-v2.txt, 4292-v3.txt
>
>
> As noted in CASSANDRA-809, we have a certain number of flush (and compaction) 
> threads, which mix and match disk volumes indiscriminately.  It may be worth 
> creating a tight thread -> disk affinity, to prevent unnecessary conflict at 
> that level.
> OTOH as SSDs become more prevalent this becomes a non-issue.  Unclear how 
> much pain this actually causes in practice in the meantime.
