[ https://issues.apache.org/jira/browse/CASSANDRA-8177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183362#comment-14183362 ]
Janne Jalkanen commented on CASSANDRA-8177:
-------------------------------------------

Kinda having the same problem - attaching the compactions graph from Munin. Started a sequential repair on the 21st at midnight; it took something like 45 hours. On the 23rd and 24th we ran a repair with -par, which was much shorter and caused far less compaction traffic. This is with 2.0.10 on a production cluster with a 9:1 read/write ratio, recently upgraded from 1.2.18.

> sequential repair is much more expensive than parallel repair
> -------------------------------------------------------------
>
>                 Key: CASSANDRA-8177
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8177
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Sean Bridges
>            Assignee: Yuki Morishita
>         Attachments: cassc-week.png, iostats.png
>
>
> This is with 2.0.10.
> The attached graph shows IO read/write throughput (as measured with iostat) when doing repairs.
> The large hump on the left is a sequential repair of one node. The two much smaller peaks on the right are parallel repairs.
> This is a 3-node cluster using vnodes (I know vnodes on small clusters aren't recommended). Cassandra reports a load of 40 GB.
> We noticed a similar problem with a larger cluster.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
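For readers comparing the two modes discussed above: in Cassandra 2.0.x, `nodetool repair` defaults to sequential (snapshot-based) repair, and the `-par` flag switches to parallel repair, where Merkle trees are built on all replicas at once. A minimal CLI sketch (the keyspace name `my_keyspace` is a placeholder; these commands must run against a live cluster node):

```shell
# Sequential repair (the 2.0.x default): repairs replicas one at a
# time using snapshots. In the report above this corresponds to the
# long, compaction-heavy run (~45 h).
nodetool repair my_keyspace

# Parallel repair: -par builds Merkle trees on all replicas
# simultaneously. In the attached graphs this corresponds to the
# shorter runs with much less compaction traffic.
nodetool repair -par my_keyspace
```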