[ https://issues.apache.org/jira/browse/CASSANDRA-5932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ryan McGuire updated CASSANDRA-5932:
------------------------------------

    Reproduced In: 2.0 rc2, 2.0 rc1  (was: 2.0 rc1, 2.0 rc2)

      Description:

I've done a series of stress tests with eager retries enabled that show undesirable behavior. I'm grouping these behaviors into one ticket as they are most likely related.

1) Killing off a node in a 4-node cluster actually increases performance.
2) Compactions make nodes slow, even after the compaction is done.
3) Eager reads tend to lessen the *immediate* performance impact of a node going down, but not consistently.

My environment:

1 stress machine: node0
4 C* nodes: node4, node5, node6, node7

My script:

node0 writes some data: stress -d node4 -F 30000000 -n 30000000 -i 5 -l 2 -K 20
node0 reads some data: stress -d node4 -n 30000000 -o read -i 5 -K 20

h3. Examples:

h5. A node going down increases performance:

!node-down-increase-performance.png!

[Data for this test here|http://ryanmcguire.info/ds/graph/graph.html?stats=stats.eager_retry.node_killed.just_20.json&metric=interval_op_rate&operation=stress-read&smoothing=1]

h5. Compactions make nodes permanently slow:

!compaction-makes-slow.png!
!compaction-makes-slow-stats.png!

The green and orange lines represent trials with eager retry enabled; they never recover their op-rate from before the compaction, as the red and blue lines do.

[Data for this test here|http://ryanmcguire.info/ds/graph/graph.html?stats=stats.eager_retry.compaction.2.json&metric=interval_op_rate&operation=stress-read&smoothing=1]

h5. Speculative Read tends to lessen the *immediate* impact:

!eager-read-looks-promising.png!
!eager-read-looks-promising-stats.png!

This graph looked the most promising to me: the two trials with eager retry (the green and orange lines) showed the smallest dip in performance at 450s.

[Data for this test here|http://ryanmcguire.info/ds/graph/graph.html?stats=stats.eager_retry.node_killed.json&metric=interval_op_rate&operation=stress-read&smoothing=1]

h5. But not always:

!eager-read-not-consistent.png!
!eager-read-not-consistent-stats.png!

This is a retrial with the same settings as above, yet the 95percentile eager retry (red line) did poorly this time at 450s.

[Data for this test here|http://ryanmcguire.info/ds/graph/graph.html?stats=stats.eager_retry.node_killed.just_20.rc1.try2.json&metric=interval_op_rate&operation=stress-read&smoothing=1]
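For reference, a minimal sketch of how the per-table eager (speculative) retry setting is assumed to be applied between trials. This step is not shown in the ticket; it assumes the default stress schema ("Keyspace1"."Standard1") and a cqlsh session against node4, so adjust the keyspace/table names to match the actual test setup.

{noformat}
-- Hypothetical setup step (not from the ticket): enable eager/speculative retry
-- on the table the stress tool reads from. Valid values include 'NONE',
-- 'ALWAYS', 'Xpercentile', and 'Yms'.
ALTER TABLE "Keyspace1"."Standard1" WITH speculative_retry = '95percentile';

-- A baseline trial without eager retry would instead use:
ALTER TABLE "Keyspace1"."Standard1" WITH speculative_retry = 'NONE';
{noformat}

The '95percentile' value corresponds to the 95percentile eager retry trial (red line) referenced in the last graph.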
> Speculative read performance data show unexpected results
> ----------------------------------------------------------
>
>                 Key: CASSANDRA-5932
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5932
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Ryan McGuire
>         Attachments: compaction-makes-slow.png, eager-read-looks-promising.png, eager-read-not-consistent.png, node-down-increase-performance.png
>