[jira] [Comment Edited] (CASSANDRA-14106) utest failed: DistributionSequenceTest.setSeed() and simpleSequence()
[ https://issues.apache.org/jira/browse/CASSANDRA-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287226#comment-16287226 ]

Jay Zhuang edited comment on CASSANDRA-14106 at 12/12/17 7:30 AM:
------------------------------------------------------------------

Fixed the failed unittest and added an uTest for CASSANDRA-14090, please review:
| Branch | uTest |
| [14106|https://github.com/cooldoger/cassandra/tree/14106] | [!https://circleci.com/gh/cooldoger/cassandra/tree/14106.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/14106] |

was (Author: jay.zhuang):
Fixed the failed unittest and added an uTest for CASSANDRA-14090
| Branch | uTest |
| [14106|https://github.com/cooldoger/cassandra/tree/14106] | [!https://circleci.com/gh/cooldoger/cassandra/tree/14106.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/14106] |

> utest failed: DistributionSequenceTest.setSeed() and simpleSequence()
> ---------------------------------------------------------------------
>
>                 Key: CASSANDRA-14106
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14106
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Testing
>            Reporter: Jay Zhuang
>            Assignee: Jay Zhuang
>
> To reproduce:
> {noformat}
> $ ant stress-test -Dtest.name=DistributionSequenceTest
> {noformat}
> {noformat}
> stress-test:
>     [junit] Testsuite: org.apache.cassandra.stress.generate.DistributionSequenceTest
>     [junit] Testsuite: org.apache.cassandra.stress.generate.DistributionSequenceTest Tests run: 4, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec
>     [junit]
>     [junit] Testcase: simpleSequence(org.apache.cassandra.stress.generate.DistributionSequenceTest): FAILED
>     [junit] expected:<5> but was:<4>
>     [junit] junit.framework.AssertionFailedError: expected:<5> but was:<4>
>     [junit]     at org.apache.cassandra.stress.generate.DistributionSequenceTest.simpleSequence(DistributionSequenceTest.java:37)
>     [junit]
>     [junit] Testcase: setSeed(org.apache.cassandra.stress.generate.DistributionSequenceTest): FAILED
>     [junit] expected:<5> but was:<4>
>     [junit] junit.framework.AssertionFailedError: expected:<5> but was:<4>
>     [junit]     at org.apache.cassandra.stress.generate.DistributionSequenceTest.setSeed(DistributionSequenceTest.java:111)
>     [junit]
>     [junit] Test org.apache.cassandra.stress.generate.DistributionSequenceTest FAILED
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
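The {{expected:<5> but was:<4>}} failures above look like an off-by-one in how the sequence distribution covers its inclusive bounds. A minimal, hypothetical Java sketch of that failure class (not the actual DistributionSequence code; names are invented for illustration):

```java
// Hypothetical sketch: counting the values in an inclusive range [min, max].
// Treating the range as half-open drops the upper bound and yields 4 for
// [1, 5] where a test expects 5 -- the same shape as the assertion failures
// quoted above.
public class SequenceCountSketch {
    // Buggy variant: forgets that both endpoints are included.
    static long exclusiveCount(long min, long max) {
        return max - min;
    }

    // Fixed variant: an inclusive range [min, max] holds max - min + 1 values.
    static long inclusiveCount(long min, long max) {
        return max - min + 1;
    }
}
```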
[jira] [Updated] (CASSANDRA-14106) utest failed: DistributionSequenceTest.setSeed() and simpleSequence()
[ https://issues.apache.org/jira/browse/CASSANDRA-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jay Zhuang updated CASSANDRA-14106:
-----------------------------------
    Status: Patch Available  (was: In Progress)
[jira] [Comment Edited] (CASSANDRA-14106) utest failed: DistributionSequenceTest.setSeed() and simpleSequence()
[ https://issues.apache.org/jira/browse/CASSANDRA-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287226#comment-16287226 ]

Jay Zhuang edited comment on CASSANDRA-14106 at 12/12/17 7:27 AM:
------------------------------------------------------------------

Fixed the failed unittest and added an uTest for CASSANDRA-14090
| Branch | uTest |
| [14106|https://github.com/cooldoger/cassandra/tree/14106] | [!https://circleci.com/gh/cooldoger/cassandra/tree/14106.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/14106] |

was (Author: jay.zhuang):
Fixed unittest and added an uTest for CASSANDRA-14090
| Branch | uTest |
| [14106|https://github.com/cooldoger/cassandra/tree/14106] | [!https://circleci.com/gh/cooldoger/cassandra/tree/14106.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/14106] |
[jira] [Commented] (CASSANDRA-14106) utest failed: DistributionSequenceTest.setSeed() and simpleSequence()
[ https://issues.apache.org/jira/browse/CASSANDRA-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287226#comment-16287226 ]

Jay Zhuang commented on CASSANDRA-14106:
----------------------------------------

Fixed unittest and added an uTest for CASSANDRA-14090
| Branch | uTest |
| [14106|https://github.com/cooldoger/cassandra/tree/14106] | [!https://circleci.com/gh/cooldoger/cassandra/tree/14106.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/14106] |
[jira] [Resolved] (CASSANDRA-10742) Real world DateTieredCompaction tests
[ https://issues.apache.org/jira/browse/CASSANDRA-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcus Eriksson resolved CASSANDRA-10742.
-----------------------------------------
    Resolution: Won't Fix

totally!

> Real world DateTieredCompaction tests
> -------------------------------------
>
>                 Key: CASSANDRA-10742
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10742
>             Project: Cassandra
>          Issue Type: Test
>            Reporter: Marcus Eriksson
>              Labels: dtcs
>
> So, to be able to actually evaluate DTCS (or TWCS) we need stress profiles that are similar to something that could be found in real production systems.
> We should then run these profiles for _weeks_, and do regular operational tasks on the cluster - like bootstrap, decom, repair etc.
> [~jjirsa] [~jshook] (or anyone): could you describe any write/read patterns you have seen people use with DTCS in production?
[jira] [Commented] (CASSANDRA-14106) utest failed: DistributionSequenceTest.setSeed() and simpleSequence()
[ https://issues.apache.org/jira/browse/CASSANDRA-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287164#comment-16287164 ]

Jay Zhuang commented on CASSANDRA-14106:
----------------------------------------

cassandra-stress also fails with exception {{/ by zero}} (debug info added):
{noformat}
$ tools/bin/cassandra-stress user profile=tools/cqlstress-example.yaml 'ops(insert=1)' n=10 cl=ONE no-warmup -rate threads=1
...
java.lang.ArithmeticException: / by zero
    at org.apache.cassandra.stress.generate.PartitionIterator$MultiRowIterator.decompose(PartitionIterator.java:410)
    at org.apache.cassandra.stress.generate.PartitionIterator$MultiRowIterator.setLastRow(PartitionIterator.java:347)
    at org.apache.cassandra.stress.generate.PartitionIterator$MultiRowIterator.reset(PartitionIterator.java:282)
    at org.apache.cassandra.stress.generate.PartitionIterator.reset(PartitionIterator.java:107)
    at org.apache.cassandra.stress.operations.PartitionOperation.reset(PartitionOperation.java:115)
    at org.apache.cassandra.stress.operations.PartitionOperation.ready(PartitionOperation.java:101)
    at org.apache.cassandra.stress.StressAction$StreamOfOperations.nextOp(StressAction.java:352)
    at org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:453)
...
FAILURE
java.lang.RuntimeException: Failed to execute stress action
    at org.apache.cassandra.stress.StressAction.run(StressAction.java:99)
    at org.apache.cassandra.stress.Stress.run(Stress.java:143)
    at org.apache.cassandra.stress.Stress.main(Stress.java:62)
Process finished with exit code 1
{noformat}
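The {{/ by zero}} in {{PartitionIterator$MultiRowIterator.decompose}} suggests a row-count divisor that can legitimately reach zero. A hedged sketch of the guard pattern (illustrative only; the real arithmetic in {{decompose}} may differ, and the method name here is just borrowed for context):

```java
// Hypothetical sketch: splitting totalRows across a number of chunks. A zero
// chunk count reproduces java.lang.ArithmeticException: / by zero from the
// stack trace above, so the guard falls back instead of dividing.
public class DecomposeSketch {
    static long rowsPerChunk(long totalRows, long chunks) {
        if (chunks <= 0)
            return totalRows; // guard: avoid dividing by zero
        return totalRows / chunks;
    }
}
```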
[jira] [Commented] (CASSANDRA-10047) nodetool aborts when attempting to cleanup a keyspace with no ranges
[ https://issues.apache.org/jira/browse/CASSANDRA-10047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287156#comment-16287156 ]

Jeff Jirsa commented on CASSANDRA-10047:
----------------------------------------

I think this is the original of CASSANDRA-13526, but CASSANDRA-13526 had a patch, so I believe this is done. Closing, please re-open if I'm wrong.

> nodetool aborts when attempting to cleanup a keyspace with no ranges
> --------------------------------------------------------------------
>
>                 Key: CASSANDRA-10047
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10047
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: 2.1.8
>            Reporter: Russell Bradberry
>            Priority: Minor
>
> When running nodetool cleanup in a DC that has no ranges for a keyspace, nodetool will abort with the following message when attempting to cleanup that keyspace:
> {code}
> Aborted cleaning up atleast one column family in keyspace ks, check server logs for more information.
> error: nodetool failed, check server logs
> -- StackTrace --
> java.lang.RuntimeException: nodetool failed, check server logs
>     at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290)
>     at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202)
> {code}
> The error messages in the logs are:
> {code}
> CompactionManager.java:370 - Cleanup cannot run before a node has joined the ring
> {code}
> This behavior prevents subsequent keyspaces from getting cleaned up. The error message is also misleading as it suggests that the only reason a node may not have ranges for a keyspace is because it has yet to join the ring.
[jira] [Resolved] (CASSANDRA-10047) nodetool aborts when attempting to cleanup a keyspace with no ranges
[ https://issues.apache.org/jira/browse/CASSANDRA-10047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff Jirsa resolved CASSANDRA-10047.
------------------------------------
       Resolution: Duplicate
    Fix Version/s:     (was: 2.1.x)
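The complaint in CASSANDRA-10047 (and the patched duplicate, CASSANDRA-13526) boils down to "skip keyspaces with no local ranges, don't abort the whole run". A hypothetical sketch of that behavior, with all names invented for illustration rather than taken from the CompactionManager code:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: cleanup that skips keyspaces with zero local ranges
// instead of throwing, so the remaining keyspaces still get cleaned.
public class CleanupSketch {
    static List<String> cleanup(Map<String, Integer> localRangesPerKeyspace) {
        List<String> cleaned = new ArrayList<>();
        for (Map.Entry<String, Integer> e : localRangesPerKeyspace.entrySet()) {
            if (e.getValue() == 0)
                continue; // nothing owned locally: skip rather than abort
            cleaned.add(e.getKey());
        }
        return cleaned;
    }
}
```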
[jira] [Commented] (CASSANDRA-10742) Real world DateTieredCompaction tests
[ https://issues.apache.org/jira/browse/CASSANDRA-10742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287150#comment-16287150 ]

Jeff Jirsa commented on CASSANDRA-10742:
----------------------------------------

[~krummas] can we close?
[jira] [Commented] (CASSANDRA-10759) nodetool upgradesstables does not always complete synchronously
[ https://issues.apache.org/jira/browse/CASSANDRA-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287149#comment-16287149 ]

Jeff Jirsa commented on CASSANDRA-10759:
----------------------------------------

Hi [~jmonserrate] - are you able to give some feedback to help further this jira?

> nodetool upgradesstables does not always complete synchronously
> ---------------------------------------------------------------
>
>                 Key: CASSANDRA-10759
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10759
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>         Environment: Discovered on Apache Cassandra 2.1.8.689
>            Reporter: Jamie
>
> The "nodetool upgradesstables" command does not always complete synchronously. We notice that the command exits with an exit code 0, however, there are still files left behind on an older version that disappear later.
[jira] [Updated] (CASSANDRA-8041) Utility sstablesplit should prevent users from running when C* is running
[ https://issues.apache.org/jira/browse/CASSANDRA-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff Jirsa updated CASSANDRA-8041:
----------------------------------
    Fix Version/s:     (was: 2.1.x)
                   4.x

> Utility sstablesplit should prevent users from running when C* is running
> --------------------------------------------------------------------------
>
>                 Key: CASSANDRA-8041
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8041
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Documentation and Website, Tools
>            Reporter: Erick Ramirez
>            Priority: Minor
>             Fix For: 4.x
>
> The sstablesplit utility is designed for use when C* is offline, but there is nothing stopping the user from running it on a live system. There are also no warning messages alerting the user to this effect.
> The help information should also be updated to explicitly state that the utility should only be used when C* is offline.
> Finally, this utility is not included in any of the documentation. Please update accordingly. Thanks.
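One way an offline tool like sstablesplit could detect a live node is to probe a port the running server would hold and refuse to proceed when something answers. This is only a sketch of that guard; the port choice, method names, and the idea of a socket probe are assumptions for illustration, not how the ticket was (or will be) implemented:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Hypothetical sketch of the guard sstablesplit lacks: probe a port the live
// server would be listening on (e.g. the native transport or JMX port) and
// bail out when it is reachable.
public class OfflineToolGuard {
    static boolean portReachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;  // something is listening: assume the server is live
        } catch (IOException e) {
            return false; // refused or timed out: likely safe to run offline
        }
    }
}
```

A tool entry point could call this first and exit with a warning instead of silently splitting live SSTables.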
[jira] [Commented] (CASSANDRA-10165) Query fails when batch_size_warn_threshold_in_kb is not set on cassandra.yaml
[ https://issues.apache.org/jira/browse/CASSANDRA-10165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287147#comment-16287147 ]

Jeff Jirsa commented on CASSANDRA-10165:
----------------------------------------

{{batch_size_warn_threshold}} is an {{int}} in 3.0+, can we close?

> Query fails when batch_size_warn_threshold_in_kb is not set on cassandra.yaml
> -----------------------------------------------------------------------------
>
>                 Key: CASSANDRA-10165
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10165
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: C* 2.1.5
>            Reporter: Jose Martinez Poblete
>            Priority: Trivial
>              Labels: triaged
>             Fix For: 2.1.x
>
> Jobs failed with the following error:
> {noformat}
> ERROR [SharedPool-Worker-1] 2015-08-21 18:06:42,759 ErrorMessage.java:244 - Unexpected exception during request
> java.lang.NullPointerException: null
>     at org.apache.cassandra.config.DatabaseDescriptor.getBatchSizeWarnThreshold(DatabaseDescriptor.java:855) ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
>     at org.apache.cassandra.cql3.statements.BatchStatement.verifyBatchSize(BatchStatement.java:239) ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
>     at org.apache.cassandra.cql3.statements.BatchStatement.executeWithoutConditions(BatchStatement.java:311) ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
>     at org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:296) ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
>     at org.apache.cassandra.cql3.statements.BatchStatement.execute(BatchStatement.java:282) ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
>     at org.apache.cassandra.cql3.QueryProcessor.processBatch(QueryProcessor.java:503) ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
>     at com.datastax.bdp.cassandra.cql3.DseQueryHandler$BatchStatementExecution.execute(DseQueryHandler.java:327) ~[dse.jar:4.7.0]
>     at com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithTiming(DseQueryHandler.java:223) ~[dse.jar:4.7.0]
>     at com.datastax.bdp.cassandra.cql3.DseQueryHandler$Operation.executeWithAuditLogging(DseQueryHandler.java:259) ~[dse.jar:4.7.0]
>     at com.datastax.bdp.cassandra.cql3.DseQueryHandler.processBatch(DseQueryHandler.java:110) ~[dse.jar:4.7.0]
>     at org.apache.cassandra.transport.messages.BatchMessage.execute(BatchMessage.java:215) ~[cassandra-all-2.1.5.469.jar:2.1.5.469]
>     at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439) [cassandra-all-2.1.5.469.jar:2.1.5.469]
>     at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335) [cassandra-all-2.1.5.469.jar:2.1.5.469]
>     at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final]
>     at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
>     at io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) [netty-all-4.0.23.Final.jar:4.0.23.Final]
>     at io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) [netty-all-4.0.23.Final.jar:4.0.23.Final]
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_75]
>     at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) [cassandra-all-2.1.5.469.jar:2.1.5.469]
>     at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [cassandra-all-2.1.5.469.jar:2.1.5.469]
>     at java.lang.Thread.run(Thread.java:745) [na:1.7.0_75]
> {noformat}
> It turns there was no entry for *batch_size_warn_threshold_in_kb* on cassandra.yaml
> Once we set that parameter on the file, the error went away
> Can we please have C* assume this setting assumes the default without prejudice on the job if it's not specified on the yaml file?
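The NPE above comes from dereferencing a yaml setting that was never set; the reporter's request is to fall back to the documented default instead. A minimal sketch of that pattern, where the 5 KB default is an assumption taken from the setting's shipped yaml comment rather than verified against the 2.1 source:

```java
// Hypothetical sketch: resolve batch_size_warn_threshold_in_kb with a
// fallback instead of NPE-ing when cassandra.yaml omits the key. The 5 KB
// default value is assumed for illustration.
public class ConfigDefaultSketch {
    static final int DEFAULT_BATCH_SIZE_WARN_KB = 5;

    static int batchSizeWarnThresholdKb(Integer configured) {
        // configured is null when the yaml has no entry for the key
        return configured == null ? DEFAULT_BATCH_SIZE_WARN_KB : configured;
    }
}
```

This mirrors Jeff Jirsa's note that in 3.0+ the field is a primitive {{int}}, which cannot be null in the first place.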
[jira] [Updated] (CASSANDRA-9332) NPE when creating column family via thrift
[ https://issues.apache.org/jira/browse/CASSANDRA-9332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff Jirsa updated CASSANDRA-9332:
----------------------------------
    Fix Version/s:     (was: 2.1.x)
                   3.0.x

> NPE when creating column family via thrift
> ------------------------------------------
>
>                 Key: CASSANDRA-9332
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9332
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Oracle JDK 1.7.0_79
>                      Cassandra 2.0.6 in single node
>                      Ubuntu 14.04
>            Reporter: Colin Kuo
>            Assignee: Ryan McGuire
>            Priority: Minor
>              Labels: proposed-cantrepro, thrift
>             Fix For: 3.0.x
>
> When triggering unit test "testAddDropColumnFamily()" in https://github.com/hector-client/hector/blob/master/core/src/test/java/me/prettyprint/cassandra/service/CassandraClusterTest.java
> It occurs NPE when using *Cassandra 2.0.6* or later version.
> {noformat}
> 11:42:39,173 [Thrift:1] ERROR CustomTThreadPoolServer:212 - Error occurred during processing of message.
> java.lang.NullPointerException
>     at org.apache.cassandra.db.RowMutation.add(RowMutation.java:112)
>     at org.apache.cassandra.service.MigrationManager.addSerializedKeyspace(MigrationManager.java:265)
>     at org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:213)
>     at org.apache.cassandra.thrift.CassandraServer.system_add_column_family(CassandraServer.java:1521)
>     at org.apache.cassandra.thrift.Cassandra$Processor$system_add_column_family.getResult(Cassandra.java:4300)
>     at org.apache.cassandra.thrift.Cassandra$Processor$system_add_column_family.getResult(Cassandra.java:4284)
>     at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>     at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>     at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:194)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> {noformat}
> It seems that was introduced by fix of CASSANDRA-5631.
[jira] [Updated] (CASSANDRA-9332) NPE when creating column family via thrift
[ https://issues.apache.org/jira/browse/CASSANDRA-9332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff Jirsa updated CASSANDRA-9332:
----------------------------------
    Labels: proposed-cantrepro thrift  (was: thrift)
[jira] [Comment Edited] (CASSANDRA-9332) NPE when creating column family via thrift
[ https://issues.apache.org/jira/browse/CASSANDRA-9332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287140#comment-16287140 ]

Jeff Jirsa edited comment on CASSANDRA-9332 at 12/12/17 5:26 AM:
-----------------------------------------------------------------

There's been no activity on this for 2 years, and the repro is on (EOL) 2.0. Does anyone believe this exists in 2.1 / 2.2 / 3.0? If not, I propose closing as cant-repro / wontfix.

was (Author: jjirsa):
There's been no activity on this for 2 years, and the repro is on (EOL) 2.0. Does anyone believe this exists in 2.1 / 2.2 / 3.0? If not, I propose closing as wontfix.
[jira] [Updated] (CASSANDRA-10173) Compaction isn't cleaning out tombstones between hint deliveries
[ https://issues.apache.org/jira/browse/CASSANDRA-10173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-10173: --- Labels: proposed-wontfix (was: ) > Compaction isn't cleaning out tombstones between hint deliveries > > > Key: CASSANDRA-10173 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10173 > Project: Cassandra > Issue Type: Bug >Reporter: Jonathan Ellis > Labels: proposed-wontfix > Fix For: 2.2.x > > Attachments: system (3).log > > > 3 node cluster, 100M writes. Same scenario as 10172: > Test Start: 00:00:00 > Node 1 Killed: 00:05:48 > Node 2 Killed: 00:13:33 > Node 1 Started: 00:24:20 > Node 2 Started: 00:32:23 > Test Done: 00:38:33 > Node 1 hints replay finished: 00:56:16 > Node 2 hints replay finished: 01:00:16 > Node 3 hints replay finished: 02:08:00 > Log attached. Note the tombstone_failure_threshold errors. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-10499) ConcurrentModificationException in Background read repair
[ https://issues.apache.org/jira/browse/CASSANDRA-10499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-10499: --- Labels: proposed-cantrepro (was: ) > ConcurrentModificationException in Background read repair > -- > > Key: CASSANDRA-10499 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10499 > Project: Cassandra > Issue Type: Bug >Reporter: sankalp kohli >Priority: Minor > Labels: proposed-cantrepro > > We are seeing the below exception in 2.0.14. While looking at the code, it > looks like it is happening due to ColumnFamily object being modified in SQF. > trim method. > Exception in thread Thread[ReadRepairStage:4441,5,main] > java.util.ConcurrentModificationException > at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) > at java.util.ArrayList$Itr.next(ArrayList.java:831) > at > org.apache.cassandra.db.ColumnFamily.updateDigest(ColumnFamily.java:394) > at org.apache.cassandra.db.ColumnFamily.digest(ColumnFamily.java:388) > at > org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:84) > at > org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:28) > at > org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:173) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
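The failure mode in the stack trace above — a structural modification to an `ArrayList` while an iterator is walking it — can be reproduced outside Cassandra. The sketch below (hypothetical class name, not Cassandra code) is single-threaded for determinism; in the ticket the modification comes from a separate code path (the SQF trim) racing the digest iteration, but it trips the same fail-fast `checkForComodification` seen in the trace:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    // Returns true if mutating the list mid-iteration raises
    // ConcurrentModificationException (the ArrayList fail-fast check).
    static boolean mutateDuringIteration() {
        List<Integer> cells = new ArrayList<>(List.of(1, 2, 3, 4));
        try {
            for (Integer cell : cells) {
                if (cell == 2) {
                    cells.remove(cell); // structural change outside the iterator
                }
            }
        } catch (ConcurrentModificationException e) {
            return true; // same exception ColumnFamily.updateDigest hit
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(mutateDuringIteration()); // → true
    }
}
```

Note the fail-fast check is best-effort: had the removed element been the second-to-last one, the loop would simply have ended early with no exception, which is why this class of bug can be intermittent in production.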
[jira] [Commented] (CASSANDRA-9332) NPE when creating column family via thrift
[ https://issues.apache.org/jira/browse/CASSANDRA-9332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287140#comment-16287140 ] Jeff Jirsa commented on CASSANDRA-9332: --- There's been no activity on this for 2 years, and the repro is on (EOL) 2.0. Does anyone believe this exists in 2.1 / 2.2 / 3.0? If not, I propose closing as wontfix. > NPE when creating column family via thrift > -- > > Key: CASSANDRA-9332 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9332 > Project: Cassandra > Issue Type: Bug > Environment: Oracle JDK 1.7.0_79 > Casandra 2.0.6 in single node > Ubuntu 14.04 >Reporter: Colin Kuo >Assignee: Ryan McGuire >Priority: Minor > Labels: proposed-cantrepro, thrift > Fix For: 3.0.x > > > When triggering unit test "testAddDropColumnFamily()" in > https://github.com/hector-client/hector/blob/master/core/src/test/java/me/prettyprint/cassandra/service/CassandraClusterTest.java > > It occurs NPE when using *Cassandra 2.0.6* or later version. > {noformat} > 11:42:39,173 [Thrift:1] ERROR CustomTThreadPoolServer:212 - Error occurred > during processing of message. 
> java.lang.NullPointerException > at org.apache.cassandra.db.RowMutation.add(RowMutation.java:112) > at > org.apache.cassandra.service.MigrationManager.addSerializedKeyspace(MigrationManager.java:265) > at > org.apache.cassandra.service.MigrationManager.announceNewColumnFamily(MigrationManager.java:213) > at > org.apache.cassandra.thrift.CassandraServer.system_add_column_family(CassandraServer.java:1521) > at > org.apache.cassandra.thrift.Cassandra$Processor$system_add_column_family.getResult(Cassandra.java:4300) > at > org.apache.cassandra.thrift.Cassandra$Processor$system_add_column_family.getResult(Cassandra.java:4284) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at > org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:194) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > {noformat} > It seems that was introduced by fix of CASSANDRA-5631. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-10173) Compaction isn't cleaning out tombstones between hint deliveries
[ https://issues.apache.org/jira/browse/CASSANDRA-10173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287137#comment-16287137 ] Jeff Jirsa commented on CASSANDRA-10173: You mention 3.0 was looking good, and this fixver is set to 2.2. Anyone believe this needs to be fixed still? Propose closing as wontfix. > Compaction isn't cleaning out tombstones between hint deliveries > > > Key: CASSANDRA-10173 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10173 > Project: Cassandra > Issue Type: Bug >Reporter: Jonathan Ellis > Labels: proposed-wontfix > Fix For: 2.2.x > > Attachments: system (3).log > > > 3 node cluster, 100M writes. Same scenario as 10172: > Test Start: 00:00:00 > Node 1 Killed: 00:05:48 > Node 2 Killed: 00:13:33 > Node 1 Started: 00:24:20 > Node 2 Started: 00:32:23 > Test Done: 00:38:33 > Node 1 hints replay finished: 00:56:16 > Node 2 hints replay finished: 01:00:16 > Node 3 hints replay finished: 02:08:00 > Log attached. Note the tombstone_failure_threshold errors. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-10499) ConcurrentModificationException in Background read repair
[ https://issues.apache.org/jira/browse/CASSANDRA-10499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287134#comment-16287134 ] Jeff Jirsa commented on CASSANDRA-10499: This is a 2+ year old 2.0 issue; any reason to believe it's still happening in 2.1/2.2/3.0? If not, I propose closing as can't-reproduce. > ConcurrentModificationException in Background read repair > -- > > Key: CASSANDRA-10499 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10499 > Project: Cassandra > Issue Type: Bug >Reporter: sankalp kohli >Priority: Minor > > We are seeing the below exception in 2.0.14. While looking at the code, it > looks like it is happening due to the ColumnFamily object being modified in > the SQF.trim method. > Exception in thread Thread[ReadRepairStage:4441,5,main] > java.util.ConcurrentModificationException > at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:859) > at java.util.ArrayList$Itr.next(ArrayList.java:831) > at > org.apache.cassandra.db.ColumnFamily.updateDigest(ColumnFamily.java:394) > at org.apache.cassandra.db.ColumnFamily.digest(ColumnFamily.java:388) > at > org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:84) > at > org.apache.cassandra.service.RowDigestResolver.resolve(RowDigestResolver.java:28) > at > org.apache.cassandra.service.ReadCallback$AsyncRepairRunner.run(ReadCallback.java:173) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-9452) Remove configuration of storage-conf from tools
[ https://issues.apache.org/jira/browse/CASSANDRA-9452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-9452: -- Fix Version/s: (was: 2.1.x) 4.x > Remove configuration of storage-conf from tools > --- > > Key: CASSANDRA-9452 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9452 > Project: Cassandra > Issue Type: Task > Components: Configuration, Testing, Tools >Reporter: Mike Adamson >Priority: Minor > Labels: lhf > Fix For: 4.x > > > The following files still making reference to storage-config and/or > storage-conf.xml > * ./build.xml > * ./bin/nodetool > * ./bin/sstablekeys > * ./test/resources/functions/configure_cassandra.sh > * ./test/resources/functions/install_cassandra.sh > * ./tools/bin/json2sstable > * ./tools/bin/sstable2json > * ./tools/bin/sstablelevelreset -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-9452) Remove configuration of storage-conf from tools
[ https://issues.apache.org/jira/browse/CASSANDRA-9452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287124#comment-16287124 ] Jeff Jirsa commented on CASSANDRA-9452: --- Remarkably still true as we enter 2018, now only in {{build.xml}} and {{test/resources/functions/install_cassandra.sh}} . > Remove configuration of storage-conf from tools > --- > > Key: CASSANDRA-9452 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9452 > Project: Cassandra > Issue Type: Task > Components: Configuration, Testing, Tools >Reporter: Mike Adamson >Priority: Minor > Labels: lhf > Fix For: 4.x > > > The following files still making reference to storage-config and/or > storage-conf.xml > * ./build.xml > * ./bin/nodetool > * ./bin/sstablekeys > * ./test/resources/functions/configure_cassandra.sh > * ./test/resources/functions/install_cassandra.sh > * ./tools/bin/json2sstable > * ./tools/bin/sstable2json > * ./tools/bin/sstablelevelreset -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-9452) Remove configuration of storage-conf from tools
[ https://issues.apache.org/jira/browse/CASSANDRA-9452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-9452: -- Labels: lhf (was: ) > Remove configuration of storage-conf from tools > --- > > Key: CASSANDRA-9452 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9452 > Project: Cassandra > Issue Type: Task > Components: Configuration, Testing, Tools >Reporter: Mike Adamson >Priority: Minor > Labels: lhf > Fix For: 4.x > > > The following files still making reference to storage-config and/or > storage-conf.xml > * ./build.xml > * ./bin/nodetool > * ./bin/sstablekeys > * ./test/resources/functions/configure_cassandra.sh > * ./test/resources/functions/install_cassandra.sh > * ./tools/bin/json2sstable > * ./tools/bin/sstable2json > * ./tools/bin/sstablelevelreset -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-9420) Table option for promising that you will never touch a column twice
[ https://issues.apache.org/jira/browse/CASSANDRA-9420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287120#comment-16287120 ] Jeff Jirsa commented on CASSANDRA-9420: --- Linking to 9779, the 'append only' table optimization ticket. > Table option for promising that you will never touch a column twice > --- > > Key: CASSANDRA-9420 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9420 > Project: Cassandra > Issue Type: New Feature >Reporter: Björn Hegerfors > > There are time series use cases where you write all values with various TTLs, > have GC grace = 0 and never ever update or delete a column after insertion. > In the case where all TTLs are the same, DTCS with recent patches works > great. But when there is lots of variations in TTLs, you are forced to choose > between splitting your table into multiple TTL tiers or having your SSTables > filled to the majority with tombstones. Or running frequent major compactions. > The problem stems from the fact that Cassandra plays safe when a TTL has > expired, and turns it into a tombstone, rather than getting rid of it on the > spot. The reason is that this TTL _may_ have been in a column which has had > an earlier write without (or with a higher) TTL. And then that one should now > be deleted too. > I propose that there should be table level setting to say "I guarantee that > there will never be any updates to any columns". The effect of enabling that > option is that all tombstones and expired TTLs should always be immediately > removed during compaction. And the check for dropping entirely expired > SSTables can be very loosened for these tables. > This option should probably require gc_grace_seconds to be set to zero. It's > also questionable if writes without TTL should be allowed to such a table, > since those would become constants. 
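The rule proposed in CASSANDRA-9420 can be summarized as a small decision function. The sketch below is a hypothetical illustration of the proposal (all names invented here, not Cassandra's actual compaction code): without the "never touch a column twice" guarantee, an expired TTL cell must survive as a tombstone because it may shadow an earlier write of the same column; with the guarantee (and gc_grace_seconds = 0, as the ticket suggests requiring), compaction could drop it on the spot.

```java
// Hypothetical sketch of the purge rule proposed in CASSANDRA-9420.
public class PurgeRule {
    /**
     * Decide whether a cell whose TTL has expired can be removed outright
     * during compaction, or must be converted into a tombstone and retained.
     */
    static boolean canDropExpiredCellImmediately(boolean appendOnlyGuarantee,
                                                 int gcGraceSeconds) {
        // Without the append-only guarantee, the expired cell may shadow an
        // earlier write of the same column, so it must live on as a tombstone.
        return appendOnlyGuarantee && gcGraceSeconds == 0;
    }

    public static void main(String[] args) {
        System.out.println(canDropExpiredCellImmediately(true, 0));  // → true
        System.out.println(canDropExpiredCellImmediately(false, 0)); // → false
    }
}
```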
[jira] [Updated] (CASSANDRA-7057) Add ability to adjust number of vnodes on a node
[ https://issues.apache.org/jira/browse/CASSANDRA-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-7057: -- Labels: proposed-wontfix (was: ) > Add ability to adjust number of vnodes on a node > > > Key: CASSANDRA-7057 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7057 > Project: Cassandra > Issue Type: Improvement > Components: Configuration >Reporter: Michael Shuler >Priority: Minor > Labels: proposed-wontfix > > Currently, once a vnode server is configured with a num_tokens value, there > is no defined process to to change the number of vnodes a node is responsible > for. > This could be useful for load adjustments when upgrading hardware in a > server, or adjusting the number of vnodes to affect node data balance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-6878) Test different setra values in EC2 to find the best performance
[ https://issues.apache.org/jira/browse/CASSANDRA-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287094#comment-16287094 ] Jeff Jirsa commented on CASSANDRA-6878: --- This is going to end up being very data and hardware dependent - what do we do with the recommendation? We can't build it into a script, because we don't know what data looks like. Do you feel strongly that this needs to happen? It's been 3 years and no movement, I propose we won't-fix it unless you really believe it's useful. > Test different setra values in EC2 to find the best performance > --- > > Key: CASSANDRA-6878 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6878 > Project: Cassandra > Issue Type: Test > Components: Testing >Reporter: Joaquin Casares >Priority: Minor > > Tests should be run against: > * Ephemeral devices > * RAID0 ephemeral devices > * SSD devices > * RAID0 SSD devices > The current recommendation is: > {CODE} > sudo blockdev --setra 128 /dev/ > {CODE} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
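For context on the recommendation quoted above: `blockdev --setra` takes a count of 512-byte sectors, so `--setra 128` corresponds to 64 KiB of readahead, which Linux exposes via sysfs as `/sys/block/<dev>/queue/read_ahead_kb`. The helper below is a hypothetical sketch for inspecting that value; the device name in main() is an assumption and the path is only checked if present:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadAhead {
    /** Parse a sysfs read_ahead_kb file (a single integer, in KiB). */
    static int readAheadKb(Path readAheadFile) throws IOException {
        return Integer.parseInt(Files.readString(readAheadFile).trim());
    }

    public static void main(String[] args) throws IOException {
        // Device name "sda" is an assumption; adjust for the disk under test.
        // blockdev --setra 128 (512-byte sectors) should show up here as 64.
        Path p = Path.of("/sys/block/sda/queue/read_ahead_kb");
        if (Files.exists(p)) {
            System.out.println(readAheadKb(p));
        }
    }
}
```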
[jira] [Commented] (CASSANDRA-7057) Add ability to adjust number of vnodes on a node
[ https://issues.apache.org/jira/browse/CASSANDRA-7057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287096#comment-16287096 ] Jeff Jirsa commented on CASSANDRA-7057: --- {{nodetool taketoken}} is removed, anyone feel strongly that this needs to exist, or is it time to wontfix this? > Add ability to adjust number of vnodes on a node > > > Key: CASSANDRA-7057 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7057 > Project: Cassandra > Issue Type: Improvement > Components: Configuration >Reporter: Michael Shuler >Priority: Minor > Labels: proposed-wontfix > > Currently, once a vnode server is configured with a num_tokens value, there > is no defined process to to change the number of vnodes a node is responsible > for. > This could be useful for load adjustments when upgrading hardware in a > server, or adjusting the number of vnodes to affect node data balance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-6758) Measure data consistency in the cluster
[ https://issues.apache.org/jira/browse/CASSANDRA-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-6758: -- Labels: proposed-wontfix (was: ) > Measure data consistency in the cluster > --- > > Key: CASSANDRA-6758 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6758 > Project: Cassandra > Issue Type: New Feature >Reporter: Jimmy Mårdell >Priority: Minor > Labels: proposed-wontfix > > Running multi-DC Cassandra can be a challenge as the cluster easily tends to > get out-of-sync. We have been thinking it would be nice to measure how out of > sync a cluster is and expose those metrics somehow. > One idea would be to just run the first half of the repair process and output > the result of the differencer. If you use Random or the Murmur3 partitioner, > it should be enough to calculate the merkle tree over a small subset of the > ring as the result can be extrapolated. > This could be exposed in nodetool. Either a separate command or perhaps a > dry-run flag to repair? > Not sure about the output format. I think it would be nice to have one value > ("% consistent"?) within a DC, and also one value for every pair of DC's > perhaps? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-5108) expose overall progress of cleanup tasks in jmx
[ https://issues.apache.org/jira/browse/CASSANDRA-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-5108: -- Component/s: Compaction > expose overall progress of cleanup tasks in jmx > --- > > Key: CASSANDRA-5108 > URL: https://issues.apache.org/jira/browse/CASSANDRA-5108 > Project: Cassandra > Issue Type: New Feature > Components: Compaction >Affects Versions: 1.2.0 >Reporter: Michael Kjellman >Priority: Minor > Labels: lhf > Fix For: 4.x > > > it would be nice if, upon starting a cleanup operation, cassandra could > maintain a Set (i assume this already exists as we have to know which file to > act on next) and a new set of "completed" sstables. When each is compacted > remove it from the pending list. That way C* could give an overall completion > of the long running and pending cleanup tasks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-5108) expose overall progress of cleanup tasks in jmx
[ https://issues.apache.org/jira/browse/CASSANDRA-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-5108: -- Fix Version/s: 4.x > expose overall progress of cleanup tasks in jmx > --- > > Key: CASSANDRA-5108 > URL: https://issues.apache.org/jira/browse/CASSANDRA-5108 > Project: Cassandra > Issue Type: New Feature > Components: Compaction >Affects Versions: 1.2.0 >Reporter: Michael Kjellman >Priority: Minor > Labels: lhf > Fix For: 4.x > > > it would be nice if, upon starting a cleanup operation, cassandra could > maintain a Set (i assume this already exists as we have to know which file to > act on next) and a new set of "completed" sstables. When each is compacted > remove it from the pending list. That way C* could give an overall completion > of the long running and pending cleanup tasks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Resolved] (CASSANDRA-12749) Update a specific property of a UDT in list, from a table.
[ https://issues.apache.org/jira/browse/CASSANDRA-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa resolved CASSANDRA-12749. Resolution: Duplicate Closing as a dupe of CASSANDRA-7826 > Update a specific property of a UDT in list, from a table. > --- > > Key: CASSANDRA-12749 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12749 > Project: Cassandra > Issue Type: Wish > Components: CQL > Environment: Development >Reporter: Rajashekhar Sheela >Priority: Critical > Fix For: 3.0.6 > > > I have a table with set as following... > CREATE TABLE IF NOT EXISTS CustomTemplate ( > name text, > templateId uuid, > serviceId uuid, > tenants set, > templateXml text, > xpath text, > parameters list, > PRIMARY KEY (templateId) > ); > CREATE TYPE IF NOT EXISTS TemplateParameter ( > name text, > label text, > type text, > displayType text, > allowedValues list > ); > Sample Data: > cqlsh:skyfall_customtemplate> select * from customtemplate ; > templateid | name| parameters > | > serviceid| templatexml | tenants | xpath > --+-+--+--+-+-+ > afd01de6-bba9-4417-ab79-6851077f2f84 | testMyTemplate2 | [{name: 'X_PARAM', > label: null, type: 'String', displaytype: null, allowedvalues: null}] | > 82d565cb-d286-4523-a377-add72af9b23f | xml |null | /xpath > Requirement is: > > Update "displayType:" of the TemplateParameter whose name='X_PARAM' and > templateId=afd01de6-bba9-4417-ab79-6851077f2f84. > Not able to do this, please let know, how it can be done, if it is already > possible. > Thanks. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Resolved] (CASSANDRA-7377) Should be an option to fail startup if corrupt SSTable found
[ https://issues.apache.org/jira/browse/CASSANDRA-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Richard Low resolved CASSANDRA-7377. Resolution: Duplicate > Should be an option to fail startup if corrupt SSTable found > > > Key: CASSANDRA-7377 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7377 > Project: Cassandra > Issue Type: Improvement >Reporter: Richard Low > Labels: proposed-wontfix > > We had a server that crashed and when it came back, some SSTables were > corrupted. Cassandra happily started, but we then realised the corrupt > SSTable contained some tombstones and a few keys were resurrected. This means > corruption on a single replica can bring back data even if you run repairs at > least every gc_grace. > There should be an option, probably controlled by the disk failure policy, to > catch this and stop node startup. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-7377) Should be an option to fail startup if corrupt SSTable found
[ https://issues.apache.org/jira/browse/CASSANDRA-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287100#comment-16287100 ] Richard Low commented on CASSANDRA-7377: SGTM > Should be an option to fail startup if corrupt SSTable found > > > Key: CASSANDRA-7377 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7377 > Project: Cassandra > Issue Type: Improvement >Reporter: Richard Low > Labels: proposed-wontfix > > We had a server that crashed and when it came back, some SSTables were > corrupted. Cassandra happily started, but we then realised the corrupt > SSTable contained some tombstones and a few keys were resurrected. This means > corruption on a single replica can bring back data even if you run repairs at > least every gc_grace. > There should be an option, probably controlled by the disk failure policy, to > catch this and stop node startup. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-10937) OOM on multiple nodes on write load (v. 3.0.0), problem also present on DSE-4.8.3, but there it survives more time
[ https://issues.apache.org/jira/browse/CASSANDRA-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287077#comment-16287077 ] Jeff Jirsa commented on CASSANDRA-10937: There doesn't seem to be much here that points to a concrete Cassandra bug. Do you have any more info to reproduce? 3.0.0 was definitely an early release, but without a concrete bug identified, I propose we close this as unable to reproduce. > OOM on multiple nodes on write load (v. 3.0.0), problem also present on > DSE-4.8.3, but there it survives more time > -- > > Key: CASSANDRA-10937 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10937 > Project: Cassandra > Issue Type: Bug > Environment: Cassandra : 3.0.0 > Installed as open archive, no connection to any OS specific installer. > Java: > Java(TM) SE Runtime Environment (build 1.8.0_65-b17) > OS : > Linux version 2.6.32-431.el6.x86_64 > (mockbu...@x86-023.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red > Hat 4.4.7-4) (GCC) ) #1 SMP Sun Nov 10 22:19:54 EST 2013 > We have: > 8 guests ( Linux OS as above) on 2 (VMWare managed) physical hosts. Each > physical host keeps 4 guests. > Physical host parameters(shared by all 4 guests): > Model: HP ProLiant DL380 Gen9 > Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz > 46 logical processors. > Hyperthreading - enabled > Each guest assigned to have: > 1 disk 300 Gb for seq. log (NOT SSD) > 1 disk 4T for data (NOT SSD) > 11 CPU cores > Disks are local, not shared. > Memory on each host - 24 Gb total. > 8 (or 6, tested both) Gb - cassandra heap > (lshw and cpuinfo attached in file test2.rar) >Reporter: Peter Kovgan >Priority: Critical > Labels: proposed-wontfix > Attachments: cassandra-to-jack-krupansky.docx, gc-stat.txt, > more-logs.rar, some-heap-stats.rar, test2.rar, test3.rar, test4.rar, > test5.rar, test_2.1.rar, test_2.1_logs_older.rar, > test_2.1_restart_attempt_log.rar > > > 8 cassandra nodes. 
> Load test started with 4 clients(different and not equal machines), each > running 1000 threads. > Each thread assigned in round-robin way to run one of 4 different inserts. > Consistency->ONE. > I attach the full CQL schema of tables and the query of insert. > Replication factor - 2: > create keyspace OBLREPOSITORY_NY with replication = > {'class':'NetworkTopologyStrategy','NY':2}; > Initiall throughput is: > 215.000 inserts /sec > or > 54Mb/sec, considering single insert size a bit larger than 256byte. > Data: > all fields(5-6) are short strings, except one is BLOB of 256 bytes. > After about a 2-3 hours of work, I was forced to increase timeout from 2000 > to 5000ms, for some requests failed for short timeout. > Later on(after aprox. 12 hous of work) OOM happens on multiple nodes. > (all failed nodes logs attached) > I attach also java load client and instructions how set-up and use > it.(test2.rar) > Update: > Later on test repeated with lesser load (10 mes/sec) with more relaxed > CPU (idle 25%), with only 2 test clients, but anyway test failed. > Update: > DSE-4.8.3 also failed on OOM (3 nodes from 8), but here it survived 48 hours, > not 10-12. > Attachments: > test2.rar -contains most of material > more-logs.rar - contains additional nodes logs -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-12857) Upgrade procedure between 2.1.x and 3.0.x is broken
[ https://issues.apache.org/jira/browse/CASSANDRA-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-12857: --- Labels: proposed-wontfix (was: ) > Upgrade procedure between 2.1.x and 3.0.x is broken > --- > > Key: CASSANDRA-12857 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12857 > Project: Cassandra > Issue Type: Bug >Reporter: Alexander Yasnogor >Priority: Critical > Labels: proposed-wontfix > Attachments: cassandra.schema > > > It is not possible safely to do Cassandra in place upgrade from 2.1.14 to > 3.0.9. > Distribution: deb packages from datastax community repo. > The upgrade was performed according to procedure from this docu: > https://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgrdCassandraDetails.html > Potential reason: The upgrade procedure creates corrupted system_schema and > this keyspace get populated in the cluster and kills it. > We started with one datacenter which contains 19 nodes divided to two racks. > First rack was successfully upgraded and nodetool describecluster reported > two schema versions. One for upgraded nodes, another for non-upgraded nodes. 
> On starting new version on a first node from the second rack: > {code:java} > INFO [main] 2016-10-25 13:06:12,103 LegacySchemaMigrator.java:87 - Moving 11 > keyspaces from legacy schema tables to the new schema keyspace (system_schema) > INFO [main] 2016-10-25 13:06:12,104 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@7505e6ac > INFO [main] 2016-10-25 13:06:12,200 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@64414574 > INFO [main] 2016-10-25 13:06:12,204 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@3f2c5f45 > INFO [main] 2016-10-25 13:06:12,207 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@2bc2d64d > INFO [main] 2016-10-25 13:06:12,301 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@77343846 > INFO [main] 2016-10-25 13:06:12,305 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@19b0b931 > INFO [main] 2016-10-25 13:06:12,308 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@44bb0b35 > INFO [main] 2016-10-25 13:06:12,311 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@79f6cd51 > INFO [main] 2016-10-25 13:06:12,319 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@2fcd363b > INFO [main] 2016-10-25 13:06:12,356 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@609eead6 > INFO [main] 2016-10-25 13:06:12,358 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@7eb7f5d0 > 
INFO [main] 2016-10-25 13:06:13,958 LegacySchemaMigrator.java:97 - > Truncating legacy schema tables > INFO [main] 2016-10-25 13:06:26,474 LegacySchemaMigrator.java:103 - > Completed migration of legacy schema tables > INFO [main] 2016-10-25 13:06:26,474 StorageService.java:521 - Populating > token metadata from system tables > INFO [main] 2016-10-25 13:06:26,796 StorageService.java:528 - Token > metadata: Normal Tokens: [HUGE LIST of tokens] > INFO [main] 2016-10-25 13:06:29,066 ColumnFamilyStore.java:389 - > Initializing ... > INFO [main] 2016-10-25 13:06:29,066 ColumnFamilyStore.java:389 - > Initializing ... > INFO [main] 2016-10-25 13:06:45,894 AutoSavingCache.java:165 - Completed > loading (2 ms; 460 keys) KeyCache cache > INFO [main] 2016-10-25 13:06:46,982 StorageService.java:521 - Populating > token metadata from system tables > INFO [main] 2016-10-25 13:06:47,394 StorageService.java:528 - Token > metadata: Normal Tokens:[HUGE LIST of tokens] > INFO [main] 2016-10-25 13:06:47,420 LegacyHintsMigrator.java:88 - Migrating > legacy hints to new storage > INFO [main] 2016-10-25 13:06:47,420 LegacyHintsMigrator.java:91 - Forcing a > major compaction of system.hints table > INFO [main] 2016-10-25 13:06:50,587 LegacyHintsMigrator.java:95 - Writing > legacy hints to the new storage > INFO [main] 2016-10-25 13:06:53,927 LegacyHintsMigrator.java:99 - Truncating > system.hints table > > INFO [main] 2016-10-25 13:06:56,572 MigrationManager.java:342 - Create new > table: >
[jira] [Updated] (CASSANDRA-12978) mx4j -> HTTP 500 -> ConcurrentModificationException
[ https://issues.apache.org/jira/browse/CASSANDRA-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-12978: --- Labels: proposed-wontfix (was: ) > mx4j -> HTTP 500 -> ConcurrentModificationException > --- > > Key: CASSANDRA-12978 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12978 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: Debian, Single cluster, 2 data centres, E5-2620 v3, > 16GB, RAID1 SSD Commit log, RAID10 15k HDD data >Reporter: Rob Emery >Priority: Critical > Labels: proposed-wontfix > Fix For: 2.1.6 > > > We run some checks from our Monitoring software that rely on mx4j. > The checks typically grab some xml via HTTP request and parse it. For > example, CF Stats on 'MyKeySpace' and 'MyColumnFamily' are retrieved > using: > http://cassandra001:8081/mbean?template=identity=org.apache.cassandra.db%3Atype%3DColumnFamilies%2Ckeyspace%3DMyKeySpace%2Ccolumnfamily%3DMyColumnFamily > The checks run each minute. Periodically they result in a "HTTP 500 internal > server error". The HTML body returned is empty. > Experimentally we ran Cassandra in the foreground on one node and reproduced > the problem. 
This elicited the following stack trace: > javax.management.RuntimeMBeanException: > java.util.ConcurrentModificationException > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651) > at > com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678) > at > mx4j.tools.adaptor.http.MBeanCommandProcessor.createMBeanElement(MBeanCommandProcessor.java:119) > at > mx4j.tools.adaptor.http.MBeanCommandProcessor.executeRequest(MBeanCommandProcessor.java:56) > at > mx4j.tools.adaptor.http.HttpAdaptor$HttpClient.run(HttpAdaptor.java:980) > Caused by: java.util.ConcurrentModificationException > at > java.util.TreeMap$NavigableSubMap$SubMapIterator.nextEntry(TreeMap.java:1594) > at > java.util.TreeMap$NavigableSubMap$SubMapEntryIterator.next(TreeMap.java:1642) > at > java.util.TreeMap$NavigableSubMap$SubMapEntryIterator.next(TreeMap.java:1636) > at java.util.AbstractMap$2$1.next(AbstractMap.java:385) > at > org.apache.cassandra.utils.StreamingHistogram.sum(StreamingHistogram.java:160) > at > org.apache.cassandra.io.sstable.metadata.StatsMetadata.getDroppableTombstonesBefore(StatsMetadata.java:113) > at > org.apache.cassandra.io.sstable.SSTableReader.getDroppableTombstonesBefore(SSTableReader.java:2004) > at > org.apache.cassandra.db.DataTracker.getDroppableTombstoneRatio(DataTracker.java:507) > at > org.apache.cassandra.db.ColumnFamilyStore.getDroppableTombstoneRatio(ColumnFamilyStore.java:3089) > at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75) > at 
sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279) > at > com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112) > at > com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46) > at > com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) > at > com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83) > at > com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647) > ... 4 more -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
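The root cause in the trace above is generic Java behavior: TreeMap iterators are fail-fast, so reading the map (here, StreamingHistogram's bin map inside sum()) while another thread mutates it throws ConcurrentModificationException. A minimal single-threaded sketch of the same failure mode (class and variable names are illustrative, not Cassandra code):

```java
import java.util.ConcurrentModificationException;
import java.util.Map;
import java.util.TreeMap;

// Demonstrates the fail-fast behavior behind the stack trace: structurally
// modifying a TreeMap while iterating its entrySet makes the next call to
// the iterator throw ConcurrentModificationException. In the real bug the
// modification comes from a concurrent writer rather than the loop body.
public class CmeDemo {
    public static boolean iterateWhileMutating() {
        Map<Double, Long> bins = new TreeMap<>();
        bins.put(1.0, 10L);
        bins.put(2.0, 20L);
        bins.put(3.0, 30L);
        try {
            for (Map.Entry<Double, Long> e : bins.entrySet()) {
                // Simulates a writer updating the histogram mid-iteration:
                // inserting a new key is a structural modification.
                bins.put(4.0, 40L);
            }
            return false; // no exception -- not expected for TreeMap
        } catch (ConcurrentModificationException expected) {
            return true;  // the fail-fast iterator detected the change
        }
    }

    public static void main(String[] args) {
        System.out.println("CME raised: " + iterateWhileMutating());
    }
}
```

In the reported bug the writer is a compaction thread updating sstable metadata while an mx4j HTTP request reads it, so the race window is small and the HTTP 500s appear only periodically.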
[jira] [Commented] (CASSANDRA-13093) 2.2.8 Node goes down with MUTATION messages were dropped in last 5000 ms: 29 for internal timeout
[ https://issues.apache.org/jira/browse/CASSANDRA-13093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287075#comment-16287075 ] Jeff Jirsa commented on CASSANDRA-13093: There doesn't seem to be much here that points to a concrete Cassandra bug. Do you have any more info to reproduce? 2.2 is in maintenance mode at this point, so without a concrete bug identified, I propose we close this as unable to reproduce. > 2.2.8 Node goes down with MUTATION messages were dropped in last 5000 ms: 29 > for internal timeout > - > > Key: CASSANDRA-13093 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13093 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: 2.2.8 Cassandra on 4 Nodes with Red Hat Linux 6.2 64 Bit >Reporter: sutanu das >Priority: Critical > Labels: proposed-wontfix > > Issue: 1st Node of 4 Node in Cluster keeps aborting (jvm crashing) with > following messages: > - ReadTimeoutException: Operation timed out - received only 0 responses > - MUTATION messages were dropped in last 5000 ms: 29 for internal timeout and > 0 for cross node timeout > - Spark Jobs getting Q'd up when opening Channels, followed up Read Time Outs: > ERROR [SharedPool-Worker-207] 2017-01-03 16:39:00,493 Message.java:611 > - Unexpected exception during request; channel = [id: 0xd0b0d36d, > /216.12.229.180:41896 :> /172.17.30.47:9042] > java.lang.RuntimeException: > org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - > received only 0 responses. > > What has been done so far? 
> - Host Reboot node 01 > - Multiple C* restarts > - Increased read_request_timeout_in_ms from 1 to 5 > - Increased request_timeout_in_ms from 1 to 5 > - Changed following: > concurrent_reads: 128 > concurrent_writes: 128 > concurrent_counter_writes: 128 > - Upgrade to 2.2.8 - All Nodes Sync with 2.2.8 > - All nodes have same Pass Auth Scheme (Node 03 was a mis-match and was > fixed) > - authenticator: org.apache.cassandra.auth.PasswordAuthenticator > - authorizer: org.apache.cassandra.auth.CassandraAuthorizer > Full exception stack: > DEBUG [SharedPool-Worker-10] 2017-01-03 16:32:43,983 StorageProxy.java:1898 - > Range slice timeout; received 0 of 1 responses for range 1 of 1 > INFO [Service Thread] 2017-01-03 16:32:43,983 GCInspector.java:284 - ParNew > GC in 247ms. CMS Old Gen: 3768220776 -> 3996971216; Par Eden Space: > 1718091776 -> 0; > INFO [Service Thread] 2017-01-03 16:32:43,983 StatusLogger.java:52 - Pool > NameActive Pending Completed Blocked All Time > Blocked > DEBUG [SharedPool-Worker-26] 2017-01-03 16:32:43,984 > FileCacheService.java:102 - Evicting cold readers for > /cassandra/data/system_auth/roles-5bc52802de2535edaeab188eecebb090/la-51-big-Data.db > DEBUG [SharedPool-Worker-28] 2017-01-03 16:32:43,986 > AbstractQueryPager.java:89 - Got empty set of rows, considering pager > exhausted > INFO [ScheduledTasks:1] 2017-01-03 16:39:00,473 MessagingService.java:946 - > RANGE_SLICE messages were dropped in last 5000 ms: 2 for internal timeout and > 0 for cross node timeout > INFO [Service Thread] 2017-01-03 16:39:00,476 StatusLogger.java:106 - > sales.airwave_dwell_time_det_hr 0,0 > ERROR [SharedPool-Worker-207] 2017-01-03 16:39:00,493 Message.java:611 - > Unexpected exception during request; channel = [id: 0xd0b0d36d, > /216.12.229.180:41896 :> /172.17.30.47:9042] > java.lang.RuntimeException: > org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - > received only 0 responses. 
> at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:497) > ~[apache-cassandra-2.2.8.jar:2.2.8] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:306) > ~[apache-cassandra-2.2.8.jar:2.2.8] > at > org.apache.cassandra.service.ClientState.login(ClientState.java:269) > ~[apache-cassandra-2.2.8.jar:2.2.8] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79) > ~[apache-cassandra-2.2.8.jar:2.2.8] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [apache-cassandra-2.2.8.jar:2.2.8] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [apache-cassandra-2.2.8.jar:2.2.8] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.23.Final.jar:4.0.23.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) >
[jira] [Commented] (CASSANDRA-7377) Should be an option to fail startup if corrupt SSTable found
[ https://issues.apache.org/jira/browse/CASSANDRA-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287097#comment-16287097 ] Jeff Jirsa commented on CASSANDRA-7377: --- [~rlow] - you ok with calling this a dupe of CASSANDRA-13620 and closing? > Should be an option to fail startup if corrupt SSTable found > > > Key: CASSANDRA-7377 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7377 > Project: Cassandra > Issue Type: Improvement >Reporter: Richard Low > > We had a server that crashed and when it came back, some SSTables were > corrupted. Cassandra happily started, but we then realised the corrupt > SSTable contained some tombstones and a few keys were resurrected. This means > corruption on a single replica can bring back data even if you run repairs at > least every gc_grace. > There should be an option, probably controlled by the disk failure policy, to > catch this and stop node startup.
[jira] [Updated] (CASSANDRA-7377) Should be an option to fail startup if corrupt SSTable found
[ https://issues.apache.org/jira/browse/CASSANDRA-7377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-7377: -- Labels: proposed-wontfix (was: ) > Should be an option to fail startup if corrupt SSTable found > > > Key: CASSANDRA-7377 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7377 > Project: Cassandra > Issue Type: Improvement >Reporter: Richard Low > Labels: proposed-wontfix > > We had a server that crashed and when it came back, some SSTables were > corrupted. Cassandra happily started, but we then realised the corrupt > SSTable contained some tombstones and a few keys were resurrected. This means > corruption on a single replica can bring back data even if you run repairs at > least every gc_grace. > There should be an option, probably controlled by the disk failure policy, to > catch this and stop node startup.
[jira] [Updated] (CASSANDRA-12418) sstabledump JSON fails after row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mck updated CASSANDRA-12418: Fix Version/s: (was: 3.0.x) 3.0.9 3.11.1 > sstabledump JSON fails after row tombstone > -- > > Key: CASSANDRA-12418 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12418 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Keith Wansbrough >Assignee: Dave Brosius > Fix For: 3.0.9, 3.11.1 > > > sstabledump fails in JSON generation on an sstable containing a row deletion, > using Cassandra 3.10-SNAPSHOT accf7a4724e244d6f1ba921cb11d2554dbb54a76 from > 2016-07-26. > There are two exceptions displayed: > * Fatal error parsing partition: aye > org.codehaus.jackson.JsonGenerationException: Can not start an object, > expecting field name > * org.codehaus.jackson.JsonGenerationException: Current context not an ARRAY > but OBJECT > Steps to reproduce: > {code} > cqlsh> create KEYSPACE foo WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': 1}; > cqlsh> create TABLE foo.bar (id text, str text, primary key (id)); > cqlsh> insert into foo.bar (id, str) values ('aye', 'alpha'); > cqlsh> insert into foo.bar (id, str) values ('bee', 'beta'); > cqlsh> delete from foo.bar where id = 'bee'; > cqlsh> insert into foo.bar (id, str) values ('bee', 'beth'); > cqlsh> select * from foo.bar; > id | str > -+--- > bee | beth > aye | alpha > (2 rows) > cqlsh> > {code} > Now find the sstable: > {code} > $ cassandra/bin/nodetool flush > $ cassandra/bin/sstableutil foo bar > [..] > Listing files... > [..] > /home/kw217/cassandra/data/data/foo/bar-407c56f05e1a11e6835def64bf5c656e/mb-1-big-Data.db > [..] > {code} > Now check with sstabledump \-d. This works just fine. 
> {code} > $ cassandra/tools/bin/sstabledump -d > /home/kw217/cassandra/data/data/foo/bar-407c56f05e1a11e6835def64bf5c656e/mb-1-big-Data.db > [bee]@0 deletedAt=1470737827008101, localDeletion=1470737827 > [bee]@0 Row[info=[ts=1470737832405510] ]: | [str=beth ts=1470737832405510] > [aye]@31 Row[info=[ts=1470737784401778] ]: | [str=alpha ts=1470737784401778] > {code} > Now run sstabledump. This should work as well, but it fails as follows: > {code} > $ cassandra/tools/bin/sstabledump > /home/kw217/cassandra/data/data/foo/bar-407c56f05e1a11e6835def64bf5c656e/mb-1-big-Data.db > ERROR 10:26:07 Fatal error parsing partition: aye > org.codehaus.jackson.JsonGenerationException: Can not start an object, > expecting field name > at > org.codehaus.jackson.impl.JsonGeneratorBase._reportError(JsonGeneratorBase.java:480) > ~[jackson-core-asl-1.9.2.jar:1.9.2] > at > org.codehaus.jackson.impl.WriterBasedGenerator._verifyValueWrite(WriterBasedGenerator.java:836) > ~[jackson-core-asl-1.9.2.jar:1.9.2] > at > org.codehaus.jackson.impl.WriterBasedGenerator.writeStartObject(WriterBasedGenerator.java:273) > ~[jackson-core-asl-1.9.2.jar:1.9.2] > at > org.apache.cassandra.tools.JsonTransformer.serializePartition(JsonTransformer.java:181) > ~[main/:na] > at > java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) > ~[na:1.8.0_77] > at > java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175) > ~[na:1.8.0_77] > at java.util.Iterator.forEachRemaining(Iterator.java:116) ~[na:1.8.0_77] > at > java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801) > ~[na:1.8.0_77] > at > java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) > ~[na:1.8.0_77] > at > java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) > ~[na:1.8.0_77] > at > java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) > ~[na:1.8.0_77] > at > 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) > ~[na:1.8.0_77] > at > java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) > ~[na:1.8.0_77] > at > java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) > ~[na:1.8.0_77] > at > org.apache.cassandra.tools.JsonTransformer.toJson(JsonTransformer.java:99) > ~[main/:na] > at > org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:237) > ~[main/:na] > [ > { > "partition" : { > "key" : [ "bee" ], > "position" : 0, > "deletion_info" : { "marked_deleted" : "2016-08-09T10:17:07.008101Z", > "local_delete_time" : "2016-08-09T10:17:07Z" } > } > } > ]org.codehaus.jackson.JsonGenerationException: Current context not an ARRAY > but OBJECT > at > org.codehaus.jackson.impl.JsonGeneratorBase._reportError(JsonGeneratorBase.java:480) > at >
[jira] [Updated] (CASSANDRA-6878) Test different setra values in EC2 to find the best performance
[ https://issues.apache.org/jira/browse/CASSANDRA-6878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-6878: -- Labels: proposed-wontfix (was: ) > Test different setra values in EC2 to find the best performance > --- > > Key: CASSANDRA-6878 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6878 > Project: Cassandra > Issue Type: Test > Components: Testing >Reporter: Joaquin Casares >Priority: Minor > Labels: proposed-wontfix > > Tests should be run against: > * Ephemeral devices > * RAID0 ephemeral devices > * SSD devices > * RAID0 SSD devices > The current recommendation is: > {CODE} > sudo blockdev --setra 128 /dev/ > {CODE}
[jira] [Commented] (CASSANDRA-6758) Measure data consistency in the cluster
[ https://issues.apache.org/jira/browse/CASSANDRA-6758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287089#comment-16287089 ] Jeff Jirsa commented on CASSANDRA-6758: --- Think this is done. In 4.0, we have CASSANDRA-11503 (nodetool repaired/unrepaired by sstables), CASSANDRA-13774 (repaired/unrepaired by bytes), CASSANDRA-13289 (track an ideal consistency level beyond what acks the write), and CASSANDRA-13257 (repair preview). Seems like that covers the intent of this ticket. Propose we close as wontfix, because it's basically done by those others. > Measure data consistency in the cluster > --- > > Key: CASSANDRA-6758 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6758 > Project: Cassandra > Issue Type: New Feature >Reporter: Jimmy Mårdell >Priority: Minor > Labels: proposed-wontfix > > Running multi-DC Cassandra can be a challenge as the cluster easily tends to > get out-of-sync. We have been thinking it would be nice to measure how out of > sync a cluster is and expose those metrics somehow. > One idea would be to just run the first half of the repair process and output > the result of the differencer. If you use Random or the Murmur3 partitioner, > it should be enough to calculate the merkle tree over a small subset of the > ring as the result can be extrapolated. > This could be exposed in nodetool. Either a separate command or perhaps a > dry-run flag to repair? > Not sure about the output format. I think it would be nice to have one value > ("% consistent"?) within a DC, and also one value for every pair of DC's > perhaps?
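The extrapolation idea in the ticket above can be sketched as plain sampling: compare per-range digests from two replicas over a sampled subset of token ranges and report the match rate as an estimated "% consistent". All names here are hypothetical illustrations, not the repair/differencer code path:

```java
import java.util.Map;

// Hypothetical sketch: given digests (e.g. merkle tree root hashes) for a
// sampled subset of token ranges on two replicas, estimate consistency as
// the fraction of sampled ranges whose digests agree. With the Random or
// Murmur3 partitioner, the sample extrapolates to the whole ring.
public class ConsistencyEstimate {
    public static double percentConsistent(Map<String, Long> replicaA,
                                           Map<String, Long> replicaB) {
        int sampled = 0, matching = 0;
        for (Map.Entry<String, Long> e : replicaA.entrySet()) {
            Long other = replicaB.get(e.getKey());
            if (other == null)
                continue; // range not sampled on the other replica
            sampled++;
            if (other.equals(e.getValue()))
                matching++;
        }
        return sampled == 0 ? 100.0 : 100.0 * matching / sampled;
    }
}
```

A pairwise version of the same computation would give the per-DC-pair value the reporter asks about.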
[jira] [Updated] (CASSANDRA-5108) expose overall progress of cleanup tasks in jmx
[ https://issues.apache.org/jira/browse/CASSANDRA-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-5108: -- Labels: lhf (was: ) > expose overall progress of cleanup tasks in jmx > --- > > Key: CASSANDRA-5108 > URL: https://issues.apache.org/jira/browse/CASSANDRA-5108 > Project: Cassandra > Issue Type: New Feature > Components: Compaction >Affects Versions: 1.2.0 >Reporter: Michael Kjellman >Priority: Minor > Labels: lhf > Fix For: 4.x > > > it would be nice if, upon starting a cleanup operation, cassandra could > maintain a Set (i assume this already exists as we have to know which file to > act on next) and a new set of "completed" sstables. When each is compacted > remove it from the pending list. That way C* could give an overall completion > of the long running and pending cleanup tasks.
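The pending/completed bookkeeping the ticket describes can be sketched in a few lines. Class and method names here are hypothetical, not Cassandra's compaction code; a read-only JMX attribute could then expose percentComplete():

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch of the ticket's idea: track a pending set and a completed set of
// sstable names for a cleanup run, moving entries from one to the other as
// each sstable finishes, and derive an overall completion percentage.
public class CleanupProgress {
    private final Set<String> pending = new LinkedHashSet<>();
    private final Set<String> completed = new LinkedHashSet<>();

    public CleanupProgress(Set<String> sstables) {
        pending.addAll(sstables);
    }

    // Called as each sstable is cleaned; moves it to the completed set.
    public void markCompleted(String sstable) {
        if (pending.remove(sstable))
            completed.add(sstable);
    }

    // Value a read-only JMX attribute (e.g. getCleanupProgress) could return.
    public double percentComplete() {
        int total = pending.size() + completed.size();
        return total == 0 ? 100.0 : 100.0 * completed.size() / total;
    }
}
```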
[jira] [Updated] (CASSANDRA-13093) 2.2.8 Node goes down with MUTATION messages were dropped in last 5000 ms: 29 for internal timeout
[ https://issues.apache.org/jira/browse/CASSANDRA-13093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-13093: --- Labels: proposed-wontfix (was: ) > 2.2.8 Node goes down with MUTATION messages were dropped in last 5000 ms: 29 > for internal timeout > - > > Key: CASSANDRA-13093 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13093 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: 2.2.8 Cassandra on 4 Nodes with Red Hat Linux 6.2 64 Bit >Reporter: sutanu das >Priority: Critical > Labels: proposed-wontfix > > Issue: 1st Node of 4 Node in Cluster keeps aborting (jvm crashing) with > following messages: > - ReadTimeoutException: Operation timed out - received only 0 responses > - MUTATION messages were dropped in last 5000 ms: 29 for internal timeout and > 0 for cross node timeout > - Spark Jobs getting Q'd up when opening Channels, followed up Read Time Outs: > ERROR [SharedPool-Worker-207] 2017-01-03 16:39:00,493 Message.java:611 > - Unexpected exception during request; channel = [id: 0xd0b0d36d, > /216.12.229.180:41896 :> /172.17.30.47:9042] > java.lang.RuntimeException: > org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - > received only 0 responses. > > What has been done so far? 
> - Host Reboot node 01 > - Multiple C* restarts > - Increased read_request_timeout_in_ms from 1 to 5 > - Increased request_timeout_in_ms from 1 to 5 > - Changed following: > concurrent_reads: 128 > concurrent_writes: 128 > concurrent_counter_writes: 128 > - Upgrade to 2.2.8 - All Nodes Sync with 2.2.8 > - All nodes have same Pass Auth Scheme (Node 03 was a mis-match and was > fixed) > - authenticator: org.apache.cassandra.auth.PasswordAuthenticator > - authorizer: org.apache.cassandra.auth.CassandraAuthorizer > Full exception stack: > DEBUG [SharedPool-Worker-10] 2017-01-03 16:32:43,983 StorageProxy.java:1898 - > Range slice timeout; received 0 of 1 responses for range 1 of 1 > INFO [Service Thread] 2017-01-03 16:32:43,983 GCInspector.java:284 - ParNew > GC in 247ms. CMS Old Gen: 3768220776 -> 3996971216; Par Eden Space: > 1718091776 -> 0; > INFO [Service Thread] 2017-01-03 16:32:43,983 StatusLogger.java:52 - Pool > NameActive Pending Completed Blocked All Time > Blocked > DEBUG [SharedPool-Worker-26] 2017-01-03 16:32:43,984 > FileCacheService.java:102 - Evicting cold readers for > /cassandra/data/system_auth/roles-5bc52802de2535edaeab188eecebb090/la-51-big-Data.db > DEBUG [SharedPool-Worker-28] 2017-01-03 16:32:43,986 > AbstractQueryPager.java:89 - Got empty set of rows, considering pager > exhausted > INFO [ScheduledTasks:1] 2017-01-03 16:39:00,473 MessagingService.java:946 - > RANGE_SLICE messages were dropped in last 5000 ms: 2 for internal timeout and > 0 for cross node timeout > INFO [Service Thread] 2017-01-03 16:39:00,476 StatusLogger.java:106 - > sales.airwave_dwell_time_det_hr 0,0 > ERROR [SharedPool-Worker-207] 2017-01-03 16:39:00,493 Message.java:611 - > Unexpected exception during request; channel = [id: 0xd0b0d36d, > /216.12.229.180:41896 :> /172.17.30.47:9042] > java.lang.RuntimeException: > org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - > received only 0 responses. 
> at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:497) > ~[apache-cassandra-2.2.8.jar:2.2.8] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:306) > ~[apache-cassandra-2.2.8.jar:2.2.8] > at > org.apache.cassandra.service.ClientState.login(ClientState.java:269) > ~[apache-cassandra-2.2.8.jar:2.2.8] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79) > ~[apache-cassandra-2.2.8.jar:2.2.8] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [apache-cassandra-2.2.8.jar:2.2.8] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [apache-cassandra-2.2.8.jar:2.2.8] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.23.Final.jar:4.0.23.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > [netty-all-4.0.23.Final.jar:4.0.23.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.23.Final.jar:4.0.23.Final] > at >
[jira] [Updated] (CASSANDRA-11381) Node running with join_ring=false and authentication can not serve requests
[ https://issues.apache.org/jira/browse/CASSANDRA-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mck updated CASSANDRA-11381: Fix Version/s: (was: 3.11.x) (was: 4.x) (was: 3.0.x) (was: 2.2.x) 2.2.10 3.0.14 3.11.0 4.0 > Node running with join_ring=false and authentication can not serve requests > --- > > Key: CASSANDRA-11381 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11381 > Project: Cassandra > Issue Type: Bug >Reporter: mck >Assignee: mck > Fix For: 2.2.10, 3.0.14, 3.11.0, 4.0 > > > Starting up a node with {{-Dcassandra.join_ring=false}} in a cluster that has > authentication configured, eg PasswordAuthenticator, won't be able to serve > requests. This is because {{Auth.setup()}} never gets called during the > startup. > Without {{Auth.setup()}} having been called in {{StorageService}} clients > connecting to the node fail with the node throwing > {noformat} > java.lang.NullPointerException > at > org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:119) > at > org.apache.cassandra.thrift.CassandraServer.login(CassandraServer.java:1471) > at > org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3505) > at > org.apache.cassandra.thrift.Cassandra$Processor$login.getResult(Cassandra.java:3489) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at com.thinkaurelius.thrift.Message.invoke(Message.java:314) > at > com.thinkaurelius.thrift.Message$Invocation.execute(Message.java:90) > at > com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:695) > at > com.thinkaurelius.thrift.TDisruptorServer$InvocationHandler.onEvent(TDisruptorServer.java:689) > at com.lmax.disruptor.WorkProcessor.run(WorkProcessor.java:112) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > {noformat} > The exception thrown from the > [code|https://github.com/apache/cassandra/blob/cassandra-2.0.16/src/java/org/apache/cassandra/auth/PasswordAuthenticator.java#L119] > {code} > ResultMessage.Rows rows = > authenticateStatement.execute(QueryState.forInternalCalls(), new > QueryOptions(consistencyForUser(username), > Lists.newArrayList(ByteBufferUtil.bytes(username)))); > {code}
[jira] [Updated] (CASSANDRA-12103) Cassandra is hang and cqlsh was not able to login with OperationTimeout error
[ https://issues.apache.org/jira/browse/CASSANDRA-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-12103: --- Labels: proposed-wontfix (was: ) > Cassandra is hang and cqlsh was not able to login with OperationTimeout error > - > > Key: CASSANDRA-12103 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12103 > Project: Cassandra > Issue Type: Bug > Components: Core, Local Write-Read Paths > Environment: centos 6.5 cassandra 2.1.9 >Reporter: peng xiao >Priority: Critical > Labels: proposed-wontfix > Attachments: system.log.2016-06-28_1257.gz > > > Hi, > We have two DCs(DC1 and DC2) with DC1 3 nodes and DC2 9 nodes. > And we experienced a Timeout error today,all applications connected to DC1 > were hang and no response,even cqlsh was not able to log into any node in DC1. > I restarted the 3 nodes in DC1,the problem was not resolved. > Then we switched to DC2,then applications back to normal. > Could you please help to take a look? > Thanks > many errors like below: > ERROR [SharedPool-Worker-43] 2016-06-28 11:58:49,705 Message.java:538 - > Unexpected exception during request; channel = [id: 0x87e315d6, > /172.16.10.198:13604 => /172.16.11.13:9042] > java.lang.RuntimeException: > org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - > received only 0 responses. 
> at org.apache.cassandra.auth.Auth.selectUser(Auth.java:276) > ~[apache-cassandra-2.1.9.jar:2.1.9] > at org.apache.cassandra.auth.Auth.isExistingUser(Auth.java:86) > ~[apache-cassandra-2.1.9.jar:2.1.9] > at > org.apache.cassandra.service.ClientState.login(ClientState.java:206) > ~[apache-cassandra-2.1.9.jar:2.1.9] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:82) > ~[apache-cassandra-2.1.9.jar:2.1.9] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:439) > [apache-cassandra-2.1.9.jar:2.1.9] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:335) > [apache-cassandra-2.1.9.jar:2.1.9] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.23.Final.jar:4.0.23.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) > [netty-all-4.0.23.Final.jar:4.0.23.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.23.Final.jar:4.0.23.Final] > at > io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) > [netty-all-4.0.23.Final.jar:4.0.23.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > [apache-cassandra-2.1.9.jar:2.1.9] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-2.1.9.jar:2.1.9] > at java.lang.Thread.run(Thread.java:744) [na:1.8.0]
[jira] [Commented] (CASSANDRA-12103) Cassandra is hang and cqlsh was not able to login with OperationTimeout error
[ https://issues.apache.org/jira/browse/CASSANDRA-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287079#comment-16287079 ] Jeff Jirsa commented on CASSANDRA-12103: There doesn't seem to be much here that points to a concrete Cassandra bug. Do you have any more info to reproduce? 2.1 is in critical patches only mode at this point, so without a concrete bug identified, I propose we close this as wont-fix. > Cassandra is hang and cqlsh was not able to login with OperationTimeout error > - > > Key: CASSANDRA-12103 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12103 > Project: Cassandra > Issue Type: Bug > Components: Core, Local Write-Read Paths > Environment: centos 6.5 cassandra 2.1.9 >Reporter: peng xiao >Priority: Critical > Labels: proposed-wontfix > Attachments: system.log.2016-06-28_1257.gz > > > Hi, > We have two DCs(DC1 and DC2) with DC1 3 nodes and DC2 9 nodes. > And we experienced a Timeout error today,all applications connected to DC1 > were hang and no response,even cqlsh was not able to log into any node in DC1. > I restarted the 3 nodes in DC1,the problem was not resolved. > Then we switched to DC2,then applications back to normal. > Could you please help to take a look? > Thanks > many errors like below: > ERROR [SharedPool-Worker-43] 2016-06-28 11:58:49,705 Message.java:538 - > Unexpected exception during request; channel = [id: 0x87e315d6, > /172.16.10.198:13604 => /172.16.11.13:9042] > java.lang.RuntimeException: > org.apache.cassandra.exceptions.ReadTimeoutException: Operation timed out - > received only 0 responses. 
[jira] [Updated] (CASSANDRA-10937) OOM on multiple nodes on write load (v. 3.0.0), problem also present on DSE-4.8.3, but there it survives more time
[ https://issues.apache.org/jira/browse/CASSANDRA-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-10937: --- Labels: proposed-wontfix (was: ) > OOM on multiple nodes on write load (v. 3.0.0), problem also present on > DSE-4.8.3, but there it survives more time > -- > > Key: CASSANDRA-10937 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10937 > Project: Cassandra > Issue Type: Bug > Environment: Cassandra : 3.0.0 > Installed as open archive, no connection to any OS specific installer. > Java: > Java(TM) SE Runtime Environment (build 1.8.0_65-b17) > OS : > Linux version 2.6.32-431.el6.x86_64 > (mockbu...@x86-023.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red > Hat 4.4.7-4) (GCC) ) #1 SMP Sun Nov 10 22:19:54 EST 2013 > We have: > 8 guests ( Linux OS as above) on 2 (VMWare managed) physical hosts. Each > physical host keeps 4 guests. > Physical host parameters(shared by all 4 guests): > Model: HP ProLiant DL380 Gen9 > Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz > 46 logical processors. > Hyperthreading - enabled > Each guest assigned to have: > 1 disk 300 Gb for seq. log (NOT SSD) > 1 disk 4T for data (NOT SSD) > 11 CPU cores > Disks are local, not shared. > Memory on each host - 24 Gb total. > 8 (or 6, tested both) Gb - cassandra heap > (lshw and cpuinfo attached in file test2.rar) >Reporter: Peter Kovgan >Priority: Critical > Labels: proposed-wontfix > Attachments: cassandra-to-jack-krupansky.docx, gc-stat.txt, > more-logs.rar, some-heap-stats.rar, test2.rar, test3.rar, test4.rar, > test5.rar, test_2.1.rar, test_2.1_logs_older.rar, > test_2.1_restart_attempt_log.rar > > > 8 cassandra nodes. > Load test started with 4 clients(different and not equal machines), each > running 1000 threads. > Each thread assigned in round-robin way to run one of 4 different inserts. > Consistency->ONE. > I attach the full CQL schema of tables and the query of insert. 
> Replication factor - 2: > create keyspace OBLREPOSITORY_NY with replication = > {'class':'NetworkTopologyStrategy','NY':2}; > Initial throughput is: > 215,000 inserts/sec > or > 54Mb/sec, considering single insert size a bit larger than 256 bytes. > Data: > all fields (5-6) are short strings, except one, which is a BLOB of 256 bytes. > After about 2-3 hours of work, I was forced to increase the timeout from 2000 > to 5000ms, because some requests failed with the short timeout. > Later on (after approx. 12 hours of work) OOM happens on multiple nodes. > (all failed nodes' logs attached) > I also attach the Java load client and instructions on how to set it up and use > it. (test2.rar) > Update: > Later the test was repeated with a lesser load (10 msg/sec) with more relaxed > CPU (idle 25%), with only 2 test clients, but the test failed anyway. > Update: > DSE-4.8.3 also failed on OOM (3 nodes of 8), but here it survived 48 hours, > not 10-12. > Attachments: > test2.rar - contains most of the material > more-logs.rar - contains additional nodes' logs
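The reported numbers above are mutually consistent: at 54 MB/s and 215,000 inserts/sec, each insert works out to roughly 263 bytes, i.e. "a bit larger than 256 bytes". A quick sanity check (class and method names are invented for illustration):

```java
public class ThroughputCheck {
    // Average payload size implied by the reported throughput figures.
    public static double bytesPerInsert() {
        double bytesPerSec = 54.0 * 1024 * 1024; // reported 54 MB/s
        double insertsPerSec = 215_000;          // reported insert rate
        return bytesPerSec / insertsPerSec;
    }

    public static void main(String[] args) {
        System.out.printf("~%.0f bytes per insert%n", bytesPerInsert()); // roughly 263
    }
}
```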
[jira] [Commented] (CASSANDRA-12857) Upgrade procedure between 2.1.x and 3.0.x is broken
[ https://issues.apache.org/jira/browse/CASSANDRA-12857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287076#comment-16287076 ] Jeff Jirsa commented on CASSANDRA-12857: There doesn't seem to be much here that points to a concrete Cassandra bug. Do you have any more info to reproduce? Without a concrete bug identified or information to advance debugging efforts, I propose we close this as unable to reproduce. > Upgrade procedure between 2.1.x and 3.0.x is broken > --- > > Key: CASSANDRA-12857 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12857 > Project: Cassandra > Issue Type: Bug >Reporter: Alexander Yasnogor >Priority: Critical > Attachments: cassandra.schema > > > It is not possible safely to do Cassandra in place upgrade from 2.1.14 to > 3.0.9. > Distribution: deb packages from datastax community repo. > The upgrade was performed according to procedure from this docu: > https://docs.datastax.com/en/upgrade/doc/upgrade/cassandra/upgrdCassandraDetails.html > Potential reason: The upgrade procedure creates corrupted system_schema and > this keyspace get populated in the cluster and kills it. > We started with one datacenter which contains 19 nodes divided to two racks. > First rack was successfully upgraded and nodetool describecluster reported > two schema versions. One for upgraded nodes, another for non-upgraded nodes. 
> On starting new version on a first node from the second rack: > {code:java} > INFO [main] 2016-10-25 13:06:12,103 LegacySchemaMigrator.java:87 - Moving 11 > keyspaces from legacy schema tables to the new schema keyspace (system_schema) > INFO [main] 2016-10-25 13:06:12,104 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@7505e6ac > INFO [main] 2016-10-25 13:06:12,200 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@64414574 > INFO [main] 2016-10-25 13:06:12,204 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@3f2c5f45 > INFO [main] 2016-10-25 13:06:12,207 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@2bc2d64d > INFO [main] 2016-10-25 13:06:12,301 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@77343846 > INFO [main] 2016-10-25 13:06:12,305 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@19b0b931 > INFO [main] 2016-10-25 13:06:12,308 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@44bb0b35 > INFO [main] 2016-10-25 13:06:12,311 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@79f6cd51 > INFO [main] 2016-10-25 13:06:12,319 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@2fcd363b > INFO [main] 2016-10-25 13:06:12,356 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@609eead6 > INFO [main] 2016-10-25 13:06:12,358 LegacySchemaMigrator.java:148 - > Migrating keyspace > org.apache.cassandra.schema.LegacySchemaMigrator$Keyspace@7eb7f5d0 > 
INFO [main] 2016-10-25 13:06:13,958 LegacySchemaMigrator.java:97 - > Truncating legacy schema tables > INFO [main] 2016-10-25 13:06:26,474 LegacySchemaMigrator.java:103 - > Completed migration of legacy schema tables > INFO [main] 2016-10-25 13:06:26,474 StorageService.java:521 - Populating > token metadata from system tables > INFO [main] 2016-10-25 13:06:26,796 StorageService.java:528 - Token > metadata: Normal Tokens: [HUGE LIST of tokens] > INFO [main] 2016-10-25 13:06:29,066 ColumnFamilyStore.java:389 - > Initializing ... > INFO [main] 2016-10-25 13:06:29,066 ColumnFamilyStore.java:389 - > Initializing ... > INFO [main] 2016-10-25 13:06:45,894 AutoSavingCache.java:165 - Completed > loading (2 ms; 460 keys) KeyCache cache > INFO [main] 2016-10-25 13:06:46,982 StorageService.java:521 - Populating > token metadata from system tables > INFO [main] 2016-10-25 13:06:47,394 StorageService.java:528 - Token > metadata: Normal Tokens:[HUGE LIST of tokens] > INFO [main] 2016-10-25 13:06:47,420 LegacyHintsMigrator.java:88 - Migrating > legacy hints to new storage > INFO [main] 2016-10-25 13:06:47,420 LegacyHintsMigrator.java:91 - Forcing a > major compaction of system.hints table > INFO [main] 2016-10-25 13:06:50,587 LegacyHintsMigrator.java:95 - Writing > legacy hints to the new storage > INFO [main] 2016-10-25 13:06:53,927
[jira] [Commented] (CASSANDRA-12978) mx4j -> HTTP 500 -> ConcurrentModificationException
[ https://issues.apache.org/jira/browse/CASSANDRA-12978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287068#comment-16287068 ] Jeff Jirsa commented on CASSANDRA-12978: Proposing we close this - {{StreamingHistogram}} has been rewritten in 3.0, and again in 4.0, it's now thread safe, and this isn't critical enough for a 2.1 patch at this point. Any objections? > mx4j -> HTTP 500 -> ConcurrentModificationException > --- > > Key: CASSANDRA-12978 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12978 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: Debian, Single cluster, 2 data centres, E5-2620 v3, > 16GB, RAID1 SSD Commit log, RAID10 15k HDD data >Reporter: Rob Emery >Priority: Critical > Fix For: 2.1.6 > > > We run some checks from our Monitoring software that rely on mx4j. > The checks typically grab some xml via HTTP request and parse it. For > example, CF Stats on 'MyKeySpace' and 'MyColumnFamily' are retrieved > using: > http://cassandra001:8081/mbean?template=identity=org.apache.cassandra.db%3Atype%3DColumnFamilies%2Ckeyspace%3DMyKeySpace%2Ccolumnfamily%3DMyColumnFamily > The checks run each minute. Periodically they result in a "HTTP 500 internal > server error". The HTML body returned is empty. > Experimentally we ran Cassandra in the foreground on one node and reproduced > the problem. 
this elicited the following stack trace: > javax.management.RuntimeMBeanException: > java.util.ConcurrentModificationException > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:651) > at > com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678) > at > mx4j.tools.adaptor.http.MBeanCommandProcessor.createMBeanElement(MBeanCommandProcessor.java:119) > at > mx4j.tools.adaptor.http.MBeanCommandProcessor.executeRequest(MBeanCommandProcessor.java:56) > at > mx4j.tools.adaptor.http.HttpAdaptor$HttpClient.run(HttpAdaptor.java:980) > Caused by: java.util.ConcurrentModificationException > at > java.util.TreeMap$NavigableSubMap$SubMapIterator.nextEntry(TreeMap.java:1594) > at > java.util.TreeMap$NavigableSubMap$SubMapEntryIterator.next(TreeMap.java:1642) > at > java.util.TreeMap$NavigableSubMap$SubMapEntryIterator.next(TreeMap.java:1636) > at java.util.AbstractMap$2$1.next(AbstractMap.java:385) > at > org.apache.cassandra.utils.StreamingHistogram.sum(StreamingHistogram.java:160) > at > org.apache.cassandra.io.sstable.metadata.StatsMetadata.getDroppableTombstonesBefore(StatsMetadata.java:113) > at > org.apache.cassandra.io.sstable.SSTableReader.getDroppableTombstonesBefore(SSTableReader.java:2004) > at > org.apache.cassandra.db.DataTracker.getDroppableTombstoneRatio(DataTracker.java:507) > at > org.apache.cassandra.db.ColumnFamilyStore.getDroppableTombstoneRatio(ColumnFamilyStore.java:3089) > at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75) > at 
sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279) > at > com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112) > at > com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46) > at > com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) > at > com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83) > at > com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206) > at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647) > ... 4 more
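The root cause in the trace above is a {{TreeMap}} inside {{StreamingHistogram}} being structurally modified by one thread while the mx4j request thread iterates it in {{sum()}}. {{TreeMap}} iterators are fail-fast, so even a single-threaded simulation reproduces the exception; the real bug needs two threads (mx4j reader vs. a writer), and the class and method names below are invented for the demo:

```java
import java.util.ConcurrentModificationException;
import java.util.Map;
import java.util.TreeMap;

// Minimal single-threaded simulation of the failure mode: TreeMap's fail-fast
// iterator throws as soon as a structural modification (new key) interleaves
// with an ongoing iteration, just as the mx4j thread saw during compaction.
public class CmeDemo {
    public static boolean mutateWhileIterating() {
        TreeMap<Double, Long> bins = new TreeMap<>();
        for (int i = 0; i < 8; i++) {
            bins.put((double) i, (long) i); // pretend histogram bins
        }
        try {
            for (Map.Entry<Double, Long> e : bins.entrySet()) {
                if (e.getKey() == 3.0) {
                    bins.put(100.0, 1L); // structural modification mid-iteration
                }
            }
            return false; // not reached: the next iterator step fails fast
        } catch (ConcurrentModificationException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("ConcurrentModificationException thrown: " + mutateWhileIterating());
    }
}
```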
[jira] [Updated] (CASSANDRA-11460) memory leak
[ https://issues.apache.org/jira/browse/CASSANDRA-11460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-11460: --- Labels: proposed-wontfix (was: ) > memory leak > --- > > Key: CASSANDRA-11460 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11460 > Project: Cassandra > Issue Type: Bug >Reporter: stone >Priority: Critical > Labels: proposed-wontfix > Attachments: aaa.jpg > > > env: > cassandra 3.3 > jdk8 > 8G RAM > so set > MAX_HEAP_SIZE="2G" > HEAP_NEWSIZE="400M" > 1. Met the same problem as this: > https://issues.apache.org/jira/browse/CASSANDRA-9549 > I am confused because this was fixed in release 3.3 according to this page: > https://github.com/apache/cassandra/blob/trunk/CHANGES.txt > so I changed to 3.4 and found this problem again. > I think this fix should be included in 3.3/3.4 > can you explain this? > 2. Our write rate exceeds what our Cassandra environment can support, > but I think it should decrease the write rate, or block and consume the written > data, keep memory down, then go on writing, not cause out-of-memory instead.
[jira] [Commented] (CASSANDRA-11460) memory leak
[ https://issues.apache.org/jira/browse/CASSANDRA-11460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287074#comment-16287074 ] Jeff Jirsa commented on CASSANDRA-11460: This is for a very old version of Cassandra and doesn't have much info to reproduce. I propose closing it if there's no new info soon. > memory leak > --- > > Key: CASSANDRA-11460 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11460 > Project: Cassandra > Issue Type: Bug >Reporter: stone >Priority: Critical > Labels: proposed-wontfix > Attachments: aaa.jpg > > > env: > cassandra 3.3 > jdk8 > 8G RAM > so set > MAX_HEAP_SIZE="2G" > HEAP_NEWSIZE="400M" > 1. Met the same problem as this: > https://issues.apache.org/jira/browse/CASSANDRA-9549 > I am confused because this was fixed in release 3.3 according to this page: > https://github.com/apache/cassandra/blob/trunk/CHANGES.txt > so I changed to 3.4 and found this problem again. > I think this fix should be included in 3.3/3.4 > can you explain this? > 2. Our write rate exceeds what our Cassandra environment can support, > but I think it should decrease the write rate, or block and consume the written > data, keep memory down, then go on writing, not cause out-of-memory instead.
[jira] [Updated] (CASSANDRA-14104) Index target doesn't correctly recognise non-UTF column names after COMPACT STORAGE drop
[ https://issues.apache.org/jira/browse/CASSANDRA-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-14104: Description: Creating a compact storage table with dynamic composite type, then running {{ALTER TABLE ... DROP COMPACT STORAGE}} and then restarting the node will crash the Cassandra node. (was: Creating a compact storage table with dynamic composite type, then running {{ALTER TALBE ... DROP COMPACT STORAGE}} and then restarting the node will crash Cassa) > Index target doesn't correctly recognise non-UTF column names after COMPACT > STORAGE drop > > > Key: CASSANDRA-14104 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14104 > Project: Cassandra > Issue Type: Bug >Reporter: Alex Petrov >Assignee: Alex Petrov > > Creating a compact storage table with dynamic composite type, then running > {{ALTER TABLE ... DROP COMPACT STORAGE}} and then restarting the node will > crash the Cassandra node.
[jira] [Created] (CASSANDRA-14105) Trivial log format error
Jay Zhuang created CASSANDRA-14105: -- Summary: Trivial log format error Key: CASSANDRA-14105 URL: https://issues.apache.org/jira/browse/CASSANDRA-14105 Project: Cassandra Issue Type: Bug Reporter: Jay Zhuang Assignee: Jay Zhuang Priority: Trivial The same issue as CASSANDRA-13551 The "{}" is not needed for: {{log.error(String, Throwable)}}
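For context on the bug: SLF4J's {{log.error(String, Throwable)}} overload logs the message verbatim and appends the throwable's stack trace; it performs no "{}" substitution, so a trailing "{}" simply appears literally in the log line. A stdlib-only sketch of that formatting rule ({{PlaceholderDemo}} and {{render}} are invented names, not the real org.slf4j API):

```java
// Hypothetical stand-in for SLF4J's formatting rule: a trailing Throwable is
// consumed as the stack-trace argument and never fills a "{}" placeholder.
public class PlaceholderDemo {
    public static String render(String msg, Object... args) {
        int argCount = args.length;
        Throwable t = null;
        if (argCount > 0 && args[argCount - 1] instanceof Throwable) {
            t = (Throwable) args[--argCount]; // not available for "{}" substitution
        }
        StringBuilder out = new StringBuilder();
        int i = 0, used = 0;
        while (i < msg.length()) {
            int p = msg.indexOf("{}", i);
            if (p < 0 || used >= argCount) {   // no placeholder, or no args left:
                out.append(msg.substring(i));  // the rest is emitted verbatim
                break;
            }
            out.append(msg, i, p).append(args[used++]);
            i = p + 2;
        }
        if (t != null) {
            out.append(" [stack trace of ").append(t.getClass().getSimpleName()).append(']');
        }
        return out.toString();
    }

    public static void main(String[] a) {
        Throwable fail = new RuntimeException("boom");
        // Buggy call: the "{}" has no argument to consume, so it stays literal.
        System.out.println(render("Failed to cleanup lifecycle transactions {}", fail));
        // Fixed call: message without the redundant "{}".
        System.out.println(render("Failed to cleanup lifecycle transactions", fail));
    }
}
```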
[jira] [Commented] (CASSANDRA-14105) Trivial log format error
[ https://issues.apache.org/jira/browse/CASSANDRA-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286386#comment-16286386 ] Jay Zhuang commented on CASSANDRA-14105: | Branch | uTest | | [14105|https://github.com/cooldoger/cassandra/tree/14105] | [!https://circleci.com/gh/cooldoger/cassandra/tree/14105.svg?style=svg!|https://circleci.com/gh/cooldoger/cassandra/tree/14105] | > Trivial log format error > > > Key: CASSANDRA-14105 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14105 > Project: Cassandra > Issue Type: Bug >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Trivial > > The same issue as CASSANDRA-13551 > The "{}" is not needed for: {{log.error(String, Throwable)}}
[jira] [Assigned] (CASSANDRA-14106) utest failed: DistributionSequenceTest.setSeed() and simpleSequence()
[ https://issues.apache.org/jira/browse/CASSANDRA-14106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jay Zhuang reassigned CASSANDRA-14106: -- Assignee: Jay Zhuang > utest failed: DistributionSequenceTest.setSeed() and simpleSequence() > - > > Key: CASSANDRA-14106 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14106 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang > > To reproduce: > {noformat} > $ ant stress-test -Dtest.name=DistributionSequenceTest > {noformat} > {noformat} > stress-test: > [junit] Testsuite: > org.apache.cassandra.stress.generate.DistributionSequenceTest > [junit] Testsuite: > org.apache.cassandra.stress.generate.DistributionSequenceTest Tests run: 4, > Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec > [junit] > [junit] Testcase: > simpleSequence(org.apache.cassandra.stress.generate.DistributionSequenceTest): > FAILED > [junit] expected:<5> but was:<4> > [junit] junit.framework.AssertionFailedError: expected:<5> but was:<4> > [junit] at > org.apache.cassandra.stress.generate.DistributionSequenceTest.simpleSequence(DistributionSequenceTest.java:37) > [junit] > [junit] > [junit] Testcase: > setSeed(org.apache.cassandra.stress.generate.DistributionSequenceTest): > FAILED > [junit] expected:<5> but was:<4> > [junit] junit.framework.AssertionFailedError: expected:<5> but was:<4> > [junit] at > org.apache.cassandra.stress.generate.DistributionSequenceTest.setSeed(DistributionSequenceTest.java:111) > [junit] > [junit] > [junit] Test > org.apache.cassandra.stress.generate.DistributionSequenceTest FAILED > {noformat}
[jira] [Commented] (CASSANDRA-12917) Increase error margin in SplitterTest
[ https://issues.apache.org/jira/browse/CASSANDRA-12917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286974#comment-16286974 ] Jeff Jirsa commented on CASSANDRA-12917: Friendly ping that this is sitting in "Ready to Commit" and has been for quite some time. Still applicable? Going to improve it or commit as is? > Increase error margin in SplitterTest > - > > Key: CASSANDRA-12917 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12917 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson > Labels: lhf > Fix For: 3.11.x > > > SplitterTest is a randomized test - it generates random tokens and splits the > ranges in equal parts. Since it is random we sometimes get very big vnodes > right where we want a split and that makes the split unbalanced. > Bumping the error margin a bit will avoid these false positives.
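The flakiness mechanism described above is a generic property of random splits: among n uniformly random tokens, the largest gap between neighbours is typically several times the average gap, so an occasional oversized "vnode" lands exactly where the splitter wants a boundary. A small illustration (names invented, not the actual SplitterTest code):

```java
import java.util.Arrays;
import java.util.Random;

// Illustration of why randomized range splits need an error margin: with n
// uniform points on [0, 1), the widest gap between neighbours is usually much
// larger than the average gap 1/(n+1).
public class GapDemo {
    public static double maxToAvgGapRatio(long seed, int n) {
        Random rnd = new Random(seed);
        double[] pts = new double[n];
        for (int i = 0; i < n; i++) {
            pts[i] = rnd.nextDouble(); // stand-in for random tokens
        }
        Arrays.sort(pts);
        double maxGap = pts[0]; // gap before the first point
        for (int i = 1; i < n; i++) {
            maxGap = Math.max(maxGap, pts[i] - pts[i - 1]);
        }
        maxGap = Math.max(maxGap, 1.0 - pts[n - 1]); // gap after the last point
        return maxGap / (1.0 / (n + 1));
    }

    public static void main(String[] args) {
        System.out.printf("max/avg gap ratio for 256 random tokens: %.1f%n",
                          maxToAvgGapRatio(42, 256));
    }
}
```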
[jira] [Commented] (CASSANDRA-13006) Disable automatic heap dumps on OOM error
[ https://issues.apache.org/jira/browse/CASSANDRA-13006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286970#comment-16286970 ] Jeff Jirsa commented on CASSANDRA-13006: Friendly ping [~blerer] that this is sitting in "Ready to Commit". > Disable automatic heap dumps on OOM error > - > > Key: CASSANDRA-13006 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13006 > Project: Cassandra > Issue Type: Bug > Components: Configuration >Reporter: anmols >Assignee: Benjamin Lerer >Priority: Minor > Attachments: 13006-3.0.9.txt > > > With CASSANDRA-9861, a change was added to enable collecting heap dumps by > default if the process encountered an OOM error. These heap dumps are stored > in the Apache Cassandra home directory unless configured otherwise (see > [Cassandra Support > Document|https://support.datastax.com/hc/en-us/articles/204225959-Generating-and-Analyzing-Heap-Dumps] > for this feature). > > The creation and storage of heap dumps aids debugging and investigative > workflows, but is not desirable for a production environment where these > heap dumps may occupy a large amount of disk space and require manual > intervention for cleanup. > > Managing heap dumps on out-of-memory errors and configuring the paths for > these heap dumps are available as JVM options. The current behavior > conflicts with the Boolean JVM flag HeapDumpOnOutOfMemoryError. > > A patch is proposed here that would make the heap dump on OOM error honor > the HeapDumpOnOutOfMemoryError flag. Users who still want to generate > heap dumps on OOM errors can set the -XX:+HeapDumpOnOutOfMemoryError JVM > option.
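The flags discussed above are standard HotSpot options. If the proposed patch honours them, opting in or out would look something like this in the startup script (the file layout and dump path are illustrative, not the patch's actual wiring):

```shell
# Illustrative snippet for a cassandra-env.sh-style startup script;
# the dump path below is a placeholder.
# Opt in to heap dumps on OOM:
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/lib/cassandra/heapdump.hprof"

# Or opt out entirely (the behaviour the reporter wants respected):
# JVM_OPTS="$JVM_OPTS -XX:-HeapDumpOnOutOfMemoryError"
```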
[jira] [Commented] (CASSANDRA-14104) Index target doesn't correctly recognise non-UTF column names after COMPACT STORAGE drop
[ https://issues.apache.org/jira/browse/CASSANDRA-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286968#comment-16286968 ] ZhaoYang commented on CASSANDRA-14104: -- LGTM, thanks for the fix. > Index target doesn't correctly recognise non-UTF column names after COMPACT > STORAGE drop > > > Key: CASSANDRA-14104 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14104 > Project: Cassandra > Issue Type: Bug >Reporter: Alex Petrov >Assignee: Alex Petrov > > Creating a compact storage table with dynamic composite type, then running > {{ALTER TABLE ... DROP COMPACT STORAGE}} and then restarting the node will > crash the Cassandra node, since the Index Target is fetched using hashmap / > strict equality. We need to fall back to linear search when the index target > can't be found (which should not happen often).
[jira] [Updated] (CASSANDRA-14104) Index target doesn't correctly recognise non-UTF column names after COMPACT STORAGE drop
[ https://issues.apache.org/jira/browse/CASSANDRA-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhaoYang updated CASSANDRA-14104: - Status: Ready to Commit (was: Patch Available) > Index target doesn't correctly recognise non-UTF column names after COMPACT > STORAGE drop > > > Key: CASSANDRA-14104 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14104 > Project: Cassandra > Issue Type: Bug >Reporter: Alex Petrov >Assignee: Alex Petrov > > Creating a compact storage table with dynamic composite type, then running > {{ALTER TABLE ... DROP COMPACT STORAGE}} and then restarting the node will > crash the Cassandra node, since the Index Target is fetched using hashmap / > strict equality. We need to fall back to linear search when the index target > can't be found (which should not happen often).
[jira] [Updated] (CASSANDRA-14105) Trivial log format error
[ https://issues.apache.org/jira/browse/CASSANDRA-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-14105: --- Resolution: Fixed Fix Version/s: 4.0 Status: Resolved (was: Ready to Commit) Thanks! Committed as {{e18a49a2399a3fe667c3c08d7350b7528614f0a6}} > Trivial log format error > > > Key: CASSANDRA-14105 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14105 > Project: Cassandra > Issue Type: Bug >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Trivial > Fix For: 4.0 > > > The same issue as CASSANDRA-13551 > The "{}" is not needed for: {{log.error(String, Throwable)}}
cassandra git commit: Fix trivial log format error
Repository: cassandra Updated Branches: refs/heads/trunk 8547f7471 -> e18a49a23 Fix trivial log format error Patch by Jay Zhuang; reviewed by Jeff Jirsa for CASSANDRA-14105 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e18a49a2 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e18a49a2 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e18a49a2 Branch: refs/heads/trunk Commit: e18a49a2399a3fe667c3c08d7350b7528614f0a6 Parents: 8547f74 Author: Jay ZhuangAuthored: Mon Dec 11 10:25:58 2017 -0800 Committer: Jeff Jirsa Committed: Mon Dec 11 17:39:31 2017 -0800 -- CHANGES.txt| 1 + .../org/apache/cassandra/db/compaction/CompactionManager.java | 2 +- .../org/apache/cassandra/db/marshal/DynamicCompositeType.java | 6 ++ src/java/org/apache/cassandra/service/StorageProxy.java| 4 ++-- 4 files changed, 6 insertions(+), 7 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e18a49a2/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 2017aff..acd4996 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 4.0 + * Fix trivial log format error (CASSANDRA-14015) * Allow sstabledump to do a json object per partition (CASSANDRA-13848) * Add option to optimise merkle tree comparison across replicas (CASSANDRA-3200) * Remove unused and deprecated methods from AbstractCompactionStrategy (CASSANDRA-14081) http://git-wip-us.apache.org/repos/asf/cassandra/blob/e18a49a2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java index c57b37a..cc4d078 100644 --- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java +++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java @@ -348,7 +348,7 @@ public class CompactionManager implements CompactionManagerMBean { Throwable fail = 
Throwables.close(null, transactions); if (fail != null) -logger.error("Failed to cleanup lifecycle transactions {}", fail); +logger.error("Failed to cleanup lifecycle transactions", fail); } } http://git-wip-us.apache.org/repos/asf/cassandra/blob/e18a49a2/src/java/org/apache/cassandra/db/marshal/DynamicCompositeType.java -- diff --git a/src/java/org/apache/cassandra/db/marshal/DynamicCompositeType.java b/src/java/org/apache/cassandra/db/marshal/DynamicCompositeType.java index c6eeecf..6fa7e87 100644 --- a/src/java/org/apache/cassandra/db/marshal/DynamicCompositeType.java +++ b/src/java/org/apache/cassandra/db/marshal/DynamicCompositeType.java @@ -203,15 +203,13 @@ public class DynamicCompositeType extends AbstractCompositeType { // ByteBufferUtil.string failed. // Log it here and we'll further throw an exception below since comparator == null -logger.error("Failed with [{}] when decoding the byte buffer in ByteBufferUtil.string()", - ce); +logger.error("Failed when decoding the byte buffer in ByteBufferUtil.string()", ce); } catch (Exception e) { // parse failed. // Log it here and we'll further throw an exception below since comparator == null -logger.error("Failed to parse value string \"{}\" with exception: [{}]", - valueStr, e); +logger.error("Failed to parse value string \"{}\" with exception:", valueStr, e); } } else http://git-wip-us.apache.org/repos/asf/cassandra/blob/e18a49a2/src/java/org/apache/cassandra/service/StorageProxy.java -- diff --git a/src/java/org/apache/cassandra/service/StorageProxy.java b/src/java/org/apache/cassandra/service/StorageProxy.java index aa5d0cc..be0cf0f 100644 --- a/src/java/org/apache/cassandra/service/StorageProxy.java +++ b/src/java/org/apache/cassandra/service/StorageProxy.java @@ -525,7 +525,7 @@ public class StorageProxy implements StorageProxyMBean } catch (Exception ex) { -logger.error("Failed paxos prepare locally : {}", ex); +logger.error("Failed paxos prepare locally", ex); }
[jira] [Updated] (CASSANDRA-14105) Trivial log format error
[ https://issues.apache.org/jira/browse/CASSANDRA-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jay Zhuang updated CASSANDRA-14105: --- Status: Ready to Commit (was: Patch Available) > Trivial log format error > > > Key: CASSANDRA-14105 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14105 > Project: Cassandra > Issue Type: Bug >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Trivial > > The same issue as CASSANDRA-13551 > The "{}" is not needed for: {{log.error(String, Throwable)}} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
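For context, the SLF4J convention this ticket relies on can be shown with a minimal self-contained sketch. This simulates, rather than calls, SLF4J's org.slf4j.helpers.MessageFormatter trailing-throwable rule: when the last argument is a Throwable and no "{}" placeholder is left to consume it, the logger prints the full stack trace; a trailing "{}" swallows the Throwable as a format argument, so only its toString() is logged and the trace is lost.

```java
// Minimal illustration of the SLF4J behavior behind CASSANDRA-14105.
// This is a simplified simulation of the trailing-throwable rule, not
// real SLF4J code: logger.error(String, Throwable) keeps the stack
// trace, while logger.error("... {}", throwable) consumes the Throwable
// as a format argument and drops the trace.
public class LogFormatDemo
{
    /** Mimics MessageFormatter: substitute "{}" left-to-right, then keep
     *  the stack trace only if a trailing Throwable was NOT consumed. */
    static String format(String pattern, Object... args)
    {
        StringBuilder out = new StringBuilder();
        int arg = 0, i = 0;
        while (i < pattern.length())
        {
            int idx = pattern.indexOf("{}", i);
            if (idx < 0 || arg >= args.length)
            {
                out.append(pattern.substring(i));
                break;
            }
            out.append(pattern, i, idx).append(args[arg++]);
            i = idx + 2;
        }
        // Unconsumed trailing Throwable -> full stack trace is logged
        if (arg < args.length && args[args.length - 1] instanceof Throwable)
            out.append("\n<full stack trace>");
        return out.toString();
    }

    public static void main(String[] unused)
    {
        Throwable ex = new RuntimeException("boom");
        // Before the patch: "{}" swallows the exception, trace is lost
        System.out.println(format("Failed paxos prepare locally : {}", ex));
        // After the patch: Throwable left for the logger, trace preserved
        System.out.println(format("Failed paxos prepare locally", ex));
    }
}
```

The same reasoning applies to the DynamicCompositeType fix in the committed patch, where the format string keeps a "{}" for `valueStr` but leaves the exception as the unconsumed last argument.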
[jira] [Updated] (CASSANDRA-14105) Trivial log format error
[ https://issues.apache.org/jira/browse/CASSANDRA-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jay Zhuang updated CASSANDRA-14105: --- Status: Patch Available (was: In Progress)
[jira] [Commented] (CASSANDRA-14105) Trivial log format error
[ https://issues.apache.org/jira/browse/CASSANDRA-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286895#comment-16286895 ] Jay Zhuang commented on CASSANDRA-14105: I created task for failed utest: CASSANDRA-14106. It's not related to this change.
[jira] [Comment Edited] (CASSANDRA-14105) Trivial log format error
[ https://issues.apache.org/jira/browse/CASSANDRA-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286895#comment-16286895 ] Jay Zhuang edited comment on CASSANDRA-14105 at 12/12/17 12:54 AM: --- I created a JIRA for failed utest: CASSANDRA-14106. It's not related to this change. was (Author: jay.zhuang): I created task for failed utest: CASSANDRA-14106. It's not related to this change.
[jira] [Created] (CASSANDRA-14106) utest failed: DistributionSequenceTest.setSeed() and simpleSequence()
Jay Zhuang created CASSANDRA-14106: -- Summary: utest failed: DistributionSequenceTest.setSeed() and simpleSequence() Key: CASSANDRA-14106 URL: https://issues.apache.org/jira/browse/CASSANDRA-14106 Project: Cassandra Issue Type: Bug Components: Testing Reporter: Jay Zhuang To reproduce: {noformat} $ ant stress-test -Dtest.name=DistributionSequenceTest {noformat} {noformat} stress-test: [junit] Testsuite: org.apache.cassandra.stress.generate.DistributionSequenceTest [junit] Testsuite: org.apache.cassandra.stress.generate.DistributionSequenceTest Tests run: 4, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec [junit] [junit] Testcase: simpleSequence(org.apache.cassandra.stress.generate.DistributionSequenceTest): FAILED [junit] expected:<5> but was:<4> [junit] junit.framework.AssertionFailedError: expected:<5> but was:<4> [junit] at org.apache.cassandra.stress.generate.DistributionSequenceTest.simpleSequence(DistributionSequenceTest.java:37) [junit] [junit] [junit] Testcase: setSeed(org.apache.cassandra.stress.generate.DistributionSequenceTest): FAILED [junit] expected:<5> but was:<4> [junit] junit.framework.AssertionFailedError: expected:<5> but was:<4> [junit] at org.apache.cassandra.stress.generate.DistributionSequenceTest.setSeed(DistributionSequenceTest.java:111) [junit] [junit] [junit] Test org.apache.cassandra.stress.generate.DistributionSequenceTest FAILED {noformat}
[jira] [Commented] (CASSANDRA-13936) RangeTombstoneTest (compressed) failure - assertTimes expected:<1000> but was:<999>
[ https://issues.apache.org/jira/browse/CASSANDRA-13936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286597#comment-16286597 ] Kurt Greaves commented on CASSANDRA-13936: -- I couldn't get this test to fail locally. Can't see a good reason why the fail would have occurred either. > RangeTombstoneTest (compressed) failure - assertTimes expected:<1000> but > was:<999> > --- > > Key: CASSANDRA-13936 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13936 > Project: Cassandra > Issue Type: Bug >Reporter: Jeff Jirsa > Labels: Testing > Fix For: 4.x > > > In circleci run > [here|https://circleci.com/gh/jeffjirsa/cassandra/367#tests/containers/2] : > {code} > [junit] Testsuite: org.apache.cassandra.db.RangeTombstoneTest-compression > [junit] Testsuite: org.apache.cassandra.db.RangeTombstoneTest-compression > Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.945 sec > [junit] > [junit] Testcase: > testTrackTimesRangeTombstoneWithData(org.apache.cassandra.db.RangeTombstoneTest)-compression: >FAILED > [junit] expected:<1000> but was:<999> > [junit] junit.framework.AssertionFailedError: expected:<1000> but > was:<999> > [junit] at > org.apache.cassandra.db.RangeTombstoneTest.assertTimes(RangeTombstoneTest.java:314) > [junit] at > org.apache.cassandra.db.RangeTombstoneTest.testTrackTimesRangeTombstoneWithData(RangeTombstoneTest.java:308) > [junit] > [junit] > [junit] Test org.apache.cassandra.db.RangeTombstoneTest FAILED > {code}
[jira] [Comment Edited] (CASSANDRA-14085) Excessive update of ReadLatency metric in digest calculation
[ https://issues.apache.org/jira/browse/CASSANDRA-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16273593#comment-16273593 ] Andrew Whang edited comment on CASSANDRA-14085 at 12/11/17 9:27 PM: https://github.com/whangsf/cassandra/commit/d6ec955da577da614de0c093625ae175158362c3 was (Author: whangsf): https://github.com/whangsf/cassandra/commit/2ae3589ce9eefd8699bbd4e29bf1c61a486d394e > Excessive update of ReadLatency metric in digest calculation > > > Key: CASSANDRA-14085 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14085 > Project: Cassandra > Issue Type: Bug > Components: Core, Metrics >Reporter: Andrew Whang >Assignee: Andrew Whang >Priority: Minor > Fix For: 3.0.x, 3.11.x, 4.x > > > We noticed an increase in read latency after upgrading to 3.x, specifically > for requests with CL>ONE. It turns out the read latency metric is being > doubly updated for digest calculations. This code > (https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/partitions/UnfilteredPartitionIterators.java#L243) > makes an improper copy of an iterator that's wrapped by MetricRecording, > whose onClose() records the latency of the execution.
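The failure mode described in CASSANDRA-14085 can be sketched as follows. This is an illustrative simplification, not Cassandra's actual classes: a read iterator is wrapped so that close() records a ReadLatency sample; if the already-wrapped iterator is then wrapped again (the "improper copy" taken for the digest path), each wrapper fires its own close hook and the same read is counted twice.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the CASSANDRA-14085 bug pattern (names are
// hypothetical, not Cassandra's real MetricRecording classes): a
// latency-recording close hook fires once per wrapper, so wrapping a
// metrics-wrapped iterator a second time updates the latency metric
// twice for a single read.
public class DoubleCloseDemo
{
    static final AtomicInteger latencyUpdates = new AtomicInteger();

    // Stand-in for a closeable partition iterator
    interface CloseableIterator extends AutoCloseable
    {
        @Override void close();
    }

    /** Wraps an iterator so that close() also records a latency sample. */
    static CloseableIterator withMetrics(CloseableIterator inner)
    {
        return () -> { inner.close(); latencyUpdates.incrementAndGet(); };
    }

    public static void main(String[] args)
    {
        CloseableIterator base = () -> {};

        // Correct: one metrics wrapper, one latency sample on close
        withMetrics(base).close();

        // Buggy "copy": re-wrapping the already-wrapped iterator means the
        // digest path and the data path each record the same read
        CloseableIterator copy = withMetrics(withMetrics(base));
        copy.close();

        // Three samples were recorded for what should be two reads
        System.out.println("latency samples recorded: " + latencyUpdates.get());
    }
}
```

Closing `copy` fires both nested hooks, so the counter ends at 3 instead of 2 — the extra sample also skews the latency histogram low, since the inner (digest) wrapper closes faster than a full read.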
[jira] [Updated] (CASSANDRA-14084) Disks can be imbalanced during replace of same address when using JBOD
[ https://issues.apache.org/jira/browse/CASSANDRA-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-14084: Resolution: Fixed Fix Version/s: 4.0 3.11.2 Status: Resolved (was: Ready to Commit) Committed as {{50e6e721b2a81da7f11f60a2fa405fd46e5415d4}} to cassandra-3.11 and merged up to master, and dtest as {{3d2a6cc738d87d30cca8d747305a5899ccf3712d}}. Thanks for the review! > Disks can be imbalanced during replace of same address when using JBOD > -- > > Key: CASSANDRA-14084 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14084 > Project: Cassandra > Issue Type: Bug >Reporter: Paulo Motta >Assignee: Paulo Motta > Fix For: 3.11.2, 4.0 > > Attachments: dtest14084.png > > > While investigating CASSANDRA-14083, I noticed that [we use the pending > ranges to calculate the disk > boundaries|https://github.com/apache/cassandra/blob/41904684bb5509595d11f008d0851c7ce625e020/src/java/org/apache/cassandra/db/DiskBoundaryManager.java#L91] > when the node is bootstrapping. > The problem is that when the node is replacing a node with the same address, > it [sets itself as normal > locally|https://github.com/apache/cassandra/blob/41904684bb5509595d11f008d0851c7ce625e020/src/java/org/apache/cassandra/service/StorageService.java#L1449] > (for other unrelated reasons), so the local ranges will be null and > consequently the disk boundaries will be null. This will cause the sstables > to be randomly spread across disks potentially causing imbalance.
[jira] [Commented] (CASSANDRA-13948) Reload compaction strategies when JBOD disk boundary changes
[ https://issues.apache.org/jira/browse/CASSANDRA-13948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286496#comment-16286496 ] Paulo Motta commented on CASSANDRA-13948: - Committed dtest as {{debe3780a4694c978f2516e565e071782dc7b2c8}}. Thanks! > Reload compaction strategies when JBOD disk boundary changes > > > Key: CASSANDRA-13948 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13948 > Project: Cassandra > Issue Type: Bug > Components: Compaction >Reporter: Paulo Motta >Assignee: Paulo Motta > Fix For: 3.11.2, 4.0 > > Attachments: 13948dtest.png, 13948testall.png, 3.11-13948-dtest.png, > 3.11-13948-testall.png, debug.log, dtest13948.png, dtest2.png, > threaddump-cleanup.txt, threaddump.txt, trace.log, trunk-13948-dtest.png, > trunk-13948-testall.png > > > The thread dump below shows a race between an sstable replacement by the > {{IndexSummaryRedistribution}} and > {{AbstractCompactionTask.getNextBackgroundTask}}: > {noformat} > Thread 94580: (state = BLOCKED) > - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information > may be imprecise) > - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, > line=175 (Compiled frame) > - > java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt() > @bci=1, line=836 (Compiled frame) > - > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(java.util.concurrent.locks.AbstractQueuedSynchronizer$Node, > int) @bci=67, line=870 (Compiled frame) > - java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(int) > @bci=17, line=1199 (Compiled frame) > - java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock() @bci=5, > line=943 (Compiled frame) > - > org.apache.cassandra.db.compaction.CompactionStrategyManager.handleListChangedNotification(java.lang.Iterable, > java.lang.Iterable) @bci=359, line=483 (Interpreted frame) > - > 
org.apache.cassandra.db.compaction.CompactionStrategyManager.handleNotification(org.apache.cassandra.notifications.INotification, > java.lang.Object) @bci=53, line=555 (Interpreted frame) > - > org.apache.cassandra.db.lifecycle.Tracker.notifySSTablesChanged(java.util.Collection, > java.util.Collection, org.apache.cassandra.db.compaction.OperationType, > java.lang.Throwable) @bci=50, line=409 (Interpreted frame) > - > org.apache.cassandra.db.lifecycle.LifecycleTransaction.doCommit(java.lang.Throwable) > @bci=157, line=227 (Interpreted frame) > - > org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.commit(java.lang.Throwable) > @bci=61, line=116 (Compiled frame) > - > org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.commit() > @bci=2, line=200 (Interpreted frame) > - > org.apache.cassandra.utils.concurrent.Transactional$AbstractTransactional.finish() > @bci=5, line=185 (Interpreted frame) > - > org.apache.cassandra.io.sstable.IndexSummaryRedistribution.redistributeSummaries() > @bci=559, line=130 (Interpreted frame) > - > org.apache.cassandra.db.compaction.CompactionManager.runIndexSummaryRedistribution(org.apache.cassandra.io.sstable.IndexSummaryRedistribution) > @bci=9, line=1420 (Interpreted frame) > - > org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries(org.apache.cassandra.io.sstable.IndexSummaryRedistribution) > @bci=4, line=250 (Interpreted frame) > - > org.apache.cassandra.io.sstable.IndexSummaryManager.redistributeSummaries() > @bci=30, line=228 (Interpreted frame) > - org.apache.cassandra.io.sstable.IndexSummaryManager$1.runMayThrow() > @bci=4, line=125 (Interpreted frame) > - org.apache.cassandra.utils.WrappedRunnable.run() @bci=1, line=28 > (Interpreted frame) > - > org.apache.cassandra.concurrent.DebuggableScheduledThreadPoolExecutor$UncomplainingRunnable.run() > @bci=4, line=118 (Compiled frame) > - java.util.concurrent.Executors$RunnableAdapter.call() @bci=4, line=511 > (Compiled 
frame) > - java.util.concurrent.FutureTask.runAndReset() @bci=47, line=308 (Compiled > frame) > - > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask) > @bci=1, line=180 (Compiled frame) > - java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run() > @bci=37, line=294 (Compiled frame) > - > java.util.concurrent.ThreadPoolExecutor.runWorker(java.util.concurrent.ThreadPoolExecutor$Worker) > @bci=95, line=1149 (Compiled frame) > - java.util.concurrent.ThreadPoolExecutor$Worker.run() @bci=5, line=624 > (Interpreted frame) > - > org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(java.lang.Runnable) > @bci=1, line=81 (Interpreted frame) > -
[4/4] cassandra git commit: Ninja: do not submit compactions after flush of empty memtable (reinstate pre-#14081 behavior)
Ninja: do not submit compactions after flush of empty memtable (reinstate pre-#14081 behavior) Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8547f747 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8547f747 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8547f747 Branch: refs/heads/trunk Commit: 8547f74711c14591bd796e1209712a0e4c4b6623 Parents: f8801ca Author: Paulo MottaAuthored: Tue Dec 12 07:02:25 2017 +1100 Committer: Paulo Motta Committed: Tue Dec 12 07:18:49 2017 +1100 -- src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/8547f747/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java index 872cd80..1f7ba87 100644 --- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java +++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java @@ -1605,7 +1605,8 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean void replaceFlushed(Memtable memtable, Collection sstables) { data.replaceFlushed(memtable, sstables); -CompactionManager.instance.submitBackground(this); +if (sstables != null && !sstables.isEmpty()) +CompactionManager.instance.submitBackground(this); } public boolean isValid()
[2/4] cassandra git commit: Fix imbalanced disks when replacing node with same address with JBOD
Fix imbalanced disks when replacing node with same address with JBOD Patch by Paulo Motta; Reviewed by Marcus Eriksson for CASSANDRA-14084 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/50e6e721 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/50e6e721 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/50e6e721 Branch: refs/heads/trunk Commit: 50e6e721b2a81da7f11f60a2fa405fd46e5415d4 Parents: 817f3c2 Author: Paulo MottaAuthored: Fri Dec 1 03:39:14 2017 +1100 Committer: Paulo Motta Committed: Tue Dec 12 07:17:49 2017 +1100 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/DiskBoundaryManager.java | 3 ++- src/java/org/apache/cassandra/service/StorageService.java | 3 ++- 3 files changed, 5 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/50e6e721/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 5faede2..6e9a0bd 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.11.2 + * Fix imbalanced disks when replacing node with same address with JBOD (CASSANDRA-14084) * Reload compaction strategies when disk boundaries are invalidated (CASSANDRA-13948) * Remove OpenJDK log warning (CASSANDRA-13916) * Prevent compaction strategies from looping indefinitely (CASSANDRA-14079) http://git-wip-us.apache.org/repos/asf/cassandra/blob/50e6e721/src/java/org/apache/cassandra/db/DiskBoundaryManager.java -- diff --git a/src/java/org/apache/cassandra/db/DiskBoundaryManager.java b/src/java/org/apache/cassandra/db/DiskBoundaryManager.java index 14d3983..ad6a67e 100644 --- a/src/java/org/apache/cassandra/db/DiskBoundaryManager.java +++ b/src/java/org/apache/cassandra/db/DiskBoundaryManager.java @@ -75,7 +75,8 @@ public class DiskBoundaryManager { tmd = StorageService.instance.getTokenMetadata(); ringVersion = tmd.getRingVersion(); -if (StorageService.instance.isBootstrapMode()) +if (StorageService.instance.isBootstrapMode() +&& 
!StorageService.isReplacingSameAddress()) // When replacing same address, the node marks itself as UN locally { localRanges = tmd.getPendingRanges(cfs.keyspace.getName(), FBUtilities.getBroadcastAddress()); } http://git-wip-us.apache.org/repos/asf/cassandra/blob/50e6e721/src/java/org/apache/cassandra/service/StorageService.java -- diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java index fafe8e8..15027b2 100644 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@ -1026,7 +1026,8 @@ public class StorageService extends NotificationBroadcasterSupport implements IE public static boolean isReplacingSameAddress() { -return DatabaseDescriptor.getReplaceAddress().equals(FBUtilities.getBroadcastAddress()); +InetAddress replaceAddress = DatabaseDescriptor.getReplaceAddress(); +return replaceAddress != null && replaceAddress.equals(FBUtilities.getBroadcastAddress()); } public void gossipSnitchInfo()
[1/4] cassandra git commit: Fix imbalanced disks when replacing node with same address with JBOD
Repository: cassandra Updated Branches: refs/heads/cassandra-3.11 817f3c282 -> 50e6e721b refs/heads/trunk 1cb050922 -> 8547f7471 Fix imbalanced disks when replacing node with same address with JBOD Patch by Paulo Motta; Reviewed by Marcus Eriksson for CASSANDRA-14084 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/50e6e721 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/50e6e721 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/50e6e721 Branch: refs/heads/cassandra-3.11 Commit: 50e6e721b2a81da7f11f60a2fa405fd46e5415d4 Parents: 817f3c2 Author: Paulo MottaAuthored: Fri Dec 1 03:39:14 2017 +1100 Committer: Paulo Motta Committed: Tue Dec 12 07:17:49 2017 +1100 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/DiskBoundaryManager.java | 3 ++- src/java/org/apache/cassandra/service/StorageService.java | 3 ++- 3 files changed, 5 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/50e6e721/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 5faede2..6e9a0bd 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.11.2 + * Fix imbalanced disks when replacing node with same address with JBOD (CASSANDRA-14084) * Reload compaction strategies when disk boundaries are invalidated (CASSANDRA-13948) * Remove OpenJDK log warning (CASSANDRA-13916) * Prevent compaction strategies from looping indefinitely (CASSANDRA-14079) http://git-wip-us.apache.org/repos/asf/cassandra/blob/50e6e721/src/java/org/apache/cassandra/db/DiskBoundaryManager.java -- diff --git a/src/java/org/apache/cassandra/db/DiskBoundaryManager.java b/src/java/org/apache/cassandra/db/DiskBoundaryManager.java index 14d3983..ad6a67e 100644 --- a/src/java/org/apache/cassandra/db/DiskBoundaryManager.java +++ b/src/java/org/apache/cassandra/db/DiskBoundaryManager.java @@ -75,7 +75,8 @@ public class DiskBoundaryManager { tmd = StorageService.instance.getTokenMetadata(); 
ringVersion = tmd.getRingVersion(); -if (StorageService.instance.isBootstrapMode()) +if (StorageService.instance.isBootstrapMode() +&& !StorageService.isReplacingSameAddress()) // When replacing same address, the node marks itself as UN locally { localRanges = tmd.getPendingRanges(cfs.keyspace.getName(), FBUtilities.getBroadcastAddress()); } http://git-wip-us.apache.org/repos/asf/cassandra/blob/50e6e721/src/java/org/apache/cassandra/service/StorageService.java -- diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java index fafe8e8..15027b2 100644 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@ -1026,7 +1026,8 @@ public class StorageService extends NotificationBroadcasterSupport implements IE public static boolean isReplacingSameAddress() { -return DatabaseDescriptor.getReplaceAddress().equals(FBUtilities.getBroadcastAddress()); +InetAddress replaceAddress = DatabaseDescriptor.getReplaceAddress(); +return replaceAddress != null && replaceAddress.equals(FBUtilities.getBroadcastAddress()); } public void gossipSnitchInfo()
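The second hunk of this patch is a null guard: the old `getReplaceAddress().equals(...)` threw NullPointerException whenever no replace address was configured (the common, non-replacing case). A small stand-alone sketch of that guard, using String stand-ins for InetAddress and hypothetical method names, also shows the equivalent java.util.Objects.equals form:

```java
import java.util.Objects;

// Sketch of the null-safety fix in StorageService.isReplacingSameAddress()
// (String stand-ins for InetAddress; not Cassandra's actual code).
public class NullSafeEquals
{
    // Patch form: explicit null guard before equals()
    static boolean isReplacingSameAddress(String replaceAddress, String broadcastAddress)
    {
        return replaceAddress != null && replaceAddress.equals(broadcastAddress);
    }

    // Equivalent via Objects.equals, which is null-safe on both sides.
    // Note Objects.equals(null, null) is true, so the extra null check is
    // kept to preserve the patch's semantics: no replace address => false.
    static boolean isReplacingSameAddressAlt(String replaceAddress, String broadcastAddress)
    {
        return replaceAddress != null && Objects.equals(replaceAddress, broadcastAddress);
    }

    public static void main(String[] args)
    {
        // The pre-patch code would have thrown NullPointerException here
        assert !isReplacingSameAddress(null, "10.0.0.1");
        assert isReplacingSameAddress("10.0.0.1", "10.0.0.1");
        assert isReplacingSameAddress("10.0.0.2", "10.0.0.1")
            == isReplacingSameAddressAlt("10.0.0.2", "10.0.0.1");
    }
}
```

The explicit-guard form in the patch keeps the diff minimal; the Objects.equals variant is just the idiomatic one-liner for the same check.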
[3/4] cassandra git commit: Merge branch 'cassandra-3.11' into trunk
Merge branch 'cassandra-3.11' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f8801ca6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f8801ca6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f8801ca6 Branch: refs/heads/trunk Commit: f8801ca6a56400d40d0d1f54bdb5085862fa50be Parents: 1cb0509 50e6e72 Author: Paulo MottaAuthored: Tue Dec 12 07:17:58 2017 +1100 Committer: Paulo Motta Committed: Tue Dec 12 07:17:58 2017 +1100 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/DiskBoundaryManager.java | 3 ++- src/java/org/apache/cassandra/service/StorageService.java | 3 ++- 3 files changed, 5 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f8801ca6/CHANGES.txt -- diff --cc CHANGES.txt index eb37da6,6e9a0bd..2017aff --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,176 -1,5 +1,177 @@@ +4.0 + * Allow sstabledump to do a json object per partition (CASSANDRA-13848) + * Add option to optimise merkle tree comparison across replicas (CASSANDRA-3200) + * Remove unused and deprecated methods from AbstractCompactionStrategy (CASSANDRA-14081) + * Fix Distribution.average in cassandra-stress (CASSANDRA-14090) + * Support a means of logging all queries as they were invoked (CASSANDRA-13983) + * Presize collections (CASSANDRA-13760) + * Add GroupCommitLogService (CASSANDRA-13530) + * Parallelize initial materialized view build (CASSANDRA-12245) + * Fix flaky SecondaryIndexManagerTest.assert[Not]MarkedAsBuilt (CASSANDRA-13965) + * Make LWTs send resultset metadata on every request (CASSANDRA-13992) + * Fix flaky indexWithFailedInitializationIsNotQueryableAfterPartialRebuild (CASSANDRA-13963) + * Introduce leaf-only iterator (CASSANDRA-9988) + * Upgrade Guava to 23.3 and Airline to 0.8 (CASSANDRA-13997) + * Allow only one concurrent call to StatusLogger (CASSANDRA-12182) + * Refactoring to specialised functional interfaces (CASSANDRA-13982) 
+ * Speculative retry should allow more friendly params (CASSANDRA-13876) + * Throw exception if we send/receive repair messages to incompatible nodes (CASSANDRA-13944) + * Replace usages of MessageDigest with Guava's Hasher (CASSANDRA-13291) + * Add nodetool cmd to print hinted handoff window (CASSANDRA-13728) + * Fix some alerts raised by static analysis (CASSANDRA-13799) + * Checksum sstable metadata (CASSANDRA-13321, CASSANDRA-13593) + * Add result set metadata to prepared statement MD5 hash calculation (CASSANDRA-10786) + * Refactor GcCompactionTest to avoid boxing (CASSANDRA-13941) + * Expose recent histograms in JmxHistograms (CASSANDRA-13642) + * Fix buffer length comparison when decompressing in netty-based streaming (CASSANDRA-13899) + * Properly close StreamCompressionInputStream to release any ByteBuf (CASSANDRA-13906) + * Add SERIAL and LOCAL_SERIAL support for cassandra-stress (CASSANDRA-13925) + * LCS needlessly checks for L0 STCS candidates multiple times (CASSANDRA-12961) + * Correctly close netty channels when a stream session ends (CASSANDRA-13905) + * Update lz4 to 1.4.0 (CASSANDRA-13741) + * Optimize Paxos prepare and propose stage for local requests (CASSANDRA-13862) + * Throttle base partitions during MV repair streaming to prevent OOM (CASSANDRA-13299) + * Use compaction threshold for STCS in L0 (CASSANDRA-13861) + * Fix problem with min_compress_ratio: 1 and disallow ratio < 1 (CASSANDRA-13703) + * Add extra information to SASI timeout exception (CASSANDRA-13677) + * Add incremental repair support for --hosts, --force, and subrange repair (CASSANDRA-13818) + * Rework CompactionStrategyManager.getScanners synchronization (CASSANDRA-13786) + * Add additional unit tests for batch behavior, TTLs, Timestamps (CASSANDRA-13846) + * Add keyspace and table name in schema validation exception (CASSANDRA-13845) + * Emit metrics whenever we hit tombstone failures and warn thresholds (CASSANDRA-13771) + * Make netty EventLoopGroups daemon threads 
(CASSANDRA-13837) + * Race condition when closing stream sessions (CASSANDRA-13852) + * NettyFactoryTest is failing in trunk on macOS (CASSANDRA-13831) + * Allow changing log levels via nodetool for related classes (CASSANDRA-12696) + * Add stress profile yaml with LWT (CASSANDRA-7960) + * Reduce memory copies and object creations when acting on ByteBufs (CASSANDRA-13789) + * Simplify mx4j configuration (Cassandra-13578) + * Fix trigger example on 4.0 (CASSANDRA-13796) + * Force minumum timeout value (CASSANDRA-9375) + * Use netty for streaming (CASSANDRA-12229) + * Use netty for internode messaging (CASSANDRA-8457) + * Add
[1/2] cassandra-dtest git commit: Add test for CASSANDRA-13948
Repository: cassandra-dtest
Updated Branches:
  refs/heads/master 59058a001 -> 3d2a6cc73


Add test for CASSANDRA-13948

Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/debe3780
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/debe3780
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/debe3780

Branch: refs/heads/master
Commit: debe3780a4694c978f2516e565e071782dc7b2c8
Parents: 59058a0
Author: Paulo Motta
Authored: Tue Oct 31 14:28:48 2017 +1100
Committer: Paulo Motta
Committed: Tue Dec 12 07:21:54 2017 +1100

--
 disk_balance_test.py | 179 +-
 1 file changed, 178 insertions(+), 1 deletion(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/debe3780/disk_balance_test.py
--
diff --git a/disk_balance_test.py b/disk_balance_test.py
index 1fb294b..da2930a 100644
--- a/disk_balance_test.py
+++ b/disk_balance_test.py
@@ -1,13 +1,15 @@
 import os
 import os.path
+import re

-from dtest import DISABLE_VNODES, Tester, create_ks
+from dtest import DISABLE_VNODES, Tester, create_ks, debug
 from tools.assertions import assert_almost_equal
 from tools.data import create_c1c2_table, insert_c1c2, query_c1c2
 from tools.decorators import since
 from tools.jmxutils import (JolokiaAgent, make_mbean,
                             remove_perf_disable_shared_mem)
 from tools.misc import new_node
+from compaction_test import grep_sstables_in_each_level


 @since('3.2')
@@ -114,3 +116,178 @@ class TestDiskBalance(Tester):
                 sum = sum + os.path.getsize(sstable)
             sums.append(sum)
         assert_almost_equal(*sums, error=0.1, error_message=node.name)
+
+    @since('3.10')
+    def disk_balance_after_boundary_change_stcs_test(self):
+        """
+        @jira_ticket CASSANDRA-13948
+        """
+        self._disk_balance_after_boundary_change_test(lcs=False)
+
+    @since('3.10')
+    def disk_balance_after_boundary_change_lcs_test(self):
+        """
+        @jira_ticket CASSANDRA-13948
+        """
+        self._disk_balance_after_boundary_change_test(lcs=True)
+
+    def _disk_balance_after_boundary_change_test(self, lcs):
+        """
+        @jira_ticket CASSANDRA-13948
+
+        - Creates a 1 node cluster with 5 disks and insert data with compaction disabled
+        - Bootstrap a node2 to make disk boundary changes on node1
+        - Enable compaction on node1 and check disks are balanced
+        - Decommission node1 to make disk boundary changes on node2
+        - Enable compaction on node2 and check disks are balanced
+        """
+        cluster = self.cluster
+        if not DISABLE_VNODES:
+            cluster.set_configuration_options(values={'num_tokens': 1024})
+        num_disks = 5
+        cluster.set_datadir_count(num_disks)
+        cluster.set_configuration_options(values={'concurrent_compactors': num_disks})
+
+        debug("Starting node1 with {} data dirs and concurrent_compactors".format(num_disks))
+        cluster.populate(1).start(wait_for_binary_proto=True)
+        [node1] = cluster.nodelist()
+
+        session = self.patient_cql_connection(node1)
+        # reduce system_distributed RF to 1 so we don't require forceful decommission
+        session.execute("ALTER KEYSPACE system_distributed WITH REPLICATION = {'class':'SimpleStrategy', 'replication_factor':'1'};")
+        session.execute("ALTER KEYSPACE system_traces WITH REPLICATION = {'class':'SimpleStrategy', 'replication_factor':'1'};")
+
+        num_flushes = 10
+        keys_per_flush = 1
+        keys_to_write = num_flushes * keys_per_flush
+
+        compaction_opts = "LeveledCompactionStrategy,sstable_size_in_mb=1" if lcs else "SizeTieredCompactionStrategy"
+        debug("Writing {} keys in {} flushes (compaction_opts={})".format(keys_to_write, num_flushes, compaction_opts))
+        total_keys = num_flushes * keys_per_flush
+        current_keys = 0
+        while current_keys < total_keys:
+            start_key = current_keys + 1
+            end_key = current_keys + keys_per_flush
+            debug("Writing keys {}..{} and flushing".format(start_key, end_key))
+            node1.stress(['write', 'n={}'.format(keys_per_flush), "no-warmup", "cl=ALL", "-pop",
+                          "seq={}..{}".format(start_key, end_key), "-rate", "threads=1", "-schema", "replication(factor=1)",
+                          "compaction(strategy={},enabled=false)".format(compaction_opts)])
+            node1.nodetool('flush keyspace1 standard1')
+            current_keys = end_key
+
+        # Add a new node, so disk boundaries will change
+
[2/2] cassandra-dtest git commit: Add test for disk balance during replace
Add test for disk balance during replace

Project: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/commit/3d2a6cc7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/tree/3d2a6cc7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra-dtest/diff/3d2a6cc7

Branch: refs/heads/master
Commit: 3d2a6cc738d87d30cca8d747305a5899ccf3712d
Parents: debe378
Author: Paulo Motta
Authored: Wed Nov 29 08:34:40 2017 +1100
Committer: Paulo Motta
Committed: Tue Dec 12 07:21:59 2017 +1100

--
 disk_balance_test.py | 41 +
 1 file changed, 41 insertions(+)
--

http://git-wip-us.apache.org/repos/asf/cassandra-dtest/blob/3d2a6cc7/disk_balance_test.py
--
diff --git a/disk_balance_test.py b/disk_balance_test.py
index da2930a..9eed377 100644
--- a/disk_balance_test.py
+++ b/disk_balance_test.py
@@ -2,6 +2,7 @@ import os
 import os.path
 import re

+from ccmlib.node import Node
 from dtest import DISABLE_VNODES, Tester, create_ks, debug
 from tools.assertions import assert_almost_equal
 from tools.data import create_c1c2_table, insert_c1c2, query_c1c2
@@ -46,6 +47,46 @@ class TestDiskBalance(Tester):
         node5.start(wait_for_binary_proto=True)
         self.assert_balanced(node5)

+    def disk_balance_replace_same_address_test(self):
+        self._test_disk_balance_replace(same_address=True)
+
+    def disk_balance_replace_different_address_test(self):
+        self._test_disk_balance_replace(same_address=False)
+
+    def _test_disk_balance_replace(self, same_address):
+        debug("Creating cluster")
+        cluster = self.cluster
+        if not DISABLE_VNODES:
+            cluster.set_configuration_options(values={'num_tokens': 256})
+        # apparently we have legitimate errors in the log when bootstrapping (see bootstrap_test.py)
+        self.allow_log_errors = True
+        cluster.populate(4).start(wait_for_binary_proto=True)
+        node1 = cluster.nodes['node1']
+
+        debug("Populating")
+        node1.stress(['write', 'n=50k', 'no-warmup', '-rate', 'threads=100', '-schema', 'replication(factor=3)',
+                      'compaction(strategy=SizeTieredCompactionStrategy,enabled=false)'])
+        cluster.flush()
+
+        debug("Stopping and removing node2")
+        node2 = cluster.nodes['node2']
+        node2.stop(gently=False)
+        self.cluster.remove(node2)
+
+        node5_address = node2.address() if same_address else '127.0.0.5'
+        debug("Starting replacement node")
+        node5 = Node('node5', cluster=self.cluster, auto_bootstrap=True,
+                     thrift_interface=None, storage_interface=(node5_address, 7000),
+                     jmx_port='7500', remote_debug_port='0', initial_token=None,
+                     binary_interface=(node5_address, 9042))
+        self.cluster.add(node5, False)
+        node5.start(jvm_args=["-Dcassandra.replace_address_first_boot={}".format(node2.address())],
+                    wait_for_binary_proto=True,
+                    wait_other_notice=True)
+
+        debug("Checking replacement node is balanced")
+        self.assert_balanced(node5)
+
     def disk_balance_decommission_test(self):
         cluster = self.cluster
         if not DISABLE_VNODES:

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14008) RTs at index boundaries in 2.x sstables can create unexpected CQL row in 3.x
[ https://issues.apache.org/jira/browse/CASSANDRA-14008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286449#comment-16286449 ] Jeff Jirsa commented on CASSANDRA-14008: [~iamaleksey] can you check both branches for a new regression test please? Exact commits are https://github.com/jeffjirsa/cassandra/commit/eff1f18fcd80b4860bb5812142d196e94b6ae2a1 and https://github.com/jeffjirsa/cassandra/commit/a85befc4fc3c0e44f6751a3e6472afecaefc4b77 > RTs at index boundaries in 2.x sstables can create unexpected CQL row in 3.x > > > Key: CASSANDRA-14008 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14008 > Project: Cassandra > Issue Type: Bug > Components: Local Write-Read Paths >Reporter: Jeff Jirsa >Assignee: Jeff Jirsa > Labels: correctness > Fix For: 3.0.16, 3.11.2 > > > In 2.1/2.2, it is possible for a range tombstone that isn't a row deletion > and isn't a complex deletion to appear between two cells with the same > clustering. The 8099 legacy code incorrectly treats the two (non-RT) cells as > two distinct CQL rows, despite having the same clustering prefix. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14105) Trivial log format error
[ https://issues.apache.org/jira/browse/CASSANDRA-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286459#comment-16286459 ] Jeff Jirsa commented on CASSANDRA-14105: +1 > Trivial log format error > > > Key: CASSANDRA-14105 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14105 > Project: Cassandra > Issue Type: Bug >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Trivial > > The same issue as CASSANDRA-13551 > The "{}" is not needed for: {{log.error(String, Throwable)}} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
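For anyone reading along, the point of the fix is that slf4j's two-argument {{log.error(String, Throwable)}} overload takes the message verbatim (no placeholder substitution happens), so a stray "{}" ends up literally in the log line. A minimal self-contained mock, illustrative only and not the real slf4j API, of the overload behaviour:

```java
// Mock logger illustrating the ticket's point: the error(String, Throwable)
// overload logs the message verbatim, so a "{}" placeholder is never
// substituted; substitution only happens in the error(String, Object) overload.
public class LogFormatDemo {
    // Chosen by the compiler whenever the second argument is a Throwable
    public static String error(String msg, Throwable t) {
        return msg + " | " + t.getMessage();
    }

    // "{}" substitution happens only here
    public static String error(String format, Object arg) {
        return format.replace("{}", String.valueOf(arg));
    }

    public static void main(String[] args) {
        Throwable boom = new RuntimeException("disk full");
        // Wrong: the Throwable overload wins, leaving "{}" in the output
        System.out.println(error("Flush failed: {}", boom)); // Flush failed: {} | disk full
        // Right: no placeholder; the throwable is passed as the second argument
        System.out.println(error("Flush failed", boom));     // Flush failed | disk full
    }
}
```

Because {{Throwable}} is more specific than {{Object}}, overload resolution picks the first method whenever the argument is an exception, which is exactly why the "{}" in the offending call sites was dead weight.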
[jira] [Updated] (CASSANDRA-14105) Trivial log format error
[ https://issues.apache.org/jira/browse/CASSANDRA-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-14105: --- Reviewer: Jeff Jirsa > Trivial log format error > > > Key: CASSANDRA-14105 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14105 > Project: Cassandra > Issue Type: Bug >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Trivial > > The same issue as CASSANDRA-13551 > The "{}" is not needed for: {{log.error(String, Throwable)}} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14104) Index target doesn't correctly recognise non-UTF column names after COMPACT STORAGE drop
[ https://issues.apache.org/jira/browse/CASSANDRA-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-14104: Status: Patch Available (was: Open) > Index target doesn't correctly recognise non-UTF column names after COMPACT > STORAGE drop > > > Key: CASSANDRA-14104 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14104 > Project: Cassandra > Issue Type: Bug >Reporter: Alex Petrov >Assignee: Alex Petrov > > Creating a compact storage table with dynamic composite type, then running > {{ALTER TABLE ... DROP COMPACT STORAGE}} and then restarting the node will > crash the Cassandra node, since the Index Target is fetched using hashmap / > strict equality. We need to fall back to linear search when the index target > can't be found (which should not be happening often).
[jira] [Comment Edited] (CASSANDRA-14104) Index target doesn't correctly recognise non-UTF column names after COMPACT STORAGE drop
[ https://issues.apache.org/jira/browse/CASSANDRA-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286383#comment-16286383 ] Alex Petrov edited comment on CASSANDRA-14104 at 12/11/17 6:29 PM: --- Patch: |[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...ifesdjeen:14104-3.0]|[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...ifesdjeen:14104-3.11]|[trunk|https://github.com/apache/cassandra/compare/trunk...ifesdjeen:14104-trunk]| was (Author: ifesdjeen): Patch: |[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...ifesdjeen:14104-3.0]|[3.0|https://github.com/apache/cassandra/compare/cassandra-3.11...ifesdjeen:14104-3.11]|[3.0|https://github.com/apache/cassandra/compare/trunk...ifesdjeen:14104-trunk]| > Index target doesn't correctly recognise non-UTF column names after COMPACT > STORAGE drop > > > Key: CASSANDRA-14104 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14104 > Project: Cassandra > Issue Type: Bug >Reporter: Alex Petrov >Assignee: Alex Petrov > > Creating a compact storage table with dynamic composite type, then running > {{ALTER TABLE ... DROP COMPACT STORAGE}} and then restarting the node will > crash the Cassandra node, since the Index Target is fetched using hashmap / > strict equality. We need to fall back to linear search when the index target > can't be found (which should not be happening often).
[jira] [Commented] (CASSANDRA-14104) Index target doesn't correctly recognise non-UTF column names after COMPACT STORAGE drop
[ https://issues.apache.org/jira/browse/CASSANDRA-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286383#comment-16286383 ] Alex Petrov commented on CASSANDRA-14104: - Patch: |[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...ifesdjeen:14104-3.0]|[3.0|https://github.com/apache/cassandra/compare/cassandra-3.11...ifesdjeen:14104-3.11]|[3.0|https://github.com/apache/cassandra/compare/trunk...ifesdjeen:14104-trunk]| > Index target doesn't correctly recognise non-UTF column names after COMPACT > STORAGE drop > > > Key: CASSANDRA-14104 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14104 > Project: Cassandra > Issue Type: Bug >Reporter: Alex Petrov >Assignee: Alex Petrov > > Creating a compact storage table with dynamic composite type, then running > {{ALTER TABLE ... DROP COMPACT STORAGE}} and then restarting the node will > crash the Cassandra node, since the Index Target is fetched using hashmap / > strict equality. We need to fall back to linear search when the index target > can't be found (which should not be happening often).
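A rough sketch of the fallback strategy the ticket describes (the names and the looser comparison below are illustrative stand-ins, not Cassandra's actual index classes): try the strict-equality hashmap first so the common path stays O(1), and only scan linearly when that misses:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of "fall back to linear search when the index target
// can't be found": fast exact-match lookup first, then a rare linear scan
// with a looser comparison that tolerates a differently-serialised name.
public class TargetLookup {
    public static String findColumn(Map<String, String> byExactName, List<String> allColumns, String target) {
        String hit = byExactName.get(target);      // fast path: strict equality via hashmap
        if (hit != null)
            return hit;
        for (String column : allColumns)           // rare fallback path
            if (column.equalsIgnoreCase(target))   // stand-in for the real, looser comparison
                return column;
        return null;                               // genuinely unknown target
    }

    public static void main(String[] args) {
        Map<String, String> byName = Map.of("pk", "pk");
        List<String> cols = List.of("pk", "C0");
        System.out.println(findColumn(byName, cols, "pk")); // hashmap hit
        System.out.println(findColumn(byName, cols, "c0")); // found via linear fallback
    }
}
```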
[jira] [Commented] (CASSANDRA-14061) trunk eclipse-warnings
[ https://issues.apache.org/jira/browse/CASSANDRA-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286369#comment-16286369 ] Jay Zhuang commented on CASSANDRA-14061: [~spo...@gmail.com] For adding {{iterator.close();}} I don't know. Anyway, it's doing nothing: [AbstractIterator.java:86|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/AbstractIterator.java#L86] and won't fix the warning. What do you think? > trunk eclipse-warnings > -- > > Key: CASSANDRA-14061 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14061 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > > {noformat} > eclipse-warnings: > [mkdir] Created dir: /home/ubuntu/cassandra/build/ecj > [echo] Running Eclipse Code Analysis. Output logged to > /home/ubuntu/cassandra/build/ecj/eclipse_compiler_checks.txt > [java] -- > [java] 1. ERROR in > /home/ubuntu/cassandra/src/java/org/apache/cassandra/io/sstable/SSTableIdentityIterator.java > (at line 59) > [java] return new SSTableIdentityIterator(sstable, key, > partitionLevelDeletion, file.getPath(), iterator); > [java] > ^^^ > [java] Potential resource leak: 'iterator' may not be closed at this > location > [java] -- > [java] 2. ERROR in > /home/ubuntu/cassandra/src/java/org/apache/cassandra/io/sstable/SSTableIdentityIterator.java > (at line 79) > [java] return new SSTableIdentityIterator(sstable, key, > partitionLevelDeletion, dfile.getPath(), iterator); > [java] > > [java] Potential resource leak: 'iterator' may not be closed at this > location > [java] -- > [java] 2 problems (2 errors) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14104) Index target doesn't correctly recognise non-UTF column names after COMPACT STORAGE drop
[ https://issues.apache.org/jira/browse/CASSANDRA-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-14104: Reviewer: ZhaoYang Description: Creating a compact storage table with dynamic composite type, then running {{ALTER TABLE ... DROP COMPACT STORAGE}} and then restarting the node will crash the Cassandra node, since the Index Target is fetched using hashmap / strict equality. We need to fall back to linear search when the index target can't be found (which should not be happening often). (was: Creating a compact storage table with dynamic composite type, then running {{ALTER TABLE ... DROP COMPACT STORAGE}} and then restarting the node will crash the Cassandra node.) > Index target doesn't correctly recognise non-UTF column names after COMPACT > STORAGE drop > > > Key: CASSANDRA-14104 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14104 > Project: Cassandra > Issue Type: Bug >Reporter: Alex Petrov >Assignee: Alex Petrov > > Creating a compact storage table with dynamic composite type, then running > {{ALTER TABLE ... DROP COMPACT STORAGE}} and then restarting the node will > crash the Cassandra node, since the Index Target is fetched using hashmap / > strict equality. We need to fall back to linear search when the index target > can't be found (which should not be happening often).
[jira] [Created] (CASSANDRA-14104) Index target doesn't correctly recognise non-UTF column names after COMPACT STORAGE drop
Alex Petrov created CASSANDRA-14104: --- Summary: Index target doesn't correctly recognise non-UTF column names after COMPACT STORAGE drop Key: CASSANDRA-14104 URL: https://issues.apache.org/jira/browse/CASSANDRA-14104 Project: Cassandra Issue Type: Bug Reporter: Alex Petrov Assignee: Alex Petrov Creating a compact storage table with dynamic composite type, then running {{ALTER TABLE ... DROP COMPACT STORAGE}} and then restarting the node will crash the Cassandra node.
[jira] [Resolved] (CASSANDRA-14067) Change default for SSL algorithm
[ https://issues.apache.org/jira/browse/CASSANDRA-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Podkowinski resolved CASSANDRA-14067. Resolution: Duplicate > Change default for SSL algorithm > > > Key: CASSANDRA-14067 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14067 > Project: Cassandra > Issue Type: Bug >Reporter: Stefan Podkowinski >Assignee: Stefan Podkowinski > Labels: security > Fix For: 4.x > > > The hardcoded default for the SSL validation algorithm should be changed from > SunX509 to PKIX, which has been [default since Java > 7|https://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#SupportClasses]. > Starting with Java 9, the use of SunX509 is [actively > discouraged|https://bugs.openjdk.java.net/browse/JDK-8169745], as it > implements fewer security constraints. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
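For reference, the JDK's own default can be inspected with plain {{javax.net.ssl}} APIs. Only {{TrustManagerFactory}} is shown here, and the empty in-memory keystore is just scaffolding to initialise the factory, not a realistic trust store:

```java
import java.security.KeyStore;
import javax.net.ssl.TrustManagerFactory;

// Quick check with standard JDK APIs: the JDK's default trust manager
// algorithm has been PKIX since Java 7, while Cassandra hardcoded SunX509.
public class TrustAlgorithmCheck {
    public static void main(String[] args) throws Exception {
        System.out.println("JDK default algorithm: " + TrustManagerFactory.getDefaultAlgorithm()); // PKIX

        // Requesting PKIX explicitly, as the ticket proposes for the hardcoded default:
        TrustManagerFactory tmf = TrustManagerFactory.getInstance("PKIX");
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null); // empty in-memory keystore, just enough to initialise the factory
        tmf.init(ks);
        System.out.println("Initialised trust managers: " + tmf.getTrustManagers().length);
    }
}
```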
[jira] [Updated] (CASSANDRA-14067) Change default for SSL algorithm
[ https://issues.apache.org/jira/browse/CASSANDRA-14067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Podkowinski updated CASSANDRA-14067: --- Status: Open (was: Patch Available) > Change default for SSL algorithm > > > Key: CASSANDRA-14067 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14067 > Project: Cassandra > Issue Type: Bug >Reporter: Stefan Podkowinski >Assignee: Stefan Podkowinski > Labels: security > Fix For: 4.x > > > The hardcoded default for the SSL validation algorithm should be changed from > SunX509 to PKIX, which has been [default since Java > 7|https://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html#SupportClasses]. > Starting with Java 9, the use of SunX509 is [actively > discouraged|https://bugs.openjdk.java.net/browse/JDK-8169745], as it > implements fewer security constraints. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14061) trunk eclipse-warnings
[ https://issues.apache.org/jira/browse/CASSANDRA-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286050#comment-16286050 ] Stefan Podkowinski commented on CASSANDRA-14061: [~jay.zhuang], do you plan to change your patch or keep it as is at this point? > trunk eclipse-warnings > -- > > Key: CASSANDRA-14061 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14061 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > > {noformat} > eclipse-warnings: > [mkdir] Created dir: /home/ubuntu/cassandra/build/ecj > [echo] Running Eclipse Code Analysis. Output logged to > /home/ubuntu/cassandra/build/ecj/eclipse_compiler_checks.txt > [java] -- > [java] 1. ERROR in > /home/ubuntu/cassandra/src/java/org/apache/cassandra/io/sstable/SSTableIdentityIterator.java > (at line 59) > [java] return new SSTableIdentityIterator(sstable, key, > partitionLevelDeletion, file.getPath(), iterator); > [java] > ^^^ > [java] Potential resource leak: 'iterator' may not be closed at this > location > [java] -- > [java] 2. ERROR in > /home/ubuntu/cassandra/src/java/org/apache/cassandra/io/sstable/SSTableIdentityIterator.java > (at line 79) > [java] return new SSTableIdentityIterator(sstable, key, > partitionLevelDeletion, dfile.getPath(), iterator); > [java] > > [java] Potential resource leak: 'iterator' may not be closed at this > location > [java] -- > [java] 2 problems (2 errors) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory
[ https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286047#comment-16286047 ] Michael Shuler commented on CASSANDRA-13929: I set the fix version to {{3.11.x}} to indicate this is intended for the 3.11 series for you. > BTree$Builder / io.netty.util.Recycler$Stack leaking memory > --- > > Key: CASSANDRA-13929 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13929 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Thomas Steinmaurer > Fix For: 3.11.x > > Attachments: cassandra_3.11.0_min_memory_utilization.jpg, > cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, > cassandra_3.11.1_mat_dominator_classes.png, > cassandra_3.11.1_mat_dominator_classes_FIXED.png, > cassandra_3.11.1_snapshot_heaputilization.png > > > Different to CASSANDRA-13754, there seems to be another memory leak in > 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack. > * heap utilization increase after upgrading to 3.11.0 => > cassandra_3.11.0_min_memory_utilization.jpg > * No difference after upgrading to 3.11.1 (snapshot build) => > cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing > CASSANDRA-13754, more visible now > * MAT shows io.netty.util.Recycler$Stack as top contributing class => > cassandra_3.11.1_mat_dominator_classes.png > * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart > after ~ 72 hours > Verified the following fix, namely explicitly unreferencing the > _recycleHandle_ member (making it non-final). 
In > _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_ > {code} > public void recycle() > { > if (recycleHandle != null) > { > this.cleanup(); > builderRecycler.recycle(this, recycleHandle); > recycleHandle = null; // ADDED > } > } > {code} > Patched a single node in our loadtest cluster with this change and after ~ 10 > hours uptime, no sign of the previously offending class in MAT anymore => > cassandra_3.11.1_mat_dominator_classes_FIXED.png > Can't say if this has any other side effects, etc., but I doubt it.
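To see what the proposed {{recycleHandle = null}} line buys, here is an illustrative stand-alone pool, not Netty's actual Recycler API: clearing the handle makes {{recycle()}} idempotent, so a stale caller can no longer push the same builder onto the recycler's stack twice and grow it without bound.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative pool (the Deque stands in for io.netty.util.Recycler$Stack):
// with the handle nulled on recycle, a second recycle() call becomes a no-op
// instead of pushing the same object onto the stack again.
public class RecycleDemo {
    public static final Deque<RecycleDemo> stack = new ArrayDeque<>();

    private Object recycleHandle = new Object(); // non-final, as in the proposed patch

    public void recycle() {
        if (recycleHandle != null) {
            stack.push(this);
            recycleHandle = null; // the ADDED line: drop the reference
        }
    }

    public static void main(String[] args) {
        RecycleDemo builder = new RecycleDemo();
        builder.recycle();
        builder.recycle(); // second call is now a no-op
        System.out.println("stack size: " + stack.size()); // 1, not 2
    }
}
```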
[jira] [Updated] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory
[ https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-13929: --- Fix Version/s: 3.11.x > BTree$Builder / io.netty.util.Recycler$Stack leaking memory > --- > > Key: CASSANDRA-13929 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13929 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Thomas Steinmaurer > Fix For: 3.11.x > > Attachments: cassandra_3.11.0_min_memory_utilization.jpg, > cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, > cassandra_3.11.1_mat_dominator_classes.png, > cassandra_3.11.1_mat_dominator_classes_FIXED.png, > cassandra_3.11.1_snapshot_heaputilization.png > > > Different to CASSANDRA-13754, there seems to be another memory leak in > 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack. > * heap utilization increase after upgrading to 3.11.0 => > cassandra_3.11.0_min_memory_utilization.jpg > * No difference after upgrading to 3.11.1 (snapshot build) => > cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing > CASSANDRA-13754, more visible now > * MAT shows io.netty.util.Recycler$Stack as top contributing class => > cassandra_3.11.1_mat_dominator_classes.png > * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart > after ~ 72 hours > Verified the following fix, namely explicitly unreferencing the > _recycleHandle_ member (making it non-final). In > _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_ > {code} > public void recycle() > { > if (recycleHandle != null) > { > this.cleanup(); > builderRecycler.recycle(this, recycleHandle); > recycleHandle = null; // ADDED > } > } > {code} > Patched a single node in our loadtest cluster with this change and after ~ 10 > hours uptime, no sign of the previously offending class in MAT anymore => > cassandra_3.11.1_mat_dominator_classes_FIXED.png > Can' say if this has any other side effects etc., but I doubt. 
[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory
[ https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286044#comment-16286044 ] Michael Shuler commented on CASSANDRA-13929: You may have some better success getting eyes on this JIRA by attaching a proper patch for the 3.11 branch (and a separate trunk patch, if merge up isn't clean). You can do this with a simple text file attachment of the diff, or linking to a github branch. Bonus points for adding a test that reproduces the problem/fix, as well as getting test run through circleci. This JIRA can be assigned to yourself as the author and you can set the status to "Patch Available" when there's an actual patch/branch to review. Then you can possibly seek out a reviewer on the dev@ mailing list, if it's urgent. It looks like there are currently 115 Patch Available tickets, so it may still take some time, but getting this set up for someone else to review would be a great step in getting the process rolling further than just a comment. https://issues.apache.org/jira/issues/?jql=project%20%3D%20CASSANDRA%20AND%20status%20%3D%20%22Patch%20Available%22 > BTree$Builder / io.netty.util.Recycler$Stack leaking memory > --- > > Key: CASSANDRA-13929 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13929 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Thomas Steinmaurer > Attachments: cassandra_3.11.0_min_memory_utilization.jpg, > cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, > cassandra_3.11.1_mat_dominator_classes.png, > cassandra_3.11.1_mat_dominator_classes_FIXED.png, > cassandra_3.11.1_snapshot_heaputilization.png > > > Different to CASSANDRA-13754, there seems to be another memory leak in > 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack. 
> * heap utilization increase after upgrading to 3.11.0 => > cassandra_3.11.0_min_memory_utilization.jpg > * No difference after upgrading to 3.11.1 (snapshot build) => > cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing > CASSANDRA-13754, more visible now > * MAT shows io.netty.util.Recycler$Stack as top contributing class => > cassandra_3.11.1_mat_dominator_classes.png > * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart > after ~ 72 hours > Verified the following fix, namely explicitly unreferencing the > _recycleHandle_ member (making it non-final). In > _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_ > {code} > public void recycle() > { > if (recycleHandle != null) > { > this.cleanup(); > builderRecycler.recycle(this, recycleHandle); > recycleHandle = null; // ADDED > } > } > {code} > Patched a single node in our loadtest cluster with this change and after ~ 10 > hours uptime, no sign of the previously offending class in MAT anymore => > cassandra_3.11.1_mat_dominator_classes_FIXED.png > Can' say if this has any other side effects etc., but I doubt. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14090) stress.generate.Distribution.average broken on trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-14090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285939#comment-16285939 ] ZhaoYang commented on CASSANDRA-14090: -- [~bdeggleston] bq. tools/bin/cassandra-stress user profile=tools/cqlstress-example.yaml ops(insert=1) should we also change the following line to {{50}} ? {code} return (long) (sum / 51); {code} > stress.generate.Distribution.average broken on trunk > > > Key: CASSANDRA-14090 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14090 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 4.0 > > > Looks like the lgtm.com fixes slightly changed the behavior of > Distribution.average, which prevents stress from starting up -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
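The question above is an inclusive-bounds off-by-one: the divisor has to match the number of samples actually taken. Illustrative only, since the real loop bounds of stress's {{Distribution.average}} are not quoted in this thread: sampling 0..50 inclusive takes 51 samples, while 1..50 takes 50.

```java
// Illustrative inclusive-range averaging: counting the samples explicitly
// makes the divisor impossible to get wrong, unlike a hardcoded 50 or 51.
public class InclusiveAverage {
    public static double average(int from, int to) { // inclusive on both ends
        long sum = 0;
        int samples = 0;
        for (int x = from; x <= to; x++) {
            sum += x;
            samples++;
        }
        return (double) sum / samples;
    }

    public static void main(String[] args) {
        System.out.println(average(0, 50)); // 25.0 (51 samples)
        System.out.println(average(1, 50)); // 25.5 (50 samples)
    }
}
```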
[jira] [Commented] (CASSANDRA-13929) BTree$Builder / io.netty.util.Recycler$Stack leaking memory
[ https://issues.apache.org/jira/browse/CASSANDRA-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285908#comment-16285908 ] Thomas Steinmaurer commented on CASSANDRA-13929: Yet another ping after 2 months of silence and the issue still being unassigned. Is this something which will be handled in the 3.11 series? Thanks! > BTree$Builder / io.netty.util.Recycler$Stack leaking memory > --- > > Key: CASSANDRA-13929 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13929 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Thomas Steinmaurer > Attachments: cassandra_3.11.0_min_memory_utilization.jpg, > cassandra_3.11.1_NORECYCLE_memory_utilization.jpg, > cassandra_3.11.1_mat_dominator_classes.png, > cassandra_3.11.1_mat_dominator_classes_FIXED.png, > cassandra_3.11.1_snapshot_heaputilization.png > > > Different to CASSANDRA-13754, there seems to be another memory leak in > 3.11.0+ in BTree$Builder / io.netty.util.Recycler$Stack. > * heap utilization increase after upgrading to 3.11.0 => > cassandra_3.11.0_min_memory_utilization.jpg > * No difference after upgrading to 3.11.1 (snapshot build) => > cassandra_3.11.1_snapshot_heaputilization.png; thus most likely after fixing > CASSANDRA-13754, more visible now > * MAT shows io.netty.util.Recycler$Stack as top contributing class => > cassandra_3.11.1_mat_dominator_classes.png > * With -Xmx8G (CMS) and our load pattern, we have to do a rolling restart > after ~ 72 hours > Verified the following fix, namely explicitly unreferencing the > _recycleHandle_ member (making it non-final). 
In > _org.apache.cassandra.utils.btree.BTree.Builder.recycle()_ > {code} > public void recycle() > { > if (recycleHandle != null) > { > this.cleanup(); > builderRecycler.recycle(this, recycleHandle); > recycleHandle = null; // ADDED > } > } > {code} > Patched a single node in our loadtest cluster with this change and after ~ 10 > hours uptime, no sign of the previously offending class in MAT anymore => > cassandra_3.11.1_mat_dominator_classes_FIXED.png > Can't say if this has any other side effects, etc., but I doubt it.
[jira] [Updated] (CASSANDRA-13971) Automatic certificate management using Vault
[ https://issues.apache.org/jira/browse/CASSANDRA-13971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Podkowinski updated CASSANDRA-13971: --- Status: Patch Available (was: In Progress) Test runs: * [circleci|https://circleci.com/gh/spodkowinski/cassandra/tree/WIP-13971] * [dtest|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/458/] {quote} My only minor concern here is Vault is MPL, and while I think that is fine for the ASF as MPL is category-B, let's research it more. Admittedly I just did the basic research to see if it's category-X, didn't follow through all the way. {quote} I doubt that this will be an issue since the binary is downloaded and forked/execed from the script and not included directly as part of the dtest project. But I can open a LEGAL ticket if you think this needs further clarification. > Automatic certificate management using Vault > > > Key: CASSANDRA-13971 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13971 > Project: Cassandra > Issue Type: Improvement > Components: Streaming and Messaging >Reporter: Stefan Podkowinski >Assignee: Stefan Podkowinski > Fix For: 4.x > > > We've been adding security features over the last few years to enable users to > secure their clusters, if they are willing to use them and do so correctly. > Some features are powerful and easy to work with, such as role-based > authorization. Other features that require managing a local keystore are > rather painful to deal with. Think about setting up SSL.. > To be fair, keystore-related issues and certificate handling weren't > invented by us. We're just following Java standards there. But that doesn't > mean that we absolutely have to, if there are better options. I'd like to > give it a shot and find out if we can automate certificate/key handling > (PKI) by using external APIs. In this case, the implementation will be based > on [Vault|https://vaultproject.io]. 
But certificate management services > offered by cloud providers may also be able to handle the use-case, and I > intend to create a generic, pluggable API for that.
[jira] [Updated] (CASSANDRA-14084) Disks can be imbalanced during replace of same address when using JBOD
[ https://issues.apache.org/jira/browse/CASSANDRA-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-14084: Status: Ready to Commit (was: Patch Available) > Disks can be imbalanced during replace of same address when using JBOD > -- > > Key: CASSANDRA-14084 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14084 > Project: Cassandra > Issue Type: Bug >Reporter: Paulo Motta >Assignee: Paulo Motta > Attachments: dtest14084.png > > > While investigating CASSANDRA-14083, I noticed that [we use the pending > ranges to calculate the disk > boundaries|https://github.com/apache/cassandra/blob/41904684bb5509595d11f008d0851c7ce625e020/src/java/org/apache/cassandra/db/DiskBoundaryManager.java#L91] > when the node is bootstrapping. > The problem is that when the node is replacing a node with the same address, > it [sets itself as normal > locally|https://github.com/apache/cassandra/blob/41904684bb5509595d11f008d0851c7ce625e020/src/java/org/apache/cassandra/service/StorageService.java#L1449] > (for other unrelated reasons), so the local ranges will be null and > consequently the disk boundaries will be null. This will cause the sstables > to be randomly spread across disks potentially causing imbalance.
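The failure mode described above can be sketched in miniature (all names here are hypothetical and much simpler than the real `DiskBoundaryManager`): a boundary lookup that only trusts one range source can return null when that source is empty, and a null-safe fallback avoids the random sstable placement.

```java
import java.util.List;

// Hypothetical simplification of the boundary computation, for illustration
// only (not the real Cassandra API). Bug shape: a node replacing its own
// address reports status "normal", owns no pending ranges, and the
// pending-ranges-only path yields null boundaries, scattering sstables
// across JBOD disks at random.
final class DiskBoundarySketch {
    static List<Long> boundariesFor(boolean useBootstrapPath,
                                    List<Long> pendingRangeBoundaries,
                                    List<Long> localRangeBoundaries) {
        // Preferred source depends on the node's state...
        List<Long> chosen = useBootstrapPath ? pendingRangeBoundaries
                                             : localRangeBoundaries;
        // ...but never hand back null: fall back to whatever range
        // information is actually available.
        return (chosen != null) ? chosen : localRangeBoundaries;
    }
}
```

The point of the sketch is only the null-guard shape: any path that computes boundaries must degrade to a usable range source rather than propagating null into disk assignment.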
[jira] [Commented] (CASSANDRA-14084) Disks can be imbalanced during replace of same address when using JBOD
[ https://issues.apache.org/jira/browse/CASSANDRA-14084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285628#comment-16285628 ] Marcus Eriksson commented on CASSANDRA-14084: - +1
[jira] [Updated] (CASSANDRA-13873) Ref bug in Scrub
[ https://issues.apache.org/jira/browse/CASSANDRA-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-13873: Resolution: Fixed Fix Version/s: (was: 3.11.x) (was: 4.x) (was: 3.0.x) (was: 2.2.x) 4.0 3.11.2 3.0.16 2.2.12 Reproduced In: 3.11.0, 3.10, 4.0 (was: 3.10, 3.11.0, 4.0) Status: Resolved (was: Ready to Commit) and committed as {{3cd2c3c4ea4286562b2cb8443d6173ee251e6212}}, thanks > Ref bug in Scrub > > > Key: CASSANDRA-13873 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13873 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: T Jake Luciani >Assignee: Marcus Eriksson > Fix For: 2.2.12, 3.0.16, 3.11.2, 4.0 > > > I'm hitting a Ref bug when many scrubs run against a node. This doesn't > happen on 3.0.X. I'm not sure if/if not this happens with compactions too > but I suspect it does. > I'm not seeing any Ref leaks or double frees. > To Reproduce: > {quote} > ./tools/bin/cassandra-stress write n=10m -rate threads=100 > ./bin/nodetool scrub > #Ctrl-C > ./bin/nodetool scrub > #Ctrl-C > ./bin/nodetool scrub > #Ctrl-C > ./bin/nodetool scrub > {quote} > Eventually in the logs you get: > WARN [RMI TCP Connection(4)-127.0.0.1] 2017-09-14 15:51:26,722 > NoSpamLogger.java:97 - Spinning trying to capture readers > [BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db'), > > BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-32-big-Data.db'), > > BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-31-big-Data.db'), > > BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-29-big-Data.db'), > > 
BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-27-big-Data.db'), > > BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-26-big-Data.db'), > > BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-20-big-Data.db')], > *released: > [BigTableReader(path='/home/jake/workspace/cassandra2/data/data/keyspace1/standard1-2eb5c780998311e79e09311efffdcd17/mc-5-big-Data.db')],* > > This released table has a selfRef of 0 but is in the Tracker
[02/10] cassandra git commit: Grab refs during scrub, index summary redistribution and cleanup
Grab refs during scrub, index summary redistribution and cleanup Patch by marcuse; reviewed by Joel Knighton for CASSANDRA-13873 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3cd2c3c4 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3cd2c3c4 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3cd2c3c4 Branch: refs/heads/cassandra-3.0 Commit: 3cd2c3c4ea4286562b2cb8443d6173ee251e6212 Parents: 797de4a Author: Marcus ErikssonAuthored: Mon Oct 23 09:43:44 2017 +0200 Committer: Marcus Eriksson Committed: Mon Dec 11 08:53:44 2017 +0100 -- CHANGES.txt | 2 +- .../cassandra/db/compaction/CompactionManager.java | 3 ++- .../org/apache/cassandra/db/compaction/Scrubber.java| 4 +++- .../io/sstable/IndexSummaryRedistribution.java | 12 4 files changed, 14 insertions(+), 7 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cd2c3c4/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 752cbdc..c1e81fd 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,5 +1,5 @@ 2.2.12 - * + * Grab refs during scrub/index redistribution/cleanup (CASSANDRA-13873) 2.2.11 * Safely handle empty buffers when outputting to JSON (CASSANDRA-13868) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cd2c3c4/src/java/org/apache/cassandra/db/compaction/CompactionManager.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java index cd50646..2e69b6f 100644 --- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java +++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java @@ -856,7 +856,8 @@ public class CompactionManager implements CompactionManagerMBean metrics.beginCompaction(ci); List finished; try (SSTableRewriter writer = new SSTableRewriter(cfs, txn, sstable.maxDataAge, false); - CompactionController controller = new CompactionController(cfs, 
txn.originals(), getDefaultGcBefore(cfs))) + CompactionController controller = new CompactionController(cfs, txn.originals(), getDefaultGcBefore(cfs)); + Refs refs = Refs.ref(Collections.singleton(sstable))) { writer.switchWriter(createWriter(cfs, compactionFileLocation, expectedBloomFilterSize, sstable.getSSTableMetadata().repairedAt, sstable)); http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cd2c3c4/src/java/org/apache/cassandra/db/compaction/Scrubber.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/Scrubber.java b/src/java/org/apache/cassandra/db/compaction/Scrubber.java index aaed234..b6b20fb 100644 --- a/src/java/org/apache/cassandra/db/compaction/Scrubber.java +++ b/src/java/org/apache/cassandra/db/compaction/Scrubber.java @@ -40,6 +40,7 @@ import org.apache.cassandra.utils.ByteBufferUtil; import org.apache.cassandra.utils.JVMStabilityInspector; import org.apache.cassandra.utils.OutputHandler; import org.apache.cassandra.utils.UUIDGen; +import org.apache.cassandra.utils.concurrent.Refs; public class Scrubber implements Closeable { @@ -142,7 +143,8 @@ public class Scrubber implements Closeable public void scrub() { outputHandler.output(String.format("Scrubbing %s (%s bytes)", sstable, dataFile.length())); -try (SSTableRewriter writer = new SSTableRewriter(cfs, transaction, sstable.maxDataAge, transaction.isOffline())) +try (SSTableRewriter writer = new SSTableRewriter(cfs, transaction, sstable.maxDataAge, transaction.isOffline()); + Refs refs = Refs.ref(Collections.singleton(sstable))) { nextIndexKey = indexAvailable() ? 
ByteBufferUtil.readWithShortLength(indexFile) : null; if (indexAvailable()) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3cd2c3c4/src/java/org/apache/cassandra/io/sstable/IndexSummaryRedistribution.java -- diff --git a/src/java/org/apache/cassandra/io/sstable/IndexSummaryRedistribution.java b/src/java/org/apache/cassandra/io/sstable/IndexSummaryRedistribution.java index aad479b..12586e5 100644 --- a/src/java/org/apache/cassandra/io/sstable/IndexSummaryRedistribution.java +++ b/src/java/org/apache/cassandra/io/sstable/IndexSummaryRedistribution.java @@ -41,6 +41,7 @@ import org.apache.cassandra.db.compaction.OperationType; import
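The fix in the diff above hinges on one pattern: grabbing a reference on the sstable for the lifetime of the operation via try-with-resources, so a concurrent release cannot free the reader mid-scrub. A minimal, self-contained sketch of that pattern (a toy `Ref`, not Cassandra's actual `org.apache.cassandra.utils.concurrent.Refs` class):

```java
// Toy reference counter, for illustration only. While the try-with-resources
// block holds a ref, the count stays >= 1 even if another actor releases its
// own reference concurrently; the resource is only "freed" once every holder
// has released.
final class Ref implements AutoCloseable {
    private int count = 1; // the creating owner holds the initial reference

    synchronized Ref ref() {        // take an additional reference
        count++;
        return this;
    }

    synchronized void release() {   // drop one reference
        count--;
    }

    synchronized boolean isFreed() { // freed once nobody holds a reference
        return count <= 0;
    }

    @Override public void close() { // lets try-with-resources release for us
        release();
    }
}
```

This mirrors why `Refs refs = Refs.ref(Collections.singleton(sstable))` sits inside the try header next to the `SSTableRewriter`: the extra ref is released automatically on every exit path, and until then the sstable cannot reach refcount zero underneath the scrub.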
[01/10] cassandra git commit: Grab refs during scrub, index summary redistribution and cleanup
Repository: cassandra Updated Branches: refs/heads/cassandra-2.2 797de4ae3 -> 3cd2c3c4e refs/heads/cassandra-3.0 a9225f90e -> d7329a639 refs/heads/cassandra-3.11 16bcbb925 -> 817f3c282 refs/heads/trunk 78150142e -> 1cb050922 Grab refs during scrub, index summary redistribution and cleanup Patch by marcuse; reviewed by Joel Knighton for CASSANDRA-13873 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3cd2c3c4 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3cd2c3c4 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3cd2c3c4 Branch: refs/heads/cassandra-2.2 Commit: 3cd2c3c4ea4286562b2cb8443d6173ee251e6212 Parents: 797de4a Author: Marcus Eriksson Authored: Mon Oct 23 09:43:44 2017 +0200 Committer: Marcus Eriksson Committed: Mon Dec 11 08:53:44 2017 +0100