[jira] [Updated] (CASSANDRA-15208) Listing the same data directory multiple times can result in an java.lang.AssertionError: null on startup
[ https://issues.apache.org/jira/browse/CASSANDRA-15208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Stevenson updated CASSANDRA-15208:
-

Description:
Listing the same data directory multiple times in the yaml can result in a java.lang.AssertionError: null on startup. This error only occurs if Cassandra was stopped partway through an sstable operation (e.g., a compaction) and is then restarted.

Error:
{noformat}
Exception (java.lang.AssertionError) encountered during startup: null
java.lang.AssertionError
    at org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplica(LogReplicaSet.java:63)
    at java.util.ArrayList.forEach(ArrayList.java:1257)
    at org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplicas(LogReplicaSet.java:57)
    at org.apache.cassandra.db.lifecycle.LogFile.<init>(LogFile.java:147)
    at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:95)
    at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:476)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.HashMap$EntrySpliterator.tryAdvance(HashMap.java:1717)
    at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)
    at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
    at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230)
    at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.allMatch(ReferencePipeline.java:454)
    at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:471)
    at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:438)
    at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:430)
    at org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:549)
    at org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:658)
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:275)
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620)
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732)
ERROR o.a.c.service.CassandraDaemon Exception encountered during startup
java.lang.AssertionError: null
    at org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplica(LogReplicaSet.java:63) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_171]
    at org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplicas(LogReplicaSet.java:57) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.lifecycle.LogFile.<init>(LogFile.java:147) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:95) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:476) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[na:1.8.0_171]
    at java.util.HashMap$EntrySpliterator.tryAdvance(HashMap.java:1717) ~[na:1.8.0_171]
    at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126) ~[na:1.8.0_171]
    at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498) ~[na:1.8.0_171]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485) ~[na:1.8.0_171]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[na:1.8.0_171]
    at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230) ~[na:1.8.0_171]
    at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196) ~[na:1.8.0_171]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[na:1.8.0_171]
    at java.util.stream.ReferencePipeline.allMatch(ReferencePipeline.java:454) ~[na:1.8.0_171]
    at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:471) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:438) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:430) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:549) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:658) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:275) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732) ~[apache-cassandra-3.11.4.jar:3.11.4]
{noformat}
[jira] [Created] (CASSANDRA-15208) Listing the same data directory multiple times can result in an java.lang.AssertionError: null on startup
Damien Stevenson created CASSANDRA-15208:
-

Summary: Listing the same data directory multiple times can result in an java.lang.AssertionError: null on startup
Key: CASSANDRA-15208
URL: https://issues.apache.org/jira/browse/CASSANDRA-15208
Project: Cassandra
Issue Type: Bug
Components: Local/Config
Reporter: Damien Stevenson

Listing the same data directory multiple times in the yaml can result in a java.lang.AssertionError: null on startup. This error only occurs if Cassandra was stopped partway through an sstable operation (e.g., a compaction) and is then restarted.

Error:
{noformat}
Exception (java.lang.AssertionError) encountered during startup: null
java.lang.AssertionError
    at org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplica(LogReplicaSet.java:63)
    at java.util.ArrayList.forEach(ArrayList.java:1257)
    at org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplicas(LogReplicaSet.java:57)
    at org.apache.cassandra.db.lifecycle.LogFile.<init>(LogFile.java:147)
    at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:95)
    at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:476)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.HashMap$EntrySpliterator.tryAdvance(HashMap.java:1717)
    at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126)
    at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
    at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230)
    at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196)
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
    at java.util.stream.ReferencePipeline.allMatch(ReferencePipeline.java:454)
    at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:471)
    at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:438)
    at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:430)
    at org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:549)
    at org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:658)
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:275)
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620)
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732)
ERROR o.a.c.service.CassandraDaemon Exception encountered during startup
java.lang.AssertionError: null
    at org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplica(LogReplicaSet.java:63) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at java.util.ArrayList.forEach(ArrayList.java:1257) ~[na:1.8.0_171]
    at org.apache.cassandra.db.lifecycle.LogReplicaSet.addReplicas(LogReplicaSet.java:57) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.lifecycle.LogFile.<init>(LogFile.java:147) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.lifecycle.LogFile.make(LogFile.java:95) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:476) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) ~[na:1.8.0_171]
    at java.util.HashMap$EntrySpliterator.tryAdvance(HashMap.java:1717) ~[na:1.8.0_171]
    at java.util.stream.ReferencePipeline.forEachWithCancel(ReferencePipeline.java:126) ~[na:1.8.0_171]
    at java.util.stream.AbstractPipeline.copyIntoWithCancel(AbstractPipeline.java:498) ~[na:1.8.0_171]
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:485) ~[na:1.8.0_171]
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[na:1.8.0_171]
    at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:230) ~[na:1.8.0_171]
    at java.util.stream.MatchOps$MatchOp.evaluateSequential(MatchOps.java:196) ~[na:1.8.0_171]
    at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) ~[na:1.8.0_171]
    at java.util.stream.ReferencePipeline.allMatch(ReferencePipeline.java:454) ~[na:1.8.0_171]
    at org.apache.cassandra.db.lifecycle.LogTransaction$LogFilesByName.removeUnfinishedLeftovers(LogTransaction.java:471) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:438) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.lifecycle.LogTransaction.removeUnfinishedLeftovers(LogTransaction.java:430) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.lifecycle.LifecycleTransaction.removeUnfinishedLeftovers(LifecycleTransaction.java:549) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.db.ColumnFamilyStore.scrubDataDirectories(ColumnFamilyStore.java:658) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:275) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:620) ~[apache-cassandra-3.11.4.jar:3.11.4]
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:732) ~[apache-cassandra-3.11.4.jar:3.11.4]
{noformat}
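The assertion fires because every configured data directory contributes a LogReplica for the same transaction log, so a directory listed twice is visited twice. One defensive fix is to deduplicate the configured paths before any sstable scanning. A minimal sketch of that idea, with illustrative names (this is not Cassandra's actual startup code):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class DataDirs {
    // Deduplicate configured data directories while preserving the
    // declaration order from the yaml, so each path is scanned once.
    public static List<String> dedupe(List<String> configured) {
        return new ArrayList<>(new LinkedHashSet<>(configured));
    }

    public static void main(String[] args) {
        List<String> dirs = List.of("/var/lib/cassandra/data",
                                    "/var/lib/cassandra/data",
                                    "/mnt/disk2/data");
        // prints [/var/lib/cassandra/data, /mnt/disk2/data]
        System.out.println(dedupe(dirs));
    }
}
```

A LinkedHashSet is used rather than a plain HashSet so the first-listed directory keeps its priority ordering.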
[jira] [Created] (CASSANDRA-14721) sstabledump displays incorrect value for "position" key
Damien Stevenson created CASSANDRA-14721:
-

Summary: sstabledump displays incorrect value for "position" key
Key: CASSANDRA-14721
URL: https://issues.apache.org/jira/browse/CASSANDRA-14721
Project: Cassandra
Issue Type: Bug
Components: Tools
Reporter: Damien Stevenson

When partitions with multiple rows are displayed using sstabledump, the "position" value of the first row of each partition is incorrect. For example:
{code:java}
sstabledump mc-1-big-Data.db
[
  {
    "partition" : {
      "key" : [ "1", "24" ],
      "position" : 0
    },
    "rows" : [
      {
        "type" : "row",
        "position" : 66,
        "clustering" : [ "2013-12-10 00:00:00.000Z" ],
        "liveness_info" : { "tstamp" : "2018-09-12T05:01:09.290086Z" },
        "cells" : [
          { "name" : "centigrade", "value" : 8 },
          { "name" : "chanceofrain", "value" : 0.1 },
          { "name" : "feelslike", "value" : 8 },
          { "name" : "humidity", "value" : 0.76 },
          { "name" : "wind", "value" : 10.0 }
        ]
      },
      {
        "type" : "row",
        "position" : 66,
        "clustering" : [ "2013-12-11 00:00:00.000Z" ],
        "liveness_info" : { "tstamp" : "2018-09-12T05:01:09.295369Z" },
        "cells" : [
          { "name" : "centigrade", "value" : 4 },
          { "name" : "chanceofrain", "value" : 0.3 },
          { "name" : "feelslike", "value" : 4 },
          { "name" : "humidity", "value" : 0.9 },
          { "name" : "wind", "value" : 12.0 }
        ]
      },
      {
        "type" : "row",
        "position" : 105,
        "clustering" : [ "2013-12-12 00:00:00.000Z" ],
        "liveness_info" : { "tstamp" : "2018-09-12T05:01:09.300841Z" },
        "cells" : [
          { "name" : "centigrade", "value" : 3 },
          { "name" : "chanceofrain", "value" : 0.2 },
          { "name" : "feelslike", "value" : 3 },
          { "name" : "humidity", "value" : 0.68 },
          { "name" : "wind", "value" : 6.0 }
        ]
      }
    ]
  }
]
{code}
The expected output is:
{code:java}
[
  {
    "partition" : {
      "key" : [ "1", "24" ],
      "position" : 0
    },
    "rows" : [
      {
        "type" : "row",
        "position" : 28,
        "clustering" : [ "2013-12-10 00:00:00.000Z" ],
        "liveness_info" : { "tstamp" : "2018-09-12T05:01:09.290086Z" },
        "cells" : [
          { "name" : "centigrade", "value" : 8 },
          { "name" : "chanceofrain", "value" : 0.1 },
          { "name" : "feelslike", "value" : 8 },
          { "name" : "humidity", "value" : 0.76 },
          { "name" : "wind", "value" : 10.0 }
        ]
      },
      {
        "type" : "row",
        "position" : 66,
        "clustering" : [ "2013-12-11 00:00:00.000Z" ],
        "liveness_info" : { "tstamp" : "2018-09-12T05:01:09.295369Z" },
        "cells" : [
          { "name" : "centigrade", "value" : 4 },
          { "name" : "chanceofrain", "value" : 0.3 },
          { "name" : "feelslike", "value" : 4 },
          { "name" : "humidity", "value" : 0.9 },
          { "name" : "wind", "value" : 12.0 }
        ]
      },
      {
        "type" : "row",
        "position" : 105,
        "clustering" : [ "2013-12-12 00:00:00.000Z" ],
        "liveness_info" : { "tstamp" : "2018-09-12T05:01:09.300841Z" },
        "cells" : [
          { "name" : "centigrade", "value" : 3 },
          { "name" : "chanceofrain", "value" : 0.2 },
          { "name" : "feelslike", "value" : 3 },
          { "name" : "humidity", "value" : 0.68 },
          { "name" : "wind", "value" : 6.0 }
        ]
      }
    ]
  }
]
{code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
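The buggy and expected outputs differ only in the first row's position (66 vs 28), which is the signature of recording the file offset after deserializing a row rather than before it: the first row then reports where it ends, i.e. where the second row starts. A small sketch of the correct offset bookkeeping, using the byte sizes implied by the ticket (a 28-byte partition header and rows of 38, 39, and 40 bytes; illustrative numbers, not sstabledump's actual code):

```java
import java.util.Arrays;

public class RowPositions {
    // rowLengths[i] is the encoded size of row i; headerLength is the
    // partition header size. Returns the start offset of each row.
    public static long[] startOffsets(long headerLength, long[] rowLengths) {
        long[] out = new long[rowLengths.length];
        long pos = headerLength;
        for (int i = 0; i < rowLengths.length; i++) {
            out[i] = pos;          // record position BEFORE consuming the row
            pos += rowLengths[i];  // advance past the row
        }
        return out;
    }

    public static void main(String[] args) {
        // First row starts right after the 28-byte partition header,
        // so its position is 28 (not 66, the start of the second row).
        long[] offsets = startOffsets(28, new long[]{38, 39, 40});
        System.out.println(Arrays.toString(offsets)); // prints [28, 66, 105]
    }
}
```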
[jira] [Commented] (CASSANDRA-11469) dtest failure in upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480197#comment-16480197 ] Damien Stevenson commented on CASSANDRA-11469:
--

I have investigated this and believe it is no longer an issue. After running the dtest many times, I have not come across the exceptions mentioned in the description. A patch similar to what [~beobal] described above was submitted in CASSANDRA-12813 and appears to have resolved the above failures.

However, I am getting the UnknownColumnFamilyException described in CASSANDRA-14049. As that is a separate issue tracked in its own ticket, I think this one can be closed.

> dtest failure in upgrade_internal_auth_test.TestAuthUpgrade.upgrade_to_22_test
> --
>
> Key: CASSANDRA-11469
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11469
> Project: Cassandra
> Issue Type: Bug
> Reporter: Philip Thompson
> Assignee: Sam Tunnicliffe
> Priority: Major
> Labels: dtest
> Fix For: 2.2.x
>
> Upgrading from 2.1 to 2.2, we are seeing failures in upgrade_internal_auth_test.py. I have seen a variety of stack traces, but the most common is:
> {code}
> java.lang.IllegalArgumentException: Unknown keyspace/cf pair (system_auth.credentials)
> at org.apache.cassandra.db.Keyspace.getColumnFamilyStore(Keyspace.java:169) ~[main/:na]
> at org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1383) ~[main/:na]
> at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1277) ~[main/:na]
> at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:221) ~[main/:na]
> at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:176) ~[main/:na]
> at org.apache.cassandra.auth.PasswordAuthenticator.doAuthenticate(PasswordAuthenticator.java:143) ~[main/:na]
> at org.apache.cassandra.auth.PasswordAuthenticator.authenticate(PasswordAuthenticator.java:85) ~[main/:na]
> at org.apache.cassandra.auth.PasswordAuthenticator.access$100(PasswordAuthenticator.java:53) ~[main/:na]
> at org.apache.cassandra.auth.PasswordAuthenticator$PlainTextSaslAuthenticator.getAuthenticatedUser(PasswordAuthenticator.java:181) ~[main/:na]
> at org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:78) ~[main/:na]
> at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) [main/:na]
> at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) [main/:na]
> at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at io.netty.channel.AbstractChannelHandlerContext.access$700(AbstractChannelHandlerContext.java:32) [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at io.netty.channel.AbstractChannelHandlerContext$8.run(AbstractChannelHandlerContext.java:324) [netty-all-4.0.23.Final.jar:4.0.23.Final]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_51]
> at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) [main/:na]
> at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [main/:na]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
> {code}
> This failure is flaky, but not uncommon. I ran it 300 times here, and saw about 100 failures on CI:
> http://cassci.datastax.com/view/Parameterized/job/parameterized_dtest_multiplexer/53/testReport/
> This is a recent failure, and given the type of failure, it seems like potentially a bug and not a test issue. [~beobal], you may be most interested in looking at this?
[jira] [Commented] (CASSANDRA-13464) Failed to create Materialized view with a specific token range
[ https://issues.apache.org/jira/browse/CASSANDRA-13464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16476814#comment-16476814 ] Damien Stevenson commented on CASSANDRA-13464:
--

I have taken the liberty of updating [~krishna.koneru]'s patch and making changes based on the comments from [~jasonstack]. Note: I have removed the _testViewAlterBaseTable_ unit test as it no longer makes sense with the changes from CASSANDRA-11500. More specifically:
{noformat}
7. Disallow drop of columns on base tables with MVs because we cannot tell if the dropped column is keeping a view row alive (will be fixed on CASSANDRA-13826){noformat}
Please have another look and let me know what you think. Thanks.

||Patch||CircleCI||
|[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...damien-instaclustr:13464-mv-token-3.0]|[CircleCI-3.0|https://circleci.com/gh/damien-instaclustr/cassandra/90]|
|[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...damien-instaclustr:13464-mv-token-3.11]|[CircleCI-3.11|https://circleci.com/gh/damien-instaclustr/cassandra/88]|
|[trunk|https://github.com/apache/cassandra/compare/trunk...damien-instaclustr:13464-mv-token-trunk]|[CircleCI-trunk|https://circleci.com/gh/damien-instaclustr/cassandra/85]|

> Failed to create Materialized view with a specific token range
> --
>
> Key: CASSANDRA-13464
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13464
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Natsumi Kojima
> Assignee: Krishna Dattu Koneru
> Priority: Minor
> Labels: materializedviews
>
> Failed to create Materialized view with a specific token range.
> Example :
> {code:java}
> $ ccm create "MaterializedView" -v 3.0.13
> $ ccm populate -n 3
> $ ccm start
> $ ccm status
> Cluster: 'MaterializedView'
> ---
> node1: UP
> node3: UP
> node2: UP
> $ ccm node1 cqlsh
> Connected to MaterializedView at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 3.0.13 | CQL spec 3.4.0 | Native protocol v4]
> Use HELP for help.
> cqlsh> CREATE KEYSPACE test WITH replication = {'class':'SimpleStrategy', 'replication_factor':3};
> cqlsh> CREATE TABLE test.test ( id text PRIMARY KEY , value1 text , value2 text, value3 text);
> $ ccm node1 ring test
> Datacenter: datacenter1
> ==
> Address     Rack    Status  State   Load       Owns      Token
>                                                          3074457345618258602
> 127.0.0.1   rack1   Up      Normal  64.86 KB   100.00%   -9223372036854775808
> 127.0.0.2   rack1   Up      Normal  86.49 KB   100.00%   -3074457345618258603
> 127.0.0.3   rack1   Up      Normal  89.04 KB   100.00%   3074457345618258602
> $ ccm node1 cqlsh
> cqlsh> INSERT INTO test.test (id, value1 , value2, value3 ) VALUES ('aaa', 'aaa', 'aaa' ,'aaa');
> cqlsh> INSERT INTO test.test (id, value1 , value2, value3 ) VALUES ('bbb', 'bbb', 'bbb' ,'bbb');
> cqlsh> SELECT token(id),id,value1 FROM test.test;
> system.token(id)     | id  | value1
> ----------------------+-----+--------
> -4737872923231490581 | aaa |    aaa
> -3071845237020185195 | bbb |    bbb
> (2 rows)
> cqlsh> CREATE MATERIALIZED VIEW test.test_view AS SELECT value1, id FROM test.test WHERE id IS NOT NULL AND value1 IS NOT NULL AND TOKEN(id) > -9223372036854775808 AND TOKEN(id) < -3074457345618258603 PRIMARY KEY(value1, id) WITH CLUSTERING ORDER BY (id ASC);
> ServerError: java.lang.ClassCastException: org.apache.cassandra.cql3.TokenRelation cannot be cast to org.apache.cassandra.cql3.SingleColumnRelation
> {code}
> Stacktrace :
> {code:java}
> INFO [MigrationStage:1] 2017-04-19 18:32:48,131 ColumnFamilyStore.java:389 - Initializing test.test
> WARN [SharedPool-Worker-1] 2017-04-19 18:44:07,263 FBUtilities.java:337 - Trigger directory doesn't exist, please create it and try again.
> ERROR [SharedPool-Worker-1] 2017-04-19 18:46:10,072 QueryMessage.java:128 - Unexpected error during query
> java.lang.ClassCastException: org.apache.cassandra.cql3.TokenRelation cannot be cast to org.apache.cassandra.cql3.SingleColumnRelation
> at org.apache.cassandra.db.view.View.relationsToWhereClause(View.java:275) ~[apache-cassandra-3.0.13.jar:3.0.13]
> at org.apache.cassandra.cql3.statements.CreateViewStatement.announceMigration(CreateViewStatement.java:219) ~[apache-cassandra-3.0.13.jar:3.0.13]
> at org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:93) ~[apache-cassandra-3.0.13.jar:3.0.13]
> at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206) ~[apache-cassandra-3.
[jira] [Updated] (CASSANDRA-12757) NPE if allocate_tokens_for_keyspace is typo/doesn't exist.
[ https://issues.apache.org/jira/browse/CASSANDRA-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Stevenson updated CASSANDRA-12757:
-
    Status: Patch Available  (was: Open)

> NPE if allocate_tokens_for_keyspace is typo/doesn't exist.
> --
>
> Key: CASSANDRA-12757
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12757
> Project: Cassandra
> Issue Type: Bug
> Components: Configuration
> Reporter: Jeremiah Jordan
> Priority: Major
> Labels: lhf
> Fix For: 3.0.x, 3.11.x
>
> If the keyspace specified in allocate_tokens_for_keyspace does not exist you get an NPE. Should probably have a better error here letting people know what the issue was.
> {code}
> INFO 21:07:22,582 StorageService.java:1152 - JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> ERROR 21:07:22,590 CassandraDaemon.java:709 - Exception encountered during startup
> java.lang.NullPointerException: null
> at org.apache.cassandra.db.Keyspace.createReplicationStrategy(Keyspace.java:325) ~[cassandra-all-3.0.8.jar:3.0.8]
> at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:298) ~[cassandra-all-3.0.8.jar:3.0.8]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129) ~[cassandra-all-3.0.8.jar:3.0.8]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106) ~[cassandra-all-3.0.8.jar:3.0.8]
> at org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:201) ~[cassandra-all-3.0.8.jar:3.0.8]
> at org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173) ~[cassandra-all-3.0.8:3.0.8]
> {code}
[jira] [Commented] (CASSANDRA-12757) NPE if allocate_tokens_for_keyspace is typo/doesn't exist.
[ https://issues.apache.org/jira/browse/CASSANDRA-12757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16475180#comment-16475180 ] Damien Stevenson commented on CASSANDRA-12757:
--

I have tested this too and get the same behaviour that [~Ge] has shared above. However, I do not think that failing with an AssertionError is the most appropriate way to handle the case where an incorrect keyspace value is provided for the allocate_tokens_for_keyspace option. To that end, I have created a patch that aims to better handle this situation. Please review and let me know if you have any comments.

Patch:
|[trunk|https://github.com/apache/cassandra/compare/trunk...damien-instaclustr:12757-NPE-trunk]|[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...damien-instaclustr:12757-NPE-3.11]|[3.0|https://github.com/apache/cassandra/compare/cassandra-3.0...damien-instaclustr:12757-NPE-3.0]|[dtest|https://github.com/apache/cassandra-dtest/compare/master...damien-instaclustr:12757-npe-allocate-tokens-for-keyspace]|

CircleCI unit test results:
|[trunk|https://circleci.com/gh/damien-instaclustr/cassandra/56]|[3.11|https://circleci.com/gh/damien-instaclustr/cassandra/51]|[3.0|https://circleci.com/gh/damien-instaclustr/cassandra/54]|

> NPE if allocate_tokens_for_keyspace is typo/doesn't exist.
> --
>
> Key: CASSANDRA-12757
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12757
> Project: Cassandra
> Issue Type: Bug
> Components: Configuration
> Reporter: Jeremiah Jordan
> Priority: Major
> Labels: lhf
> Fix For: 3.0.x, 3.11.x
>
> If the keyspace specified in allocate_tokens_for_keyspace does not exist you get an NPE. Should probably have a better error here letting people know what the issue was.
> {code}
> INFO 21:07:22,582 StorageService.java:1152 - JOINING: getting bootstrap token
> Exception (java.lang.NullPointerException) encountered during startup: null
> ERROR 21:07:22,590 CassandraDaemon.java:709 - Exception encountered during startup
> java.lang.NullPointerException: null
> at org.apache.cassandra.db.Keyspace.createReplicationStrategy(Keyspace.java:325) ~[cassandra-all-3.0.8.jar:3.0.8]
> at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:298) ~[cassandra-all-3.0.8.jar:3.0.8]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129) ~[cassandra-all-3.0.8.jar:3.0.8]
> at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106) ~[cassandra-all-3.0.8.jar:3.0.8]
> at org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:201) ~[cassandra-all-3.0.8.jar:3.0.8]
> at org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:173) ~[cassandra-all-3.0.8:3.0.8]
> {code}
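The intent of such a fix is to fail fast with a descriptive configuration error instead of letting the null keyspace metadata propagate into an NPE deep in bootstrap. A hedged sketch of that validation (the method and exception class here are illustrative stand-ins, not the actual patch):

```java
import java.util.Set;

public class TokenAllocationConfig {
    // Illustrative stand-in for Cassandra's ConfigurationException.
    static class ConfigurationException extends RuntimeException {
        ConfigurationException(String msg) { super(msg); }
    }

    // Validate the configured keyspace name against the known keyspaces
    // before token allocation begins, so a typo'd name produces a clear
    // error message rather than a NullPointerException.
    public static String validateAllocationKeyspace(String ks, Set<String> knownKeyspaces) {
        if (ks == null || !knownKeyspaces.contains(ks))
            throw new ConfigurationException(
                "allocate_tokens_for_keyspace refers to an unknown keyspace: " + ks);
        return ks;
    }

    public static void main(String[] args) {
        Set<String> known = Set.of("system", "my_app");
        System.out.println(validateAllocationKeyspace("my_app", known)); // prints my_app
        try {
            validateAllocationKeyspace("my_ap", known); // typo'd keyspace name
        } catch (ConfigurationException e) {
            System.out.println(e.getMessage());
        }
    }
}
```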
[jira] [Commented] (CASSANDRA-10789) Allow DBAs to kill individual client sessions from certain IP(s) and temporarily block subsequent connections without bouncing JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468486#comment-16468486 ] Damien Stevenson commented on CASSANDRA-10789:
--

Thanks for the comments.
{quote}We should at least use a set of blacklisted hosts rather than iterating a list.{quote}
I have updated the patch to include this.
{quote}Connection tracker should probably be updated to be a Multimap so we can look up the connections to kill without iterating.{quote}
I'm not sure about this one. I don't think it is a straightforward change to make. However, if [~aweisberg] or anyone else is able to provide some pointers on how it might be implemented, I'd be happy to work on it.

> Allow DBAs to kill individual client sessions from certain IP(s) and temporarily block subsequent connections without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
> Issue Type: Improvement
> Components: Coordination
> Reporter: Wei Deng
> Assignee: Damien Stevenson
> Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>
> In production, there could be hundreds of clients connected to a Cassandra cluster (maybe even from different applications), and if they use the DataStax Java Driver, each client will establish at least one TCP connection to a Cassandra server (see https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all normal, and at any given time you can indeed see hundreds of ESTABLISHED connections to port 9042 on a C* server (from netstat -na).
> The problem is that sometimes, when a C* cluster is under heavy load and the DBA identifies a client session that sends an abusive amount of traffic to the C* server and would like to stop it, they want a lightweight approach rather than shutting down the JVM or rolling-restarting the whole cluster to kill all hundreds of connections just to kill a single client session. If the DBA had root privilege, they could do something at the OS network level to achieve the same goal, but oftentimes the enterprise DBA role is separate from the OS sysadmin role, so DBAs usually don't have that privilege.
> This is especially helpful on a multi-tenant C* cluster where you want the impact of handling such a client to be minimal to the other applications. This feature (killing individual sessions) seems to be common in other databases (regardless of whether the client has reconnect logic or not). It could be implemented as a JMX MBean method and exposed through nodetool to the DBAs.
> Note: due to the CQL driver's automated reconnection, simply killing the currently connected client session will not work well, so the JMX parameter should be an IP address or a list of IP addresses, so that the Cassandra server can terminate existing connections with that IP and block future connection attempts from that IP for the remaining time until the JVM is restarted.
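The set-based blacklist lookup discussed in the comment above can be sketched as follows. This is an illustrative class under assumed names, not the attached patch; ConcurrentHashMap.newKeySet() gives a thread-safe Set so a JMX thread can mutate the blacklist while connection-handling threads read it:

```java
import java.net.InetAddress;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ClientBlacklist {
    // A concurrent Set gives O(1) membership checks on every new
    // connection, instead of scanning a List of blocked addresses.
    private final Set<InetAddress> blocked = ConcurrentHashMap.newKeySet();

    public void block(InetAddress addr)   { blocked.add(addr); }
    public void unblock(InetAddress addr) { blocked.remove(addr); }

    // Called when a client connects; true means reject the connection.
    public boolean isBlocked(InetAddress addr) { return blocked.contains(addr); }

    public static void main(String[] args) {
        ClientBlacklist bl = new ClientBlacklist();
        InetAddress addr = InetAddress.getLoopbackAddress();
        bl.block(addr);
        System.out.println(bl.isBlocked(addr)); // prints true
    }
}
```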
[jira] [Updated] (CASSANDRA-10789) Allow DBAs to kill individual client sessions from certain IP(s) and temporarily block subsequent connections without bouncing JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Stevenson updated CASSANDRA-10789:
-
    Attachment: 10789-trunk.txt

> Allow DBAs to kill individual client sessions from certain IP(s) and temporarily block subsequent connections without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
> Issue Type: Improvement
> Components: Coordination
> Reporter: Wei Deng
> Assignee: Damien Stevenson
> Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>
> In production, there could be hundreds of clients connected to a Cassandra cluster (maybe even from different applications), and if they use the DataStax Java Driver, each client will establish at least one TCP connection to a Cassandra server (see https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all normal, and at any given time you can indeed see hundreds of ESTABLISHED connections to port 9042 on a C* server (from netstat -na).
> The problem is that sometimes, when a C* cluster is under heavy load and the DBA identifies a client session that sends an abusive amount of traffic to the C* server and would like to stop it, they want a lightweight approach rather than shutting down the JVM or rolling-restarting the whole cluster to kill all hundreds of connections just to kill a single client session. If the DBA had root privilege, they could do something at the OS network level to achieve the same goal, but oftentimes the enterprise DBA role is separate from the OS sysadmin role, so DBAs usually don't have that privilege.
> This is especially helpful on a multi-tenant C* cluster where you want the impact of handling such a client to be minimal to the other applications. This feature (killing individual sessions) seems to be common in other databases (regardless of whether the client has reconnect logic or not). It could be implemented as a JMX MBean method and exposed through nodetool to the DBAs.
> Note: due to the CQL driver's automated reconnection, simply killing the currently connected client session will not work well, so the JMX parameter should be an IP address or a list of IP addresses, so that the Cassandra server can terminate existing connections with that IP and block future connection attempts from that IP for the remaining time until the JVM is restarted.
[jira] [Updated] (CASSANDRA-10789) Allow DBAs to kill individual client sessions from certain IP(s) and temporarily block subsequent connections without bouncing JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Stevenson updated CASSANDRA-10789: - Attachment: (was: 10789-trunk.txt)
[jira] [Updated] (CASSANDRA-10789) Allow DBAs to kill individual client sessions from certain IP(s) and temporarily block subsequent connections without bouncing JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Stevenson updated CASSANDRA-10789: - Attachment: 10789-trunk.txt
[jira] [Updated] (CASSANDRA-10789) Allow DBAs to kill individual client sessions from certain IP(s) and temporarily block subsequent connections without bouncing JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Stevenson updated CASSANDRA-10789: - Attachment: (was: 10789-trunk.txt)
[jira] [Updated] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Stevenson updated CASSANDRA-10789: - Status: Patch Available (was: Open)
[jira] [Commented] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466923#comment-16466923 ] Damien Stevenson commented on CASSANDRA-10789: -- I have attached a patch for the feature I described in my previous comment. Please review and let me know your thoughts. Thanks.
[jira] [Updated] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Stevenson updated CASSANDRA-10789: - Attachment: 10789-trunk-dtest.txt
[jira] [Updated] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Stevenson updated CASSANDRA-10789: - Attachment: 10789-trunk.txt
[jira] [Commented] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM
[ https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16461961#comment-16461961 ] Damien Stevenson commented on CASSANDRA-10789: -- I have developed some code to test this out. Killing individual sessions doesn't work well on its own, as the driver's reconnection policy can immediately create another session, leaving little to no interruption to the client. Instead, I have been testing a feature to blacklist one or more IP addresses/hostnames. When a hostname is blacklisted, any existing connections from that hostname are immediately closed and any new connections continue to be blocked. The hostname stays blocked until Cassandra restarts or the hostname is "un-blacklisted". What does everyone think about this implementation? I plan to have a patch available soon.
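The blacklist semantics described in the comment above (immediately close existing connections, keep blocking new ones until restart or un-blacklisting) boil down to a small thread-safe registry consulted on each inbound connection. The following is a hypothetical sketch of that bookkeeping only, not the attached 10789-trunk.txt patch; the class and method names are invented:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the blacklist registry described in the comment
// above. Names (ClientBlacklist, blacklist, unblacklist, isBlocked) are
// invented for illustration and are not taken from the attached patch.
public class ClientBlacklist {
    // Concurrent set: connection-handling threads read it on every new
    // inbound connection, while the JMX/nodetool thread mutates it.
    private final Set<String> blocked = ConcurrentHashMap.newKeySet();

    // Returns true if the host was newly blacklisted; the caller would then
    // close any connections currently established from that host.
    public boolean blacklist(String host) {
        return blocked.add(host);
    }

    // Lifts the block without restarting the JVM ("un-blacklisting").
    public boolean unblacklist(String host) {
        return blocked.remove(host);
    }

    // Checked on each new connection attempt; blocked hosts are rejected.
    public boolean isBlocked(String host) {
        return blocked.contains(host);
    }
}
```

Exposing `blacklist`/`unblacklist` through a JMX MBean, as the ticket suggests, would let nodetool drive this without root privileges on the host.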
[jira] [Updated] (CASSANDRA-10023) Emit a metric for number of local read and write calls
[ https://issues.apache.org/jira/browse/CASSANDRA-10023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Stevenson updated CASSANDRA-10023: - Status: Patch Available (was: Open) > Emit a metric for number of local read and write calls > -- > > Key: CASSANDRA-10023 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10023 > Project: Cassandra > Issue Type: Improvement >Reporter: sankalp kohli >Assignee: Damien Stevenson >Priority: Minor > Labels: lhf > Fix For: 4.x > > Attachments: 10023-trunk-dtests.txt, 10023-trunk.txt, > CASSANDRA-10023.patch > > > Many C* drivers have a feature to be replica-aware and choose a co-ordinator > that is a replica. We should add a metric that tells us whether all calls > to the co-ordinator are replica-aware. > We have seen issues where clients think they are replica-aware when they > forget to add the routing key at various places in the code. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
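The metric the ticket asks for — how often the coordinator of a request is itself a replica — can be tracked with a pair of counters updated per coordinated request. This is a minimal sketch under that reading of the ticket; the class and method names are invented and are not taken from the attached patch:

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of a "local read/write calls" metric. Names
// (LocalRequestMetric, record, localRatio) are invented for illustration.
public class LocalRequestMetric {
    // LongAdder scales better than AtomicLong under the write-heavy,
    // many-threaded updates typical of a request path.
    private final LongAdder total = new LongAdder();
    private final LongAdder local = new LongAdder();

    // Called once per coordinated request; localReplica is true when the
    // coordinator node itself holds a replica of the partition.
    public void record(boolean localReplica) {
        total.increment();
        if (localReplica)
            local.increment();
    }

    public long totalCount() { return total.sum(); }
    public long localCount() { return local.sum(); }

    // Fraction of requests where the driver picked a replica as coordinator;
    // a low ratio suggests the client is not actually token/replica-aware,
    // e.g. because it forgot to set the routing key.
    public double localRatio() {
        long t = total.sum();
        return t == 0 ? 0.0 : (double) local.sum() / t;
    }
}
```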
[jira] [Commented] (CASSANDRA-10023) Emit a metric for number of local read and write calls
[ https://issues.apache.org/jira/browse/CASSANDRA-10023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445283#comment-16445283 ] Damien Stevenson commented on CASSANDRA-10023: -- I have attached a patch, including unit tests and dtests. Please review and let me know your thoughts. Cheers.
[jira] [Updated] (CASSANDRA-10023) Emit a metric for number of local read and write calls
[ https://issues.apache.org/jira/browse/CASSANDRA-10023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Stevenson updated CASSANDRA-10023: - Attachment: 10023-trunk-dtests.txt
[jira] [Updated] (CASSANDRA-10023) Emit a metric for number of local read and write calls
[ https://issues.apache.org/jira/browse/CASSANDRA-10023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Damien Stevenson updated CASSANDRA-10023: - Attachment: 10023-trunk.txt
[jira] [Commented] (CASSANDRA-10023) Emit a metric for number of local read and write calls
[ https://issues.apache.org/jira/browse/CASSANDRA-10023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16427736#comment-16427736 ] Damien Stevenson commented on CASSANDRA-10023: -- Looks like this ticket has gone quiet. I volunteer to work on it if no one has any objections.