[jira] [Commented] (CASSANDRA-12397) Altering a column's type breaks commitlog replay
[ https://issues.apache.org/jira/browse/CASSANDRA-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528543#comment-15528543 ]

Stefania commented on CASSANDRA-12397:
--

Reproduced in 3.0 HEAD by following the steps of CASSANDRA-11820:

{code}
cqlsh> CREATE KEYSPACE ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
cqlsh> CREATE TABLE ks.test (a int PRIMARY KEY, b int);
cqlsh> INSERT INTO ks.test (a, b) VALUES (1, 1);
cqlsh> ALTER TABLE ks.test ALTER b TYPE BLOB;
cqlsh> SELECT * from ks.test ;
{code}

At this point restarting the node will give the following exception:

{code}
INFO 06:00:03 Replaying /home/stefi/git/cstar/cassandra/bin/../data/commitlog/CommitLog-6-1475042184522.log, /home/stefi/git/cstar/cassandra/bin/../data/commitlog/CommitLog-6-1475042184523.log
ERROR 06:00:03 Exiting due to error while processing commit log during initialization.
org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: Unexpected error deserializing mutation; saved to /tmp/mutation4358621772356735283dat. This may be caused by replaying a mutation against a table with the same name but incompatible schema. Exception follows: java.io.IOError: java.io.EOFException
        at org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:681) [main/:na]
        at org.apache.cassandra.db.commitlog.CommitLogReplayer.replayMutation(CommitLogReplayer.java:597) [main/:na]
        at org.apache.cassandra.db.commitlog.CommitLogReplayer.replaySyncSection(CommitLogReplayer.java:550) [main/:na]
        at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:445) [main/:na]
        at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:145) [main/:na]
        at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:181) [main/:na]
        at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:161) [main/:na]
        at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:293) [main/:na]
        at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:568) [main/:na]
        at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:696) [main/:na]
{code}

Note that the mutation was actually serialized in an sstable that sstabledump can read without problems:

{code}
sstabledump data/data/ks/test-5b23e8e0854011e69e1cf96348c3ad08/mc-1-big-Data.db
[
  {
    "partition" : {
      "key" : [ "1" ],
      "position" : 0
    },
    "rows" : [
      {
        "type" : "row",
        "position" : 18,
        "liveness_info" : { "tstamp" : "2016-09-28T05:57:05.612504Z" },
        "cells" : [
          { "name" : "b", "value" : "1" }
        ]
      }
    ]
  }
]
{code}

> Altering a column's type breaks commitlog replay > > > Key: CASSANDRA-12397 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12397 > Project: Cassandra > Issue Type: Bug >Reporter: Carl Yeksigian >Assignee: Stefania > > When switching from a fixed-length column to a variable-length column, > replaying the commitlog on restart will have the same issue as > CASSANDRA-11820. Seems like it is related to the schema being flushed and > used when restarted, but commitlogs having been written in the old format. > {noformat} > org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: > Unexpected error deserializing mutation; saved to > /tmp/mutation4816372620457789996dat. This may be caused by replaying a > mutation against a table with the same name but incompatible schema. 
> Exception follows: java.io.IOError: java.io.EOFException: EOF after 259 bytes > out of 3336 > at > org.apache.cassandra.db.commitlog.CommitLogReader.readMutation(CommitLogReader.java:409) > [main/:na] > at > org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:342) > [main/:na] > at > org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:201) > [main/:na] > at > org.apache.cassandra.db.commitlog.CommitLogReader.readAllFiles(CommitLogReader.java:84) > [main/:na] > at > org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:139) > [main/:na] > at > org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:177) > [main/:na] > at > org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:158) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:316) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:591) > [ma
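The failure mode reported here lends itself to a small illustration. The following is a minimal, self-contained Java sketch — not Cassandra's actual cell or commitlog serializer, and the class and method names are invented — built on the assumption that a value written under a fixed-length type carries no length prefix while a variable-length type is read through one; replaying the old bytes under the new schema then mis-parses the stream and ends in an EOFException, matching the stack traces above.

{code}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

// Hypothetical illustration of the fixed-vs-variable length mismatch; this is
// not Cassandra's commitlog or cell serialization code.
public class FixedVsVariableReplaySketch
{
    // Old schema (int): the value is written as 4 raw bytes, no length prefix.
    static byte[] writeFixedInt(int value) throws IOException
    {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new DataOutputStream(bytes).writeInt(value);
        return bytes.toByteArray();
    }

    // New schema (blob): the reader expects a length prefix before the value.
    static byte[] readVariableLength(DataInputStream in) throws IOException
    {
        int length = in.readInt();     // consumes the 4 bytes that *were* the value
        byte[] value = new byte[length];
        in.readFully(value);           // nothing left to read -> EOFException
        return value;
    }

    public static void main(String[] args) throws IOException
    {
        byte[] logged = writeFixedInt(1); // what an old commitlog segment would hold
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(logged));
        try
        {
            readVariableLength(in);
        }
        catch (EOFException e)
        {
            System.out.println("replay fails: " + e); // same shape of failure as above
        }
    }
}
{code}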
[jira] [Assigned] (CASSANDRA-12397) Altering a column's type breaks commitlog replay
[ https://issues.apache.org/jira/browse/CASSANDRA-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefania reassigned CASSANDRA-12397: Assignee: Stefania > Altering a column's type breaks commitlog replay > > > Key: CASSANDRA-12397 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12397 > Project: Cassandra > Issue Type: Bug >Reporter: Carl Yeksigian >Assignee: Stefania > > When switching from a fixed-length column to a variable-length column, > replaying the commitlog on restart will have the same issue as > CASSANDRA-11820. Seems like it is related to the schema being flushed and > used when restarted, but commitlogs having been written in the old format. > {noformat} > org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: > Unexpected error deserializing mutation; saved to > /tmp/mutation4816372620457789996dat. This may be caused by replaying a > mutation against a table with the same name but incompatible schema. > Exception follows: java.io.IOError: java.io.EOFException: EOF after 259 bytes > out of 3336 > at > org.apache.cassandra.db.commitlog.CommitLogReader.readMutation(CommitLogReader.java:409) > [main/:na] > at > org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:342) > [main/:na] > at > org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:201) > [main/:na] > at > org.apache.cassandra.db.commitlog.CommitLogReader.readAllFiles(CommitLogReader.java:84) > [main/:na] > at > org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:139) > [main/:na] > at > org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:177) > [main/:na] > at > org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:158) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:316) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:591) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:720) > [main/:na] > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa resolved CASSANDRA-12718. Resolution: Fixed Committed {{c7f6ba8a42944338ec3e7d6793383b5537dfd82a}} , thanks Jon. > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug > Components: Documentation and Website >Reporter: Jon Haddad >Assignee: Jon Haddad >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
cassandra git commit: fixed typos in set example as well as UDT example
Repository: cassandra
Updated Branches:
  refs/heads/trunk d45f323eb -> c7f6ba8a4

fixed typos in set example as well as UDT example

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7f6ba8a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7f6ba8a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7f6ba8a

Branch: refs/heads/trunk
Commit: c7f6ba8a42944338ec3e7d6793383b5537dfd82a
Parents: d45f323
Author: Jon Haddad
Authored: Tue Sep 27 20:25:34 2016 -0700
Committer: Jeff Jirsa
Committed: Tue Sep 27 22:50:09 2016 -0700

--
 doc/source/cql/types.rst | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7f6ba8a/doc/source/cql/types.rst
--
diff --git a/doc/source/cql/types.rst b/doc/source/cql/types.rst
index e452f35..62e74ec 100644
--- a/doc/source/cql/types.rst
+++ b/doc/source/cql/types.rst
@@ -281,7 +281,7 @@ A ``set`` is a (sorted) collection of unique values. You can define and insert a
     VALUES ('cat.jpg', 'jsmith', { 'pet', 'cute' });
 // Replace the existing set entirely
-UPDATE images SET tags = { 'kitten', 'cat', 'lol' } WHERE id = 'jsmith';
+UPDATE images SET tags = { 'kitten', 'cat', 'lol' } WHERE name = 'cat.jpg';
 Further, sets support:
@@ -388,7 +388,7 @@ type, including collections or other UDT. For instance::
     CREATE TYPE address (
         street text,
         city text,
-        zip int,
+        zip text,
         phones map
     )
@@ -426,7 +426,7 @@ For instance, one could insert into the table define in the previous section usi
             zip: '20500',
             phones: { 'cell' : { country_code: 1, number: '202 456-' },
                       'landline' : { country_code: 1, number: '...' } }
-        }
+        },
         'work' : {
             street: '1600 Pennsylvania Ave NW',
             city: 'Washington',
[jira] [Commented] (CASSANDRA-12698) add json/yaml format option to nodetool status
[ https://issues.apache.org/jira/browse/CASSANDRA-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528481#comment-15528481 ] Shogo Hoshii commented on CASSANDRA-12698: -- [~urandom] Thank you for sharing the information! I will check the tool. > add json/yaml format option to nodetool status > -- > > Key: CASSANDRA-12698 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12698 > Project: Cassandra > Issue Type: Improvement >Reporter: Shogo Hoshii >Assignee: Shogo Hoshii > Attachments: ntstatus_json.patch, sample.json, sample.yaml > > > Hello, > This patch enables nodetool status to be output in json/yaml format. > I think this format could be useful interface for tools that operate or > deploy cassandra. > The format could free tools from parsing the result in their own way. > It would be great if someone would review this patch. > Thank you. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528456#comment-15528456 ] Ben Slater commented on CASSANDRA-12718: Yeah, that's probably the way you'd typically do it although I guess having it as an int illustrates using a non-text data type. > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug > Components: Documentation and Website >Reporter: Jon Haddad >Assignee: Jon Haddad >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528452#comment-15528452 ] Jon Haddad commented on CASSANDRA-12718: Fixed the comma before 'work' and changed zip to text. > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug > Components: Documentation and Website >Reporter: Jon Haddad >Assignee: Jon Haddad >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528444#comment-15528444 ] Jon Haddad commented on CASSANDRA-12718: It seems like zip should probably be a text field, not an int. > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug > Components: Documentation and Website >Reporter: Jon Haddad >Assignee: Jon Haddad >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528435#comment-15528435 ] Ben Slater edited comment on CASSANDRA-12718 at 9/28/16 5:14 AM: - The JSON in the insert also needs to be updated with a couple of typos (comma before work and remove quotes around zip). Probably easiest to do them as one ticket. Fixed version is: INSERT INTO user (name, addresses) VALUES ('z3 Pr3z1den7', { 'home' : { street: '1600 Pennsylvania Ave NW', city: 'Washington', zip: 20500, phones: { 'cell' : { country_code: 1, number: '202 456-' }, 'landline' : { country_code: 1, number: '...' } } }, 'work' : { street: '1600 Pennsylvania Ave NW', city: 'Washington', zip: 20500, phones: { 'fax' : { country_code: 1, number: '...' } } } }) was (Author: slater_ben): The JSON in the insert also needs to be updated with a couple of typos (comma before work and remove quotes around zip). Probably easiest to do them as one ticket. Fixed version is: ```INSERT INTO user (name, addresses) VALUES ('z3 Pr3z1den7', { 'home' : { street: '1600 Pennsylvania Ave NW', city: 'Washington', zip: 20500, phones: { 'cell' : { country_code: 1, number: '202 456-' }, 'landline' : { country_code: 1, number: '...' } } }, 'work' : { street: '1600 Pennsylvania Ave NW', city: 'Washington', zip: 20500, phones: { 'fax' : { country_code: 1, number: '...' } } } })``` > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug > Components: Documentation and Website >Reporter: Jon Haddad >Assignee: Jon Haddad >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528435#comment-15528435 ] Ben Slater commented on CASSANDRA-12718: The JSON in the insert also needs to be updated with a couple of typos (comma before work and remove quotes around zip). Probably easiest to do them as one ticket. Fixed version is: ```INSERT INTO user (name, addresses) VALUES ('z3 Pr3z1den7', { 'home' : { street: '1600 Pennsylvania Ave NW', city: 'Washington', zip: 20500, phones: { 'cell' : { country_code: 1, number: '202 456-' }, 'landline' : { country_code: 1, number: '...' } } }, 'work' : { street: '1600 Pennsylvania Ave NW', city: 'Washington', zip: 20500, phones: { 'fax' : { country_code: 1, number: '...' } } } })``` > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug > Components: Documentation and Website >Reporter: Jon Haddad >Assignee: Jon Haddad >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528432#comment-15528432 ] Jon Haddad commented on CASSANDRA-12718: Indeed. I force push updated my branch. > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug > Components: Documentation and Website >Reporter: Jon Haddad >Assignee: Jon Haddad >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException
[ https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528422#comment-15528422 ] Jeff Jirsa commented on CASSANDRA-12700: Regardless of how you managed to get {{system_auth.roles}} into this state, Cassandra should do something more intelligent than throwing an NPE here. In this case, I think the most sane answer is that if {{is_superuser}} or {{can_login}} is somehow null, we treat that null as false. I've uploaded a patch that makes exactly that assumption: Pushed a branch here for trunk: https://github.com/jeffjirsa/cassandra/commit/cf4d744f1a2e29497fe5dca1246f2d14c0ab6375 UTest output: http://cassci.datastax.com/job/jeffjirsa-cassandra-12700-testall/lastCompletedBuild/testReport/ Dtest change: https://github.com/jeffjirsa/cassandra-dtest/commit/aa1bd2ffdfdd88307068b02506930b1183ed738d Dtest output: http://cassci.datastax.com/job/jeffjirsa-cassandra-12700-dtest/lastCompletedBuild/testReport/ Will review test output in the morning, and hopefully we can get a review and get this fixed shortly thereafter. > During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes > Connection get lost, because of Server NullPointerException > -- > > Key: CASSANDRA-12700 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12700 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra cluster with two nodes running C* version > 3.7.0 and Python Driver 3.7 using Python 2.7.11. > OS: Red Hat Enterprise Linux 6.x x64, > RAM :8GB > DISK :210GB > Cores: 2 > Java 1.8.0_73 JRE >Reporter: Rajesh Radhakrishnan >Assignee: Jeff Jirsa > Fix For: 3.x > > > In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) > with Python driver 3.7. Trying to insert 2 million row or more data into the > database, but sometimes we are getting "Null pointer Exception". > We are using Python 2.7.11 and Java 1.8.0_73 in the Cassandra nodes and in > the client its Python 2.7.12. 
> {code:title=cassandra server log} > ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - > Unexpected exception during request; channel = [id: 0xc208da86, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.service.ClientState.login(ClientState.java:227) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [apache-cassandra-3.7.0.jar:3.7.0] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > java.util.concur
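A minimal sketch of the null-handling idea described in Jeff Jirsa's comment above — hypothetical names and a plain map standing in for the roles row, not the actual patch linked there — treating an absent or null boolean column as false so that the login check never dereferences a null:

{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch only: RoleRowSketch and getBooleanOrFalse are invented
// names, and a Map stands in for the system_auth.roles row.
final class RoleRowSketch
{
    static boolean getBooleanOrFalse(Map<String, Object> row, String column)
    {
        Object value = row.get(column);           // null if the cell was never written
        return value instanceof Boolean && (Boolean) value;
    }

    public static void main(String[] args)
    {
        Map<String, Object> row = new HashMap<>();
        row.put("role", "someuser");
        // is_superuser / can_login left null, e.g. by a manual INSERT into the roles table
        System.out.println(getBooleanOrFalse(row, "can_login"));    // false, no NPE
        System.out.println(getBooleanOrFalse(row, "is_superuser")); // false, no NPE
    }
}
{code}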
[jira] [Commented] (CASSANDRA-12632) Failure in LogTransactionTest.testUnparsableFirstRecord-compression
[ https://issues.apache.org/jira/browse/CASSANDRA-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528308#comment-15528308 ] Stefania commented on CASSANDRA-12632: -- Thanks for the review! The tidiers contain a reference to the parent transaction, and what matters is for this reference to be released. This is done either by calling {{run()}} or {{abort()}} on the tidiers. So either we call one of these methods, or we pass the tidier to {{SSTableReader.markObsolete()}}, which takes care of calling {{run()}} when the sstable reference is released. I checked all the calls to {{LogTransaction.obsoleted()}} in {{LogTransactionTest}}, and it looks good to me, all tidiers are handled by one of the 3 mechanisms just described. Unless objections, I'll consider the non binding +1 a full +1 and commit tomorrow; this patch only modifies a unit test and is border line ninja fix in my opinion. > Failure in LogTransactionTest.testUnparsableFirstRecord-compression > --- > > Key: CASSANDRA-12632 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12632 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Joel Knighton >Assignee: Stefania > Fix For: 3.0.x, 3.x > > > Stacktrace: > {code} > junit.framework.AssertionFailedError: > [/home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Index.db, > > /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-TOC.txt, > > /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Filter.db, > > /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Data.db, > > /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc_txn_compaction_73af4e00-78d3-11e6-9858-93d33dad3001.log] > at > org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1228) > at > org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1196) > at > org.apache.cassandra.db.lifecycle.LogTransactionTest.testCorruptRecord(LogTransactionTest.java:1040) > at > org.apache.cassandra.db.lifecycle.LogTransactionTest.testUnparsableFirstRecord(LogTransactionTest.java:988) > {code} > Example failure: > http://cassci.datastax.com/job/cassandra-3.9_testall/89/testReport/junit/org.apache.cassandra.db.lifecycle/LogTransactionTest/testUnparsableFirstRecord_compression/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
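For readers unfamiliar with the lifecycle being discussed, here is a rough sketch of the reference-release pattern described in the comment above, using invented names rather than LogTransaction's real API: the tidier's reference to its parent transaction is released either directly via run() or abort(), or indirectly when the sstable that received it via markObsolete() drops its own reference.

{code}
// Invented names for illustration; not the real LogTransaction / SSTableReader API.
interface TidierSketch
{
    void run();    // normal path: perform the tidy work and release the parent reference
    void abort();  // error/test path: release the parent reference without tidying
}

final class SSTableSketch
{
    private TidierSketch obsoletion;

    void markObsolete(TidierSketch tidier)
    {
        this.obsoletion = tidier;       // deferred: will run when the last reference goes
    }

    void releaseLastReference()
    {
        if (obsoletion != null)
            obsoletion.run();           // the third way the parent reference is released
    }
}
{code}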
[jira] [Commented] (CASSANDRA-12461) Add hooks to StorageService shutdown
[ https://issues.apache.org/jira/browse/CASSANDRA-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528282#comment-15528282 ]

Stefania commented on CASSANDRA-12461:
--

I'm definitely in favor of what you've done by unifying the drain code. Behaviorally we are changing shutdown according to the table above: notably, we will also flush tables without durable writes, recycle commitlog segments, and stop compactions. I think this is the correct behavior; it may make shutdown a bit longer, but operators normally run drain before shutdown anyway, in which case it becomes a no-op during shutdown. I'm not sure if we need this in 3.0 as well, since the shutdown hooks are a new feature, but running a full drain on shutdown may be very useful when upgrading, in case operators forget to call drain.

Here is the review in detail (some points overlap or cancel each other):

* there are 3 dtest failures, {{cql_tests.LWTTester}}, which are not on trunk, but they seem unrelated and they pass locally; it is probably a glitch in dtests since the code was changed 2 days ago, but let's repeat the dtests at least two more times.
* shall we rename {{inShutdownHook}} to something like {{draining}} or {{drained}}?
* there is still one unprotected call to {{logger.warn}} in {{drain()}}, line 4462, and a call in {{runShutdownHooks()}}
* let's update the documentation for {{drain()}}
* shall we catch any exceptions in {{drain()}} to ensure the post shutdown hooks are run even if there is an exception?
* the documentation says that the post shutdown hooks should only be called on final shutdown, not on drain; is this still the case?
* here is a proposal for some extra work (feel free to turn it down): refactor {{setMode()}} to accept an optional log level instead of a boolean; when the optional is empty it should not log, so we could call this method also from the shutdown hook and possibly replace {{inShutdownHook}} with {{operationMode}}, provided this becomes volatile. I would also add an override, since most of the time it is called with the boolean set to true, so the override would have the logging level set to INFO.
* given that {{drain()}} is synchronized, can we not just look at {{inShutdownHook}} (or {{operationMode}}) at the beginning of {{drain()}} to solve CASSANDRA-12509? Is there anything else I am missing about it?
* the {{StorageService}} import is unused in the Enabled*.java files
* we could replace {{runShutdownHooks();}} with {{Throwables.perform(null, postShutdownHooks.stream().map(h -> h::run));}}, logging any non-null returned exceptions if not in final shutdown. This would not only avoid one method implementation, but also make the log call visible in {{drain()}}.

> Add hooks to StorageService shutdown > > > Key: CASSANDRA-12461 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12461 > Project: Cassandra > Issue Type: Bug >Reporter: Anthony Cozzie >Assignee: Anthony Cozzie > Fix For: 3.x > > Attachments: > 0001-CASSANDRA-12461-add-C-support-for-shutdown-runnables.patch > > > The JVM will usually run shutdown hooks in parallel. This can lead to > synchronization problems between Cassandra, services that depend on it, and > services it depends on. This patch adds some simple support for shutdown > hooks to StorageService. > This should nearly solve CASSANDRA-12011 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
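As an aside on the last bullet of the review above, a plain-Java sketch of the intended behaviour (this is not Cassandra's Throwables utility, and the class and field names are invented): run every post-shutdown hook even if an earlier one throws, collect what was thrown, and only log it when this is a drain rather than the final JVM shutdown.

{code}
import java.util.ArrayList;
import java.util.List;

// Invented names; a rough equivalent of the suggestion above in plain Java.
final class PostShutdownHooksSketch
{
    private final List<Runnable> postShutdownHooks = new ArrayList<>();

    void runPostShutdownHooks(boolean isFinalShutdown)
    {
        List<Throwable> failures = new ArrayList<>();
        for (Runnable hook : postShutdownHooks)
        {
            try
            {
                hook.run();
            }
            catch (Throwable t)
            {
                failures.add(t);   // keep running the remaining hooks
            }
        }
        if (!isFinalShutdown)
            failures.forEach(t -> System.err.println("Post-shutdown hook failed: " + t));
    }
}
{code}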
[jira] [Updated] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-12718: --- Reviewer: Jeff Jirsa > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug > Components: Documentation and Website >Reporter: Jon Haddad >Assignee: Jon Haddad >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-12718: --- Assignee: Jon Haddad (was: Jeff Jirsa) > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug > Components: Documentation and Website >Reporter: Jon Haddad >Assignee: Jon Haddad >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528267#comment-15528267 ]

Jeff Jirsa commented on CASSANDRA-12718:

{code}
CREATE TABLE images (
    name text PRIMARY KEY,
    owner text,
    tags set<text>  // A set of text values
);

INSERT INTO images (name, owner, tags)
            VALUES ('cat.jpg', 'jsmith', { 'pet', 'cute' });

// Replace the existing set entirely
UPDATE images SET tags = { 'kitten', 'cat', 'lol' } WHERE id = 'jsmith';
{code}

{{jsmith}} is the {{owner}}, not the {{name}}, so in addition to changing {{id}} to {{name}}, we (you) should also change {{jsmith}} to {{cat.jpg}}.

> typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug > Components: Documentation and Website >Reporter: Jon Haddad >Assignee: Jeff Jirsa >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa reassigned CASSANDRA-12718: -- Assignee: Jeff Jirsa > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug > Components: Documentation and Website >Reporter: Jon Haddad >Assignee: Jeff Jirsa >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-12718: --- Component/s: Documentation and Website > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug > Components: Documentation and Website >Reporter: Jon Haddad >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528252#comment-15528252 ] Jon Haddad commented on CASSANDRA-12718: https://github.com/apache/cassandra/pull/74 > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug >Reporter: Jon Haddad >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-12718) typo in cql examples
Jon Haddad created CASSANDRA-12718: -- Summary: typo in cql examples Key: CASSANDRA-12718 URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 Project: Cassandra Issue Type: Bug Reporter: Jon Haddad Priority: Trivial select query using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12718) typo in cql examples
[ https://issues.apache.org/jira/browse/CASSANDRA-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jon Haddad updated CASSANDRA-12718: --- Description: select query on sets using the wrong field (was: select query using the wrong field) > typo in cql examples > - > > Key: CASSANDRA-12718 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12718 > Project: Cassandra > Issue Type: Bug >Reporter: Jon Haddad >Priority: Trivial > > select query on sets using the wrong field -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException
[ https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa reassigned CASSANDRA-12700: -- Assignee: Jeff Jirsa > During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes > Connection get lost, because of Server NullPointerException > -- > > Key: CASSANDRA-12700 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12700 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra cluster with two nodes running C* version > 3.7.0 and Python Driver 3.7 using Python 2.7.11. > OS: Red Hat Enterprise Linux 6.x x64, > RAM :8GB > DISK :210GB > Cores: 2 > Java 1.8.0_73 JRE >Reporter: Rajesh Radhakrishnan >Assignee: Jeff Jirsa > Fix For: 3.x > > > In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) > with Python driver 3.7. Trying to insert 2 million row or more data into the > database, but sometimes we are getting "Null pointer Exception". > We are using Python 2.7.11 and Java 1.8.0_73 in the Cassandra nodes and in > the client its Python 2.7.12. > {code:title=cassandra server log} > ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - > Unexpected exception during request; channel = [id: 0xc208da86, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.service.ClientState.login(ClientState.java:227) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [apache-cassandra-3.7.0.jar:3.7.0] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > 
io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_73] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) > [apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-3.7.0.jar:3.7.0] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73] > ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - > Unexpected exception during request; channel = [id: 0x8e2eae00, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24) > ~[apache-cassandra-3.7.0.jar:3.7.0]
[jira] [Commented] (CASSANDRA-8167) sstablesplit tool can be made much faster with few JVM settings
[ https://issues.apache.org/jira/browse/CASSANDRA-8167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528186#comment-15528186 ] Yuki Morishita commented on CASSANDRA-8167: --- I agree sourcing cassandra-env.sh since current offline SSTable tools launch pseudo-online environment even though they say offline. One problem I'm having is windows scripts. {{cassandra}} launcher is powershell scriptized and sourcing cassandra-env.ps1 is done through function, but offline tools are just batch files. I don't know the approach here. cc [~JoshuaMcKenzie] > sstablesplit tool can be made much faster with few JVM settings > --- > > Key: CASSANDRA-8167 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8167 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Nikolai Grigoriev >Assignee: Yuki Morishita >Priority: Trivial > > I had to use sstablesplit tool intensively to split some really huge > sstables. The tool is painfully slow as it does compaction in one single > thread. > I have just found that one one of my machines the tool has crashed when I was > almost done with 152Gb sstable (!!!). > {code} > INFO 16:59:22,342 Writing Memtable-compactions_in_progress@1948660572(0/0 > serialized/live bytes, 1 ops) > INFO 16:59:22,352 Completed flushing > /cassandra-data/disk1/system/compactions_in_progress/system-compactions_in_progress-jb-79242-Data.db > (42 bytes) for commitlog position ReplayPosition(segmentId=1413904450653, > position=69178) > Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit > exceeded > at java.nio.HeapByteBuffer.duplicate(HeapByteBuffer.java:107) > at > org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:586) > at > org.apache.cassandra.utils.ByteBufferUtil.readBytesWithShortLength(ByteBufferUtil.java:596) > at > org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:61) > at > org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:36) > at > org.apache.cassandra.db.RangeTombstoneList$InOrderTester.isDeleted(RangeTombstoneList.java:751) > at > org.apache.cassandra.db.DeletionInfo$InOrderTester.isDeleted(DeletionInfo.java:422) > at > org.apache.cassandra.db.DeletionInfo$InOrderTester.isDeleted(DeletionInfo.java:403) > at > org.apache.cassandra.db.ColumnFamily.hasIrrelevantData(ColumnFamily.java:489) > at > org.apache.cassandra.db.compaction.PrecompactedRow.removeDeleted(PrecompactedRow.java:66) > at > org.apache.cassandra.db.compaction.PrecompactedRow.(PrecompactedRow.java:85) > at > org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:196) > at > org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:74) > at > org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:55) > at > org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:204) > at > com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143) > at > com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) > at > org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:154) > at > org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > 
org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) > at > org.apache.cassandra.db.compaction.SSTableSplitter.split(SSTableSplitter.java:38) > at > org.apache.cassandra.tools.StandaloneSplitter.main(StandaloneSplitter.java:150) > {code} > This has triggered my desire to see what memory settings are used for JVM > running the tool...and I have found that it runs with default Java settings > (no settings at all). > I have tried to apply the settings from C* itself and this resulted in over > 40% speed increase. It went from ~5Mb/s to 7Mb/s - from the compressed output > perspective. I believe this is mostly due to concurrent GC. I see my CPU > usage has increased to ~200%. But this is fine, this is an offline tool, the > node is down anyway. I know that concurrent GC (at least something like > -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled) > normally improves the performance of even primitive single-th
[jira] [Updated] (CASSANDRA-12580) Fix merkle tree size calculation
[ https://issues.apache.org/jira/browse/CASSANDRA-12580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-12580: --- Fix Version/s: 3.0.x > Fix merkle tree size calculation > > > Key: CASSANDRA-12580 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12580 > Project: Cassandra > Issue Type: Bug >Reporter: Paulo Motta >Assignee: Paulo Motta > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > > On CASSANDRA-5263 it was introduced dynamic merkle tree sizing based on > estimated number of partitions as {{estimatedDepth = lg(numPartitions)}}, but > on > [CompactionManager.doValidationCompaction|https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1052] > this is being calculated as: > {{int depth = numPartitions > 0 ? (int) > Math.min(Math.floor(Math.log(numPartitions)), 20) : 0;}} > This is actually calculating {{ln(numPartitions)}} (base-e) instead of > {{lg(numPartitions)}} (base-2), which causes merkle trees to lose resolution, > what may result in overstreaming. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12580) Fix merkle tree size calculation
[ https://issues.apache.org/jira/browse/CASSANDRA-12580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-12580: --- Status: Ready to Commit (was: Patch Available) > Fix merkle tree size calculation > > > Key: CASSANDRA-12580 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12580 > Project: Cassandra > Issue Type: Bug >Reporter: Paulo Motta >Assignee: Paulo Motta > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > > On CASSANDRA-5263 it was introduced dynamic merkle tree sizing based on > estimated number of partitions as {{estimatedDepth = lg(numPartitions)}}, but > on > [CompactionManager.doValidationCompaction|https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1052] > this is being calculated as: > {{int depth = numPartitions > 0 ? (int) > Math.min(Math.floor(Math.log(numPartitions)), 20) : 0;}} > This is actually calculating {{ln(numPartitions)}} (base-e) instead of > {{lg(numPartitions)}} (base-2), which causes merkle trees to lose resolution, > what may result in overstreaming. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12580) Fix merkle tree size calculation
[ https://issues.apache.org/jira/browse/CASSANDRA-12580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-12580: --- Fix Version/s: 3.x 2.2.x 2.1.x > Fix merkle tree size calculation > > > Key: CASSANDRA-12580 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12580 > Project: Cassandra > Issue Type: Bug >Reporter: Paulo Motta >Assignee: Paulo Motta > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > > On CASSANDRA-5263 it was introduced dynamic merkle tree sizing based on > estimated number of partitions as {{estimatedDepth = lg(numPartitions)}}, but > on > [CompactionManager.doValidationCompaction|https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1052] > this is being calculated as: > {{int depth = numPartitions > 0 ? (int) > Math.min(Math.floor(Math.log(numPartitions)), 20) : 0;}} > This is actually calculating {{ln(numPartitions)}} (base-e) instead of > {{lg(numPartitions)}} (base-2), which causes merkle trees to lose resolution, > what may result in overstreaming. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12580) Fix merkle tree size calculation
[ https://issues.apache.org/jira/browse/CASSANDRA-12580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528174#comment-15528174 ] Yuki Morishita commented on CASSANDRA-12580: Nice catch. Patch looks good to me. > Fix merkle tree size calculation > > > Key: CASSANDRA-12580 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12580 > Project: Cassandra > Issue Type: Bug >Reporter: Paulo Motta >Assignee: Paulo Motta > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > > On CASSANDRA-5263 it was introduced dynamic merkle tree sizing based on > estimated number of partitions as {{estimatedDepth = lg(numPartitions)}}, but > on > [CompactionManager.doValidationCompaction|https://github.com/apache/cassandra/blob/cassandra-2.1/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L1052] > this is being calculated as: > {{int depth = numPartitions > 0 ? (int) > Math.min(Math.floor(Math.log(numPartitions)), 20) : 0;}} > This is actually calculating {{ln(numPartitions)}} (base-e) instead of > {{lg(numPartitions)}} (base-2), which causes merkle trees to lose resolution, > what may result in overstreaming. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
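For reference, a small self-contained sketch of the calculation being fixed (the method names are mine; the expressions mirror the snippet quoted in the description): Math.log is the natural logarithm, so dividing by Math.log(2) recovers the intended base-2 depth, and the difference in tree resolution is substantial.

{code}
// Method names are invented for illustration; the constants follow the snippet quoted above.
final class MerkleDepthSketch
{
    static int depthBaseE(long numPartitions)   // what the quoted code computes: ln(n)
    {
        return numPartitions > 0 ? (int) Math.min(Math.floor(Math.log(numPartitions)), 20) : 0;
    }

    static int depthBase2(long numPartitions)   // what was intended: lg(n)
    {
        return numPartitions > 0
             ? (int) Math.min(Math.floor(Math.log(numPartitions) / Math.log(2)), 20)
             : 0;
    }

    public static void main(String[] args)
    {
        long partitions = 1_000_000L;
        System.out.println(depthBaseE(partitions)); // 13 -> a tree of 2^13 = 8192 leaves
        System.out.println(depthBase2(partitions)); // 19 -> a tree of 2^19 = 524288 leaves
    }
}
{code}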
[jira] [Updated] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException
[ https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-12700: --- Assignee: (was: Jeff Jirsa) > During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes > Connection get lost, because of Server NullPointerException > -- > > Key: CASSANDRA-12700 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12700 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra cluster with two nodes running C* version > 3.7.0 and Python Driver 3.7 using Python 2.7.11. > OS: Red Hat Enterprise Linux 6.x x64, > RAM :8GB > DISK :210GB > Cores: 2 > Java 1.8.0_73 JRE >Reporter: Rajesh Radhakrishnan > Fix For: 3.x > > > In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) > with Python driver 3.7. Trying to insert 2 million row or more data into the > database, but sometimes we are getting "Null pointer Exception". > We are using Python 2.7.11 and Java 1.8.0_73 in the Cassandra nodes and in > the client its Python 2.7.12. > {code:title=cassandra server log} > ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - > Unexpected exception during request; channel = [id: 0xc208da86, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.service.ClientState.login(ClientState.java:227) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [apache-cassandra-3.7.0.jar:3.7.0] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > 
io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_73] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) > [apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-3.7.0.jar:3.7.0] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73] > ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - > Unexpected exception during request; channel = [id: 0x8e2eae00, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassand
[jira] [Comment Edited] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException
[ https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528139#comment-15528139 ] Jeff Jirsa edited comment on CASSANDRA-12700 at 9/28/16 2:24 AM: - [~rajesh_con] - just to be clear - do you recall what mechanism you used to create these users/roles? Did you perhaps use {{CREATE ROLE}} , or {{INSERT INTO system_auth.roles...}} ? was (Author: jjirsa): [~rajesh_con] - just to be clear - do you recall what mechanism you used to create these users/roles? > During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes > Connection get lost, because of Server NullPointerException > -- > > Key: CASSANDRA-12700 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12700 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra cluster with two nodes running C* version > 3.7.0 and Python Driver 3.7 using Python 2.7.11. > OS: Red Hat Enterprise Linux 6.x x64, > RAM :8GB > DISK :210GB > Cores: 2 > Java 1.8.0_73 JRE >Reporter: Rajesh Radhakrishnan >Assignee: Jeff Jirsa > Fix For: 3.x > > > In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) > with Python driver 3.7. Trying to insert 2 million row or more data into the > database, but sometimes we are getting "Null pointer Exception". > We are using Python 2.7.11 and Java 1.8.0_73 in the Cassandra nodes and in > the client its Python 2.7.12. > {code:title=cassandra server log} > ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - > Unexpected exception during request; channel = [id: 0xc208da86, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.service.ClientState.login(ClientState.java:227) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [apache-cassandra-3.7.0.jar:3.7.0] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_73] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) > [apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-3.7.0.jar:3.7.0] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73] > ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - > Unexpected exception during request; channel = [id
[jira] [Commented] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException
[ https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528139#comment-15528139 ] Jeff Jirsa commented on CASSANDRA-12700: [~rajesh_con] - just to be clear - do you recall what mechanism you used to create these users/roles? > During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes > Connection get lost, because of Server NullPointerException > -- > > Key: CASSANDRA-12700 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12700 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra cluster with two nodes running C* version > 3.7.0 and Python Driver 3.7 using Python 2.7.11. > OS: Red Hat Enterprise Linux 6.x x64, > RAM :8GB > DISK :210GB > Cores: 2 > Java 1.8.0_73 JRE >Reporter: Rajesh Radhakrishnan >Assignee: Jeff Jirsa > Fix For: 3.x > > > In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) > with Python driver 3.7. Trying to insert 2 million row or more data into the > database, but sometimes we are getting "Null pointer Exception". > We are using Python 2.7.11 and Java 1.8.0_73 in the Cassandra nodes and in > the client its Python 2.7.12. > {code:title=cassandra server log} > ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - > Unexpected exception during request; channel = [id: 0xc208da86, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.service.ClientState.login(ClientState.java:227) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [apache-cassandra-3.7.0.jar:3.7.0] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > 
io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_73] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) > [apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-3.7.0.jar:3.7.0] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73] > ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - > Unexpected exception during request; channel = [id: 0x8e2eae00, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at
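For context on the stack trace above: the NullPointerException surfaces when a boolean cell read back from the roles table has no value, so the deserializer dereferences a null buffer. The following is a minimal, self-contained Java sketch of that failure shape, not Cassandra's actual classes; the {{roleRow}}/{{readBoolean}} names are invented for illustration, and it assumes the {{can_login}} cell is simply absent for the affected role.
{code}
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

// Illustration only: a row whose boolean column was never written returns a null
// ByteBuffer, and a deserializer that assumes a non-null buffer throws
// NullPointerException, mirroring the BooleanSerializer frame in the trace above.
public class MissingBooleanCellDemo
{
    static final Map<String, ByteBuffer> roleRow = new HashMap<>(); // no "can_login" cell stored

    static boolean readBoolean(String column)
    {
        ByteBuffer value = roleRow.get(column);  // null when the cell is absent
        return value.remaining() > 0 && value.get(value.position()) != 0; // NPE on a null value
    }

    public static void main(String[] args)
    {
        System.out.println(readBoolean("can_login")); // throws NullPointerException
    }
}
{code}
If that is what is happening here, the question above about how the roles were created matters because a role row written outside the normal CREATE ROLE path could plausibly be missing its {{can_login}} value.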
[jira] [Commented] (CASSANDRA-12681) Reject empty options and invalid DC names in replication configuration while creating or altering a keyspace.
[ https://issues.apache.org/jira/browse/CASSANDRA-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528066#comment-15528066 ] Nachiket Patil commented on CASSANDRA-12681: Thanks for all the help [~jjirsa] > Reject empty options and invalid DC names in replication configuration while > creating or altering a keyspace. > - > > Key: CASSANDRA-12681 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12681 > Project: Cassandra > Issue Type: Improvement > Components: Distributed Metadata >Reporter: Nachiket Patil >Assignee: Nachiket Patil >Priority: Minor > Fix For: 3.10, 3.0.10 > > Attachments: trunkpatch.diff, v3.0patch.diff > > > Add some restrictions around create / alter keyspace with > NetworkTopologyStrategy: > 1. Do not accept empty replication configuration (no DC options after class). > Cassandra checks that SimpleStrategy must have replication_factor option but > does not check that at least one DC should be present in the options for > NetworkTopologyStrategy. > 2. Cassandra accepts any random string as DC name replication option for > NetworkTopologyStrategy while creating or altering keyspaces. Add a > restriction that the options specified is valid datacenter name. Using > incorrect value or simple mistake in typing the DC name can cause outage in > production environment. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
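As a rough illustration of the two restrictions requested above, here is a self-contained Java sketch of the kind of validation involved; the method and parameter names are invented for the example and are not the committed patch's code. It rejects an options map that names no datacenter at all, and rejects any datacenter name the cluster does not recognize.
{code}
import java.util.Map;
import java.util.Set;

// Sketch of the two checks described in the ticket (not the committed implementation):
// 1) NetworkTopologyStrategy must be given at least one DC option, and
// 2) every DC named in the options must be a datacenter the cluster actually knows about.
public class ReplicationOptionsValidator
{
    static void validate(Map<String, String> dcReplicationOptions, Set<String> knownDatacenters)
    {
        if (dcReplicationOptions.isEmpty())
            throw new IllegalArgumentException("Configuration for at least one datacenter must be present");

        for (String dc : dcReplicationOptions.keySet())
            if (!knownDatacenters.contains(dc))
                throw new IllegalArgumentException("Unrecognized datacenter name: " + dc);
    }

    public static void main(String[] args)
    {
        Set<String> known = Set.of("DC1", "DC2");
        validate(Map.of("DC1", "3"), known); // accepted
        validate(Map.of("CD1", "3"), known); // rejected: the typo'd-DC-name outage case above
    }
}
{code}
Per the NEWS.txt entry in the commit further down this digest, the committed change applies these checks on CREATE and ALTER only, so existing keyspaces with invalid DC names keep working.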
[jira] [Comment Edited] (CASSANDRA-12681) Reject empty options and invalid DC names in replication configuration while creating or altering a keyspace.
[ https://issues.apache.org/jira/browse/CASSANDRA-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15528028#comment-15528028 ] Jeff Jirsa edited comment on CASSANDRA-12681 at 9/28/16 1:29 AM: - Dtest merged this morning (thanks [~philipthompson]!), and they look great (no failures - http://cassci.datastax.com/job/jeffjirsa-cassandra-12681-3.0-dtest/lastCompletedBuild/testReport/ and http://cassci.datastax.com/job/jeffjirsa-cassandra-12681-dtest/lastCompletedBuild/testReport/ ). Committed into 3.0.10 as {{f2c5ad743933498e60e7eef55e8daaa6ce338a03}} , trunk merge as {{d45f323eb972c6fec146e5cfa84fdc47eb8aa5eb}} Thanks [~nachiket_patil]! was (Author: jjirsa): Dtest merged this morning (thanks [~philipthompson]!), and they look great (no failures). Committed into 3.0.10 as {{f2c5ad743933498e60e7eef55e8daaa6ce338a03}} , trunk merge as {{d45f323eb972c6fec146e5cfa84fdc47eb8aa5eb}} Thanks [~nachiket_patil]! > Reject empty options and invalid DC names in replication configuration while > creating or altering a keyspace. > - > > Key: CASSANDRA-12681 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12681 > Project: Cassandra > Issue Type: Improvement > Components: Distributed Metadata >Reporter: Nachiket Patil >Assignee: Nachiket Patil >Priority: Minor > Fix For: 3.10, 3.0.10 > > Attachments: trunkpatch.diff, v3.0patch.diff > > > Add some restrictions around create / alter keyspace with > NetworkTopologyStrategy: > 1. Do not accept empty replication configuration (no DC options after class). > Cassandra checks that SimpleStrategy must have replication_factor option but > does not check that at least one DC should be present in the options for > NetworkTopologyStrategy. > 2. Cassandra accepts any random string as DC name replication option for > NetworkTopologyStrategy while creating or altering keyspaces. Add a > restriction that the options specified is valid datacenter name. Using > incorrect value or simple mistake in typing the DC name can cause outage in > production environment. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12681) Reject empty options and invalid DC names in replication configuration while creating or altering a keyspace.
[ https://issues.apache.org/jira/browse/CASSANDRA-12681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-12681: --- Resolution: Fixed Fix Version/s: (was: 3.0.x) (was: 3.x) 3.0.10 3.10 Status: Resolved (was: Patch Available) Dtest merged this morning (thanks [~philipthompson]!), and they look great (no failures). Committed into 3.0.10 as {{f2c5ad743933498e60e7eef55e8daaa6ce338a03}} , trunk merge as {{d45f323eb972c6fec146e5cfa84fdc47eb8aa5eb}} Thanks [~nachiket_patil]! > Reject empty options and invalid DC names in replication configuration while > creating or altering a keyspace. > - > > Key: CASSANDRA-12681 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12681 > Project: Cassandra > Issue Type: Improvement > Components: Distributed Metadata >Reporter: Nachiket Patil >Assignee: Nachiket Patil >Priority: Minor > Fix For: 3.10, 3.0.10 > > Attachments: trunkpatch.diff, v3.0patch.diff > > > Add some restrictions around create / alter keyspace with > NetworkTopologyStrategy: > 1. Do not accept empty replication configuration (no DC options after class). > Cassandra checks that SimpleStrategy must have replication_factor option but > does not check that at least one DC should be present in the options for > NetworkTopologyStrategy. > 2. Cassandra accepts any random string as DC name replication option for > NetworkTopologyStrategy while creating or altering keyspaces. Add a > restriction that the options specified is valid datacenter name. Using > incorrect value or simple mistake in typing the DC name can cause outage in > production environment. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[1/3] cassandra git commit: Reject invalid DC names as option while creating or altering NetworkTopologyStrategy
Repository: cassandra Updated Branches: refs/heads/cassandra-3.0 21d8a7d3b -> f2c5ad743 refs/heads/trunk b80ef9b25 -> d45f323eb Reject invalid DC names as option while creating or altering NetworkTopologyStrategy Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f2c5ad74 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f2c5ad74 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f2c5ad74 Branch: refs/heads/cassandra-3.0 Commit: f2c5ad743933498e60e7eef55e8daaa6ce338a03 Parents: 21d8a7d Author: Nachiket Patil Authored: Fri Aug 5 16:05:34 2016 -0700 Committer: Jeff Jirsa Committed: Tue Sep 27 18:16:26 2016 -0700 -- CHANGES.txt | 1 + NEWS.txt| 11 .../locator/AbstractReplicationStrategy.java| 2 +- .../locator/NetworkTopologyStrategy.java| 39 - .../org/apache/cassandra/cql3/CQLTester.java| 11 .../validation/entities/SecondaryIndexTest.java | 10 .../cql3/validation/operations/AlterTest.java | 47 +++- .../cql3/validation/operations/CreateTest.java | 59 .../apache/cassandra/dht/BootStrapperTest.java | 10 +++- .../org/apache/cassandra/service/MoveTest.java | 9 ++- 10 files changed, 181 insertions(+), 18 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2c5ad74/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 4280abd..6edc491 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -6,6 +6,7 @@ * Extend ColumnIdentifier.internedInstances key to include the type that generated the byte buffer (CASSANDRA-12516) * Backport CASSANDRA-10756 (race condition in NativeTransportService shutdown) (CASSANDRA-12472) * If CF has no clustering columns, any row cache is full partition cache (CASSANDRA-12499) + * Reject invalid replication settings when creating or altering a keyspace (CASSANDRA-12681) Merged from 2.2: * Fix exceptions when enabling gossip on nodes that haven't joined the ring (CASSANDRA-12253) * Fix authentication problem when invoking clqsh copy from a SOURCE command (CASSANDRA-12642) http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2c5ad74/NEWS.txt -- diff --git a/NEWS.txt b/NEWS.txt index 0bd3920..b97a420 100644 --- a/NEWS.txt +++ b/NEWS.txt @@ -13,6 +13,17 @@ restore snapshots created with the previous major version using the 'sstableloader' tool. You can upgrade the file format of your snapshots using the provided 'sstableupgrade' tool. +3.0.10 += + +Upgrading +- + - To protect against accidental data loss, cassandra no longer allows + users to set arbitrary datacenter names for NetworkTopologyStrategy. 
+ Cassandra will allow users to continue using existing keyspaces + with invalid datacenter names, but will validat DC names on CREATE and + ALTER + 3.0.9 = http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2c5ad74/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java -- diff --git a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java index c90c6a1..d72c0c2 100644 --- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java +++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java @@ -319,7 +319,7 @@ public abstract class AbstractReplicationStrategy } } -private void validateExpectedOptions() throws ConfigurationException +protected void validateExpectedOptions() throws ConfigurationException { Collection expectedOptions = recognizedOptions(); if (expectedOptions == null) http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2c5ad74/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java -- diff --git a/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java b/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java index 7c8d95e..78f5b06 100644 --- a/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java +++ b/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java @@ -24,9 +24,11 @@ import java.util.Map.Entry; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import org.apache.cassandra.config.DatabaseDescriptor; import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.dht.Token; import org.apache.cassandra.locator.TokenMetadata.Topology; +impo
[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk
Merge branch 'cassandra-3.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d45f323e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d45f323e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d45f323e Branch: refs/heads/trunk Commit: d45f323eb972c6fec146e5cfa84fdc47eb8aa5eb Parents: b80ef9b f2c5ad7 Author: Jeff Jirsa Authored: Tue Sep 27 18:16:55 2016 -0700 Committer: Jeff Jirsa Committed: Tue Sep 27 18:26:17 2016 -0700 -- CHANGES.txt | 1 + NEWS.txt| 3 + .../locator/AbstractReplicationStrategy.java| 2 +- .../locator/NetworkTopologyStrategy.java| 41 ++ .../org/apache/cassandra/cql3/CQLTester.java| 11 .../validation/entities/SecondaryIndexTest.java | 10 .../cql3/validation/operations/AlterTest.java | 47 +++- .../cql3/validation/operations/CreateTest.java | 59 .../apache/cassandra/dht/BootStrapperTest.java | 8 +++ .../org/apache/cassandra/service/MoveTest.java | 9 ++- 10 files changed, 176 insertions(+), 15 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/d45f323e/CHANGES.txt -- diff --cc CHANGES.txt index 75e7d2a,6edc491..7a5d73a --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -88,58 -33,12 +88,59 @@@ Merged from 3.0 * Disk failure policy should not be invoked on out of space (CASSANDRA-12385) * Calculate last compacted key on startup (CASSANDRA-6216) * Add schema to snapshot manifest, add USING TIMESTAMP clause to ALTER TABLE statements (CASSANDRA-7190) + * If CF has no clustering columns, any row cache is full partition cache (CASSANDRA-12499) ++ * Reject invalid replication settings when creating or altering a keyspace (CASSANDRA-12681) +Merged from 2.2: + * Make Collections deserialization more robust (CASSANDRA-12618) + * Fix exceptions when enabling gossip on nodes that haven't joined the ring (CASSANDRA-12253) + * Fix authentication problem when invoking clqsh copy from a SOURCE command (CASSANDRA-12642) + * Decrement pending range calculator jobs counter in finally block + * cqlshlib tests: increase default execute timeout (CASSANDRA-12481) + * Forward writes to replacement node when replace_address != broadcast_address (CASSANDRA-8523) + * Fail repair on non-existing table (CASSANDRA-12279) + * Enable repair -pr and -local together (fix regression of CASSANDRA-7450) (CASSANDRA-12522) + + +3.8, 3.9 + * Fix value skipping with counter columns (CASSANDRA-11726) + * Fix nodetool tablestats miss SSTable count (CASSANDRA-12205) + * Fixed flacky SSTablesIteratedTest (CASSANDRA-12282) + * Fixed flacky SSTableRewriterTest: check file counts before calling validateCFS (CASSANDRA-12348) + * cqlsh: Fix handling of $$-escaped strings (CASSANDRA-12189) + * Fix SSL JMX requiring truststore containing server cert (CASSANDRA-12109) + * RTE from new CDC column breaks in flight queries (CASSANDRA-12236) + * Fix hdr logging for single operation workloads (CASSANDRA-12145) + * Fix SASI PREFIX search in CONTAINS mode with partial terms (CASSANDRA-12073) + * Increase size of flushExecutor thread pool (CASSANDRA-12071) + * Partial revert of CASSANDRA-11971, cannot recycle buffer in SP.sendMessagesToNonlocalDC (CASSANDRA-11950) + * Upgrade netty to 4.0.39 (CASSANDRA-12032, CASSANDRA-12034) + * Improve details in compaction log message (CASSANDRA-12080) + * Allow unset values in CQLSSTableWriter (CASSANDRA-11911) + * Chunk cache to request compressor-compatible buffers if pool space is exhausted (CASSANDRA-11993) + * Remove DatabaseDescriptor dependencies from SequentialWriter 
(CASSANDRA-11579) + * Move skip_stop_words filter before stemming (CASSANDRA-12078) + * Support seek() in EncryptedFileSegmentInputStream (CASSANDRA-11957) + * SSTable tools mishandling LocalPartitioner (CASSANDRA-12002) + * When SEPWorker assigned work, set thread name to match pool (CASSANDRA-11966) + * Add cross-DC latency metrics (CASSANDRA-11596) + * Allow terms in selection clause (CASSANDRA-10783) + * Add bind variables to trace (CASSANDRA-11719) + * Switch counter shards' clock to timestamps (CASSANDRA-9811) + * Introduce HdrHistogram and response/service/wait separation to stress tool (CASSANDRA-11853) + * entry-weighers in QueryProcessor should respect partitionKeyBindIndexes field (CASSANDRA-11718) + * Support older ant versions (CASSANDRA-11807) + * Estimate compressed on disk size when deciding if sstable size limit reached (CASSANDRA-11623) + * cassandra-stress profiles should support case sensitive schemas (CASSANDRA-11546) + * Remove DatabaseDescriptor dependency from File
[2/3] cassandra git commit: Reject invalid DC names as option while creating or altering NetworkTopologyStrategy
Reject invalid DC names as option while creating or altering NetworkTopologyStrategy Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f2c5ad74 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f2c5ad74 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f2c5ad74 Branch: refs/heads/trunk Commit: f2c5ad743933498e60e7eef55e8daaa6ce338a03 Parents: 21d8a7d Author: Nachiket Patil Authored: Fri Aug 5 16:05:34 2016 -0700 Committer: Jeff Jirsa Committed: Tue Sep 27 18:16:26 2016 -0700 -- CHANGES.txt | 1 + NEWS.txt| 11 .../locator/AbstractReplicationStrategy.java| 2 +- .../locator/NetworkTopologyStrategy.java| 39 - .../org/apache/cassandra/cql3/CQLTester.java| 11 .../validation/entities/SecondaryIndexTest.java | 10 .../cql3/validation/operations/AlterTest.java | 47 +++- .../cql3/validation/operations/CreateTest.java | 59 .../apache/cassandra/dht/BootStrapperTest.java | 10 +++- .../org/apache/cassandra/service/MoveTest.java | 9 ++- 10 files changed, 181 insertions(+), 18 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2c5ad74/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 4280abd..6edc491 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -6,6 +6,7 @@ * Extend ColumnIdentifier.internedInstances key to include the type that generated the byte buffer (CASSANDRA-12516) * Backport CASSANDRA-10756 (race condition in NativeTransportService shutdown) (CASSANDRA-12472) * If CF has no clustering columns, any row cache is full partition cache (CASSANDRA-12499) + * Reject invalid replication settings when creating or altering a keyspace (CASSANDRA-12681) Merged from 2.2: * Fix exceptions when enabling gossip on nodes that haven't joined the ring (CASSANDRA-12253) * Fix authentication problem when invoking clqsh copy from a SOURCE command (CASSANDRA-12642) http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2c5ad74/NEWS.txt -- diff --git a/NEWS.txt b/NEWS.txt index 0bd3920..b97a420 100644 --- a/NEWS.txt +++ b/NEWS.txt @@ -13,6 +13,17 @@ restore snapshots created with the previous major version using the 'sstableloader' tool. You can upgrade the file format of your snapshots using the provided 'sstableupgrade' tool. +3.0.10 += + +Upgrading +- + - To protect against accidental data loss, cassandra no longer allows + users to set arbitrary datacenter names for NetworkTopologyStrategy. 
+ Cassandra will allow users to continue using existing keyspaces + with invalid datacenter names, but will validat DC names on CREATE and + ALTER + 3.0.9 = http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2c5ad74/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java -- diff --git a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java index c90c6a1..d72c0c2 100644 --- a/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java +++ b/src/java/org/apache/cassandra/locator/AbstractReplicationStrategy.java @@ -319,7 +319,7 @@ public abstract class AbstractReplicationStrategy } } -private void validateExpectedOptions() throws ConfigurationException +protected void validateExpectedOptions() throws ConfigurationException { Collection expectedOptions = recognizedOptions(); if (expectedOptions == null) http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2c5ad74/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java -- diff --git a/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java b/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java index 7c8d95e..78f5b06 100644 --- a/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java +++ b/src/java/org/apache/cassandra/locator/NetworkTopologyStrategy.java @@ -24,9 +24,11 @@ import java.util.Map.Entry; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import org.apache.cassandra.config.DatabaseDescriptor; import org.apache.cassandra.exceptions.ConfigurationException; import org.apache.cassandra.dht.Token; import org.apache.cassandra.locator.TokenMetadata.Topology; +import org.apache.cassandra.service.StorageService; import org.apache.cassandra.utils.FBUtilities; import com.google.common.collect.Multimap;
[jira] [Commented] (CASSANDRA-12694) PAXOS Update Corrupted empty row exception
[ https://issues.apache.org/jira/browse/CASSANDRA-12694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527975#comment-15527975 ] Alwyn Davis commented on CASSANDRA-12694: - I think this problem is being caused by this addition: https://github.com/ifesdjeen/cassandra/commit/ef9225fea660b46ed4905c10b91e7efe2746da5b#diff-c06541855022eca5fd794dd24ff02f89 which was added for CASSANDRA-9530. In this scenario, I think the CAS check is correctly returning an empty row (the matching row has a null value) which the above change errors out on, before StorageProxy.cas can check if the row applies to the CAS conditions. I'm not sure what the impact of removing the check is, as the comment indicates that it's also repeated for compactions. > PAXOS Update Corrupted empty row exception > -- > > Key: CASSANDRA-12694 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12694 > Project: Cassandra > Issue Type: Bug > Components: Local Write-Read Paths > Environment: 3 node cluster using RF=3 running on cassandra 3.7 >Reporter: Cameron Zemek > > {noformat} > cqlsh> create table test.test (test_id TEXT, last_updated TIMESTAMP, > message_id TEXT, PRIMARY KEY(test_id)); > update test.test set last_updated = 1474494363669 where test_id = 'test1' if > message_id = null; > {noformat} > Then nodetool flush on the all 3 nodes. > {noformat} > cqlsh> update test.test set last_updated = 1474494363669 where test_id = > 'test1' if message_id = null; > ServerError: > {noformat} > From cassandra log > {noformat} > ERROR [SharedPool-Worker-1] 2016-09-23 12:09:13,179 Message.java:611 - > Unexpected exception during request; channel = [id: 0x7a22599e, > L:/127.0.0.1:9042 - R:/127.0.0.1:58297] > java.io.IOError: java.io.IOException: Corrupt empty row found in unfiltered > partition > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:224) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:212) > ~[main/:na] > at > org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) > ~[main/:na] > at > org.apache.cassandra.db.rows.UnfilteredRowIterators.digest(UnfilteredRowIterators.java:125) > ~[main/:na] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators.digest(UnfilteredPartitionIterators.java:249) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse.makeDigest(ReadResponse.java:87) > ~[main/:na] > at > org.apache.cassandra.db.ReadResponse$DataResponse.digest(ReadResponse.java:192) > ~[main/:na] > at > org.apache.cassandra.service.DigestResolver.resolve(DigestResolver.java:80) > ~[main/:na] > at > org.apache.cassandra.service.ReadCallback.get(ReadCallback.java:139) > ~[main/:na] > at > org.apache.cassandra.service.AbstractReadExecutor.get(AbstractReadExecutor.java:145) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.awaitResultsAndRetryOnDigestMismatch(StorageProxy.java:1714) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1663) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1604) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1523) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.readOne(StorageProxy.java:1497) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.readOne(StorageProxy.java:1491) > ~[main/:na] > 
at > org.apache.cassandra.service.StorageProxy.cas(StorageProxy.java:249) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.ModificationStatement.executeWithCondition(ModificationStatement.java:441) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.ModificationStatement.execute(ModificationStatement.java:416) > ~[main/:na] > at > org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:208) > ~[main/:na] > at > org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:239) > ~[main/:na] > at > org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:224) > ~[main/:na] > at > org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115) > ~[main/:na] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [main/:na] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [main/:na] > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#
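To make the interaction described in the comment above concrete, here is a self-contained Java sketch; the {{Row}} and {{checkNotEmpty}} names are invented and this is not Cassandra source. It shows how a strict "empty row" guard in a deserializer can reject a legitimate CAS read result: when the only column the condition needs ({{message_id}}) is null, the row returned for the key carries no cells, which is exactly the shape such a guard refuses, before the {{IF message_id = null}} condition is ever evaluated.
{code}
import java.io.IOException;
import java.util.Map;

// Illustration of the failure shape described above (hypothetical classes, not Cassandra code).
public class EmptyRowGuardDemo
{
    record Row(String key, Map<String, String> cells) {}

    // Stand-in for a strict deserialization check: a row with a key but no cells is
    // treated as corruption.
    static void checkNotEmpty(Row row) throws IOException
    {
        if (row.cells().isEmpty())
            throw new IOException("Corrupt empty row found in unfiltered partition");
    }

    public static void main(String[] args) throws IOException
    {
        // A CAS read that only fetches the conditioned column, whose value is null, yields a
        // row for 'test1' with no cells at all -- legitimate, but it trips the guard and
        // surfaces to the client as a ServerError.
        Row casReadResult = new Row("test1", Map.of());
        checkNotEmpty(casReadResult); // throws
    }
}
{code}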
[jira] [Commented] (CASSANDRA-11363) High Blocked NTR When Connecting
[ https://issues.apache.org/jira/browse/CASSANDRA-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527840#comment-15527840 ] Jeremiah Jordan commented on CASSANDRA-11363: - For future reference this went into 3.0.10 not 3.0.9 > High Blocked NTR When Connecting > > > Key: CASSANDRA-11363 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11363 > Project: Cassandra > Issue Type: Bug > Components: Coordination >Reporter: Russell Bradberry >Assignee: T Jake Luciani > Fix For: 2.1.16, 2.2.8, 3.10, 3.0.10 > > Attachments: cassandra-102-cms.stack, cassandra-102-g1gc.stack, > max_queued_ntr_property.txt, thread-queue-2.1.txt > > > When upgrading from 2.1.9 to 2.1.13, we are witnessing an issue where the > machine load increases to very high levels (> 120 on an 8 core machine) and > native transport requests get blocked in tpstats. > I was able to reproduce this in both CMS and G1GC as well as on JVM 7 and 8. > The issue does not seem to affect the nodes running 2.1.9. > The issue seems to coincide with the number of connections OR the number of > total requests being processed at a given time (as the latter increases with > the former in our system) > Currently there is between 600 and 800 client connections on each machine and > each machine is handling roughly 2000-3000 client requests per second. > Disabling the binary protocol fixes the issue for this node but isn't a > viable option cluster-wide. > Here is the output from tpstats: > {code} > Pool NameActive Pending Completed Blocked All > time blocked > MutationStage 0 88387821 0 > 0 > ReadStage 0 0 355860 0 > 0 > RequestResponseStage 0 72532457 0 > 0 > ReadRepairStage 0 0150 0 > 0 > CounterMutationStage 32 104 897560 0 > 0 > MiscStage 0 0 0 0 > 0 > HintedHandoff 0 0 65 0 > 0 > GossipStage 0 0 2338 0 > 0 > CacheCleanupExecutor 0 0 0 0 > 0 > InternalResponseStage 0 0 0 0 > 0 > CommitLogArchiver 0 0 0 0 > 0 > CompactionExecutor2 190474 0 > 0 > ValidationExecutor0 0 0 0 > 0 > MigrationStage0 0 10 0 > 0 > AntiEntropyStage 0 0 0 0 > 0 > PendingRangeCalculator0 0310 0 > 0 > Sampler 0 0 0 0 > 0 > MemtableFlushWriter 110 94 0 > 0 > MemtablePostFlush 134257 0 > 0 > MemtableReclaimMemory 0 0 94 0 > 0 > Native-Transport-Requests 128 156 38795716 > 278451 > Message type Dropped > READ 0 > RANGE_SLICE 0 > _TRACE 0 > MUTATION 0 > COUNTER_MUTATION 0 > BINARY 0 > REQUEST_RESPONSE 0 > PAGED_RANGE 0 > READ_REPAIR 0 > {code} > Attached is the jstack output for both CMS and G1GC. > Flight recordings are here: > https://s3.amazonaws.com/simple-logs/cassandra-102-cms.jfr > https://s3.amazonaws.com/simple-logs/cassandra-102-g1gc.jfr > It is interesting to note that while the flight recording was taking place, > the load on the machine went back to healthy, and when the flight recording > finished the load went back to > 100. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12248) Allow tuning compaction thread count at runtime
[ https://issues.apache.org/jira/browse/CASSANDRA-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527827#comment-15527827 ] Nate McCall commented on CASSANDRA-12248: - Commited fix as b80ef9b2580c123da90879b4456606ef5b01b6f2 > Allow tuning compaction thread count at runtime > --- > > Key: CASSANDRA-12248 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12248 > Project: Cassandra > Issue Type: Improvement >Reporter: Tom van der Woerdt >Assignee: Dikang Gu >Priority: Minor > Fix For: 3.10 > > > While bootstrapping new nodes it can take a significant amount of time to > catch up on compaction or 2i builds. In these cases it would be convenient to > have a nodetool command that allows changing the number of concurrent > compaction jobs to the amount of cores on the machine. > Alternatively, an even better variant of this would be to have a setting > "bootstrap_max_concurrent_compactors" which overrides the normal setting > during bootstrap only. Saves me from having to write a script that does it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-12248) Allow tuning compaction thread count at runtime
[ https://issues.apache.org/jira/browse/CASSANDRA-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nate McCall resolved CASSANDRA-12248. - Resolution: Fixed > Allow tuning compaction thread count at runtime > --- > > Key: CASSANDRA-12248 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12248 > Project: Cassandra > Issue Type: Improvement >Reporter: Tom van der Woerdt >Assignee: Dikang Gu >Priority: Minor > Fix For: 3.10 > > > While bootstrapping new nodes it can take a significant amount of time to > catch up on compaction or 2i builds. In these cases it would be convenient to > have a nodetool command that allows changing the number of concurrent > compaction jobs to the amount of cores on the machine. > Alternatively, an even better variant of this would be to have a setting > "bootstrap_max_concurrent_compactors" which overrides the normal setting > during bootstrap only. Saves me from having to write a script that does it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
cassandra git commit: Ninja fix for invocation order of core and max pool size.
Repository: cassandra Updated Branches: refs/heads/trunk 12f5ca36f -> b80ef9b25 Ninja fix for invocation order of core and max pool size. Fixes 979af884 for CASSANDRA-12248 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b80ef9b2 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b80ef9b2 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b80ef9b2 Branch: refs/heads/trunk Commit: b80ef9b2580c123da90879b4456606ef5b01b6f2 Parents: 12f5ca3 Author: Nate McCall Authored: Wed Sep 28 12:51:51 2016 +1300 Committer: Nate McCall Committed: Wed Sep 28 12:51:51 2016 +1300 -- src/java/org/apache/cassandra/db/compaction/CompactionManager.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/b80ef9b2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java index bad0bdf..148a4fb 100644 --- a/src/java/org/apache/cassandra/db/compaction/CompactionManager.java +++ b/src/java/org/apache/cassandra/db/compaction/CompactionManager.java @@ -1865,8 +1865,8 @@ public class CompactionManager implements CompactionManagerMBean public void setConcurrentCompactors(int value) { -executor.setCorePoolSize(value); executor.setMaximumPoolSize(value); +executor.setCorePoolSize(value); } public int getCoreCompactorThreads()
[jira] [Comment Edited] (CASSANDRA-8457) nio MessagingService
[ https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527775#comment-15527775 ] Jason Brown edited comment on CASSANDRA-8457 at 9/27/16 11:31 PM: -- TL;DR - I've addressed everything except for the interaction between {{ClientHandshakeHandler}} and {{InternodeMessagingConnection}} (both are now renamed). I've noticed the odd rub there, as well, for a while, and I'll take some time to reconsider it. re: "talking points" - Backward compatibility - bit the bullet, and just yanked the old code - streaming - [~slebresne] and I talked offline, and CASSANDRA-12229 will address the streaming parts, and will be worked on/reviewed concurrently. Both tickets will be committed together to avoid breaking streaming. re: comments section 1 - Netty openssl - when I implemented this back in February, there was no mechanism to use {{KeyFactoryManager}} with the OpenSSL implementation. Fortunately, this has changed since I last checked in, so I've deleted the extra {{keyfile}} and friends entries from the yaml/{{Config}}. - "old code" - deleted now - package javadoc - I absolutely want this :), I just want things to be more solid code-wise before diving into that work. - naming - names are now more consistent using In/Out (or Inbound/Outbound), and use of client/server is removed. re: comments section 2 - {{getSocketThreads()}} - I've removed this for now, and will be resolved with CASSANDRA-12229 - {{MessagingService}} renames - done - {{MessagingService#createConnection()}} In the previous implementation, {{OutboundTcpConnectionPool}} only blocked on creating the threads for it's wrapped {{OutboundTcpConnection}} instances (gossip, large, and small messages). No sockets were actually opened until a message was actually sent to that peer {{OutboundTcpConnection#connect()}}. Since we do not spawn a separate thread for each connection type (even though we will have separate sockets), I don't think it's necessary to block {{MessagingService#createConnection()}}, or more correctly now, {{MessagingService.NettySender#getMessagingConnection()}}. - "Seems {{NettySender.listen()}} always starts a non-secure connection" - You are correct; however, looks like we've always been doing it that way (for better or worse). I've gone ahead and made the change (it's a one liner, plus a couple extra for error checking). - {{ClientConnect#connectComplete}} - I've renamed the function to be more accurate ({{connectCallback}}). - {{CoalescingMessageOutHandler}} - done Other issues resolved, as well. Branch has been pushed (with several commits at the top) and tests running. was (Author: jasobrown): TL;DR - I've addressed everything except for the interaction between {{ClientHandshakeHandler}} and {{Interno -deMessagingConnection}} (both are noew renamed). I've noticed the odd rub there, as well, for a while, and I'll take some time to reconsider it. re: "talking points" - Backward compatibility - bit the bullet, and just yanked the old code - streaming - [~slebresne] and I talked offline, and CASSANDRA-12229 will address the streaming parts, and will be worked on/reviewed concurrently. Both tickets will be committed together to avoid breaking streaming. re: comments section 1 - Netty openssl - when I implemented this back in February, there was no mechanism to use {{KeyFactoryManager}} with the OpenSSL implementaion. 
Fortunately, this has changed since I last checked in, so I've deleted the extra {{keyfile}} and friends entries from the yaml/{{Config}}. - "old code" - deleted now - package javadoc - I absolutely want this :), I just want things to be more solid code-wise before diving into that work. - naming - names are now more consistent using In/Out (or Inbound/Outbound), and use of client/server is removed. re: comments section 2 - {{getSocketThreads()}} - I've removed this for now, and will be resolved with CASSANDRA-12229 - {{MessagingService}} renames - done - {{MessagingService#createConnection()}} In the previous implementation, {{OutboundTcpConnectionPool}} only blocked on creating the threads for it's wrapped {{OutboundTcpConnection}} instances (gossip, large, and small messages). No sockets were actually opened until a message was actually sent to that peer {{OutboundTcpConnection#connect()}}. Since we do not spawn a separate thread for each connection type (even though we will have separate sockets), I don't think it's necessary to block {{MessagingService#createConnection()}}, or more correctly now, {{MessagingService.NettySender#getMessagingConnection()}}. - "Seems {{NettySender.listen()}} always starts a non-secure connection" - You are correct; however, looks like we've always been doing it that way (for better or worse). I've gone ahead and made the change (it's a one liner, plus a couple extra for error checking
[jira] [Commented] (CASSANDRA-8457) nio MessagingService
[ https://issues.apache.org/jira/browse/CASSANDRA-8457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527775#comment-15527775 ] Jason Brown commented on CASSANDRA-8457: TL;DR - I've addressed everything except for the interaction between {{ClientHandshakeHandler}} and {{InternodeMessagingConnection}} (both are now renamed). I've noticed the odd rub there, as well, for a while, and I'll take some time to reconsider it. re: "talking points" - Backward compatibility - bit the bullet, and just yanked the old code - streaming - [~slebresne] and I talked offline, and CASSANDRA-12229 will address the streaming parts, and will be worked on/reviewed concurrently. Both tickets will be committed together to avoid breaking streaming. re: comments section 1 - Netty openssl - when I implemented this back in February, there was no mechanism to use {{KeyFactoryManager}} with the OpenSSL implementation. Fortunately, this has changed since I last checked in, so I've deleted the extra {{keyfile}} and friends entries from the yaml/{{Config}}. - "old code" - deleted now - package javadoc - I absolutely want this :), I just want things to be more solid code-wise before diving into that work. - naming - names are now more consistent using In/Out (or Inbound/Outbound), and use of client/server is removed. re: comments section 2 - {{getSocketThreads()}} - I've removed this for now, and will be resolved with CASSANDRA-12229 - {{MessagingService}} renames - done - {{MessagingService#createConnection()}} In the previous implementation, {{OutboundTcpConnectionPool}} only blocked on creating the threads for its wrapped {{OutboundTcpConnection}} instances (gossip, large, and small messages). No sockets were actually opened until a message was actually sent to that peer {{OutboundTcpConnection#connect()}}. Since we do not spawn a separate thread for each connection type (even though we will have separate sockets), I don't think it's necessary to block {{MessagingService#createConnection()}}, or more correctly now, {{MessagingService.NettySender#getMessagingConnection()}}. - "Seems {{NettySender.listen()}} always starts a non-secure connection" - You are correct; however, looks like we've always been doing it that way (for better or worse). I've gone ahead and made the change (it's a one liner, plus a couple extra for error checking). - {{ClientConnect#connectComplete}} - I've renamed the function to be more accurate ({{connectCallback}}). - {{CoalescingMessageOutHandler}} - done Other issues resolved, as well. Branch has been pushed (with several commits at the top) and tests running. > nio MessagingService > > > Key: CASSANDRA-8457 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8457 > Project: Cassandra > Issue Type: New Feature >Reporter: Jonathan Ellis >Assignee: Jason Brown >Priority: Minor > Labels: netty, performance > Fix For: 4.x > > > Thread-per-peer (actually two each incoming and outbound) is a big > contributor to context switching, especially for larger clusters. Let's look > at switching to nio, possibly via Netty. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-11550) Make the fanout size for LeveledCompactionStrategy to be configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527085#comment-15527085 ] Dikang Gu edited comment on CASSANDRA-11550 at 9/27/16 11:25 PM: - [~krummas], Here is a patch based on latest trunk, do you mind to take a look? https://github.com/DikangGu/cassandra/commit/8a6fe474fc3d40be5257f28848cf41e2b4dd5f10 Thanks! was (Author: dikanggu): [~krummas], Here is a patch based on latest trunk, do you mind to take a look? https://github.com/DikangGu/cassandra/commit/4fa22f1776c636f49d94e9e5dd81d12e56df278b Thanks! > Make the fanout size for LeveledCompactionStrategy to be configurable > - > > Key: CASSANDRA-11550 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11550 > Project: Cassandra > Issue Type: New Feature > Components: Compaction >Reporter: Dikang Gu >Assignee: Dikang Gu > Labels: lcs > Fix For: 3.x > > > Currently, the fanout size for LeveledCompactionStrategy is hard coded in the > system (10). It would be useful to make the fanout size to be tunable, so > that we can change it according to different use cases. > Further more, we can change the size dynamically. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
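For a sense of what the fanout knob controls: each LCS level is roughly fanout times larger than the one before it, so the fanout determines how quickly level capacities grow and how many levels a given data set needs. The Java sketch below is a back-of-the-envelope illustration under that approximation, assuming the usual 160 MB SSTable target; it is not code from the attached patch.
{code}
// Rough illustration of why a tunable fanout matters for LeveledCompactionStrategy:
// each level's target capacity is approximately fanout^level * sstable_size, so a larger
// fanout reaches the same total capacity with fewer levels.
public class LcsFanoutSketch
{
    static void printLevels(int fanout, long sstableSizeMb, int levels)
    {
        System.out.println("fanout=" + fanout);
        for (int level = 1; level <= levels; level++)
        {
            long capacityMb = sstableSizeMb * (long) Math.pow(fanout, level);
            System.out.printf("  L%d target ~ %,d MB%n", level, capacityMb);
        }
    }

    public static void main(String[] args)
    {
        printLevels(10, 160, 4); // today's hard-coded fanout: L1 ~1.6 GB ... L4 ~1.6 TB
        printLevels(20, 160, 4); // a larger fanout: L1 ~3.2 GB ... L4 ~25.6 TB
    }
}
{code}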
[jira] [Comment Edited] (CASSANDRA-11550) Make the fanout size for LeveledCompactionStrategy to be configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527085#comment-15527085 ] Dikang Gu edited comment on CASSANDRA-11550 at 9/27/16 11:22 PM: - [~krummas], Here is a patch based on latest trunk, do you mind to take a look? https://github.com/DikangGu/cassandra/commit/4fa22f1776c636f49d94e9e5dd81d12e56df278b Thanks! was (Author: dikanggu): [~krummas], Here is a patch based on latest trunk, do you mind to take a look? https://github.com/DikangGu/cassandra/commit/2242231bc575acd42d0ed5a4112fee5db32012d3 Thanks! > Make the fanout size for LeveledCompactionStrategy to be configurable > - > > Key: CASSANDRA-11550 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11550 > Project: Cassandra > Issue Type: New Feature > Components: Compaction >Reporter: Dikang Gu >Assignee: Dikang Gu > Labels: lcs > Fix For: 3.x > > > Currently, the fanout size for LeveledCompactionStrategy is hard coded in the > system (10). It would be useful to make the fanout size to be tunable, so > that we can change it according to different use cases. > Further more, we can change the size dynamically. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12248) Allow tuning compaction thread count at runtime
[ https://issues.apache.org/jira/browse/CASSANDRA-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527741#comment-15527741 ] Nate McCall commented on CASSANDRA-12248: - Yeah, I missed it. [~dikanggu] Jake is correct. CompactionManager.setConcurrentCompactors should have the setMaxPoolSize first, followed by core. I've actually done this wrong when poking these via JMX (slide 75 here: http://www.slideshare.net/zznate/advanced-apache-cassandra-operations-with-jmx). Good catch, [~tjake]. > Allow tuning compaction thread count at runtime > --- > > Key: CASSANDRA-12248 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12248 > Project: Cassandra > Issue Type: Improvement >Reporter: Tom van der Woerdt >Assignee: Dikang Gu >Priority: Minor > Fix For: 3.10 > > > While bootstrapping new nodes it can take a significant amount of time to > catch up on compaction or 2i builds. In these cases it would be convenient to > have a nodetool command that allows changing the number of concurrent > compaction jobs to the amount of cores on the machine. > Alternatively, an even better variant of this would be to have a setting > "bootstrap_max_concurrent_compactors" which overrides the normal setting > during bootstrap only. Saves me from having to write a script that does it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12715) Fix exceptions with the new vnode allocation.
[ https://issues.apache.org/jira/browse/CASSANDRA-12715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527705#comment-15527705 ] Dikang Gu commented on CASSANDRA-12715: --- [~blambov], I modified current unit test a bit, so that it can cover this case as well. Here is the new patch, https://github.com/DikangGu/cassandra/commit/57b6a000f39315b027ffe4caa499806c17c33240 > Fix exceptions with the new vnode allocation. > - > > Key: CASSANDRA-12715 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12715 > Project: Cassandra > Issue Type: Bug >Reporter: Dikang Gu >Assignee: Dikang Gu > Fix For: 3.0.x, 3.x > > > Problem: see exceptions when bootstrapping nodes using the new vnode > allocation algorithm. I'm able to reproduce it in trunk as well: > {code} > INFO [main] 2016-09-26 15:36:54,978 StorageService.java:1437 - JOINING: > calculation complete, ready to bootstrap > INFO [main] 2016-09-26 15:36:54,978 StorageService.java:1437 - JOINING: > getting bootstrap token > ERROR [main] 2016-09-26 15:36:54,989 CassandraDaemon.java:752 - Exception > encountered during startup > java.lang.AssertionError: null > at > org.apache.cassandra.locator.TokenMetadata.getTopology(TokenMetadata.java:1209) > ~[main/:na] > at > org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:201) > ~[main/:na] > at > org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:164) > ~[main/:na] > at > org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:54) > ~[main/:na] > at > org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:207) > ~[main/:na] > at > org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:174) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:929) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:697) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:582) > ~[main/:na] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:392) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:601) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:735) > [main/:na] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-12715) Fix exceptions with the new vnode allocation.
[ https://issues.apache.org/jira/browse/CASSANDRA-12715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15524639#comment-15524639 ] Dikang Gu edited comment on CASSANDRA-12715 at 9/27/16 11:00 PM: - I have a fix for this, [~blambov], do you mind to take a look? Thanks! was (Author: dikanggu): I have a fix for this, [~blambov], do you mind to take a look? https://github.com/DikangGu/cassandra/commit/9a4d4153dcfae7a95992b5506b50db12a95233b6 Thanks! > Fix exceptions with the new vnode allocation. > - > > Key: CASSANDRA-12715 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12715 > Project: Cassandra > Issue Type: Bug >Reporter: Dikang Gu >Assignee: Dikang Gu > Fix For: 3.0.x, 3.x > > > Problem: see exceptions when bootstrapping nodes using the new vnode > allocation algorithm. I'm able to reproduce it in trunk as well: > {code} > INFO [main] 2016-09-26 15:36:54,978 StorageService.java:1437 - JOINING: > calculation complete, ready to bootstrap > INFO [main] 2016-09-26 15:36:54,978 StorageService.java:1437 - JOINING: > getting bootstrap token > ERROR [main] 2016-09-26 15:36:54,989 CassandraDaemon.java:752 - Exception > encountered during startup > java.lang.AssertionError: null > at > org.apache.cassandra.locator.TokenMetadata.getTopology(TokenMetadata.java:1209) > ~[main/:na] > at > org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:201) > ~[main/:na] > at > org.apache.cassandra.dht.tokenallocator.TokenAllocation.getStrategy(TokenAllocation.java:164) > ~[main/:na] > at > org.apache.cassandra.dht.tokenallocator.TokenAllocation.allocateTokens(TokenAllocation.java:54) > ~[main/:na] > at > org.apache.cassandra.dht.BootStrapper.allocateTokens(BootStrapper.java:207) > ~[main/:na] > at > org.apache.cassandra.dht.BootStrapper.getBootstrapTokens(BootStrapper.java:174) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:929) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:697) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:582) > ~[main/:na] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:392) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:601) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:735) > [main/:na] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-11974) Failed assert causes OutboundTcpConnection to exit
[ https://issues.apache.org/jira/browse/CASSANDRA-11974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Edward Capriolo reassigned CASSANDRA-11974: --- Assignee: Edward Capriolo > Failed assert causes OutboundTcpConnection to exit > -- > > Key: CASSANDRA-11974 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11974 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Sean Thornton >Assignee: Edward Capriolo > > I am seeing the following in a client's cluster: > {noformat} > ERROR [MessagingService-Outgoing-/10.0.0.1] 2016-06-06 03:38:19,305 > CassandraDaemon.java:229 - Exception in thread > Thread[MessagingService-Outgoing-/10.0.0.1,5,main] > java.lang.AssertionError: 635174 > at > org.apache.cassandra.utils.ByteBufferUtil.writeWithShortLength(ByteBufferUtil.java:290) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at > org.apache.cassandra.db.composites.AbstractCType$Serializer.serialize(AbstractCType.java:392) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at > org.apache.cassandra.db.composites.AbstractCType$Serializer.serialize(AbstractCType.java:381) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at > org.apache.cassandra.db.filter.ColumnSlice$Serializer.serialize(ColumnSlice.java:271) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at > org.apache.cassandra.db.filter.ColumnSlice$Serializer.serialize(ColumnSlice.java:259) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at > org.apache.cassandra.db.filter.SliceQueryFilter$Serializer.serialize(SliceQueryFilter.java:503) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at > org.apache.cassandra.db.filter.SliceQueryFilter$Serializer.serialize(SliceQueryFilter.java:490) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at > org.apache.cassandra.db.SliceFromReadCommandSerializer.serialize(SliceFromReadCommand.java:168) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at > org.apache.cassandra.db.ReadCommandSerializer.serialize(ReadCommand.java:143) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at > org.apache.cassandra.db.ReadCommandSerializer.serialize(ReadCommand.java:132) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at org.apache.cassandra.net.MessageOut.serialize(MessageOut.java:121) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at > org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:330) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at > org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:282) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > at > org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:218) > ~[cassandra-all-2.1.12.1046.jar:2.1.12.1046] > {noformat} > Obviously they somehow exceeded a 64K limit (quick and dirty suspects - > https://docs.datastax.com/en/cql/3.1/cql/cql_reference/refLimits.html) but > that is neither here nor there. > The problem I see when this happens is > {{ByteBufferUtil.writeWithShortLength}} can throw a > {{java.lang.AssertionError}} which is a true {{Error}} that bubbles up and > totally bypasses the {{catch (Exception e)}} clause in the message processing > loop in {{OutboundTcpConnection.run()}} _which causes the thread to exit and > that node to no longer communicate outgoing messages to other nodes_. 
> At least from my perspective, there are two things I would like to see > handled differently - > * In the event of _any_ problem, I would like to see whatever details > possible be logged about the problem Message - partition key, CF data, > anything. Right now it can be very difficult to track this down > * The {{java.lang.Error}} possibility needs to be handled somehow. If it's > an assertion error, it seems like we could continue the processing loop. But > shutting down the JVM would be better than what I get now. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
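The heart of the report above is a general Java behavior that is easy to see in isolation: {{AssertionError}} extends {{Error}}, not {{Exception}}, so a processing loop guarded only by {{catch (Exception e)}} dies the moment an assert fires. The sketch below reproduces that shape with an oversized value that cannot be length-prefixed by an unsigned short; the class and method names are illustrative stand-ins, not the {{OutboundTcpConnection}} source. Run it with {{java -ea}} so assertions are enabled.
{code}
// Demonstrates how an AssertionError escapes a catch (Exception) guard and kills the
// worker thread -- the failure mode described in this ticket. Run with assertions enabled.
public class AssertionEscapesCatchException
{
    // Stand-in for a length-prefixed writer that asserts the value fits in an unsigned short.
    static void writeWithShortLength(byte[] value)
    {
        assert value.length <= 0xFFFF : value.length;
        // ... would write a two-byte length followed by the value ...
    }

    public static void main(String[] args) throws InterruptedException
    {
        Thread outbound = new Thread(() ->
        {
            byte[][] queue = { new byte[10], new byte[700_000], new byte[10] };
            for (byte[] message : queue)
            {
                try
                {
                    writeWithShortLength(message);
                    System.out.println("sent " + message.length + " bytes");
                }
                catch (Exception e)
                {
                    // Mirrors the loop's error handling: AssertionError is an Error, so it
                    // never lands here and the thread exits on the oversized message.
                    System.err.println("handled: " + e);
                }
            }
        }, "MessagingService-Outgoing-demo");

        outbound.start();
        outbound.join(); // only "sent 10 bytes" is printed; the third message is never sent
    }
}
{code}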
[jira] [Commented] (CASSANDRA-12089) Update metrics-reporter dependencies
[ https://issues.apache.org/jira/browse/CASSANDRA-12089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527277#comment-15527277 ] Michał Matłoka commented on CASSANDRA-12089: ping [~tjake] :) > Update metrics-reporter dependencies > > > Key: CASSANDRA-12089 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12089 > Project: Cassandra > Issue Type: Improvement >Reporter: Robert Stupp >Assignee: Robert Stupp >Priority: Minor > Fix For: 2.1.x > > Attachments: 12089-trunk.txt > > > Proposal to update the metrics-reporter jars. > Upcoming versions (>=3.0.2) of > [metrics-reporter-config|https://github.com/addthis/metrics-reporter-config] > should support prometheus and maybe also riemann (in v3). > Relevant PRs: > https://github.com/addthis/metrics-reporter-config/pull/26 > https://github.com/addthis/metrics-reporter-config/pull/27 > reporter-config 3.0.2+ can also be used in 2.1. Therefore it would be nice to > have also update the jars in 2.1. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11550) Make the fanout size for LeveledCompactionStrategy to be configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dikang Gu updated CASSANDRA-11550: -- Status: Patch Available (was: Open) [~krummas], Here is a patch based on latest trunk, do you mind to take a look? https://github.com/DikangGu/cassandra/commit/2242231bc575acd42d0ed5a4112fee5db32012d3 Thanks! > Make the fanout size for LeveledCompactionStrategy to be configurable > - > > Key: CASSANDRA-11550 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11550 > Project: Cassandra > Issue Type: New Feature > Components: Compaction >Reporter: Dikang Gu >Assignee: Dikang Gu > Labels: lcs > Fix For: 3.x > > > Currently, the fanout size for LeveledCompactionStrategy is hard coded in the > system (10). It would be useful to make the fanout size to be tunable, so > that we can change it according to different use cases. > Further more, we can change the size dynamically. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11550) Make the fanout size for LeveledCompactionStrategy to be configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-11550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dikang Gu updated CASSANDRA-11550: -- Attachment: (was: 0001-make-fanout-size-for-leveledcompactionstrategy-to-be.patch) > Make the fanout size for LeveledCompactionStrategy to be configurable > - > > Key: CASSANDRA-11550 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11550 > Project: Cassandra > Issue Type: New Feature > Components: Compaction >Reporter: Dikang Gu >Assignee: Dikang Gu > Labels: lcs > Fix For: 3.x > > > Currently, the fanout size for LeveledCompactionStrategy is hard coded in the > system (10). It would be useful to make the fanout size to be tunable, so > that we can change it according to different use cases. > Further more, we can change the size dynamically. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[cassandra] Git Push Summary
Repository: cassandra Updated Tags: refs/tags/2.2.8-tentative [deleted] e9fe96f40
[cassandra] Git Push Summary
Repository: cassandra Updated Tags: refs/tags/cassandra-2.2.8 [created] 5a71132ac
[jira] [Commented] (CASSANDRA-9038) Atomic batches and single row atomicity appear to have no test coverage
[ https://issues.apache.org/jira/browse/CASSANDRA-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526968#comment-15526968 ] Edward Capriolo commented on CASSANDRA-9038: [~aweisberg] I have just done some work to demonstrate, using black-box testing, some aspects of how batches work. https://github.com/edwardcapriolo/ec/tree/master/src/test/java/Base/batch The challenge, as I see it, is how to prove atomicity. Trying an operation N thousand times and asserting a result is slow and not really a "proof" of anything. (I am not a fan of mockito), but I would say the only thing we can assert or prove is: * If we were able to place mocks in Memtables * issue statements like batch-mutations, possibly a batch across 10 row keys * assert that specific methods were called a specific number of times. Being that I am not very familiar with any of the APIs, this could be a vast oversimplification of the process. Maybe someone else can chime in and say what specifically we could do. My worry is adding a mock test that always passes because of its limited scope and is only a burden that has to be changed with API changes. > Atomic batches and single row atomicity appear to have no test coverage > --- > > Key: CASSANDRA-9038 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9038 > Project: Cassandra > Issue Type: Test >Reporter: Ariel Weisberg > > Leaving the solution to this up to the assignee. It seems like this is a > guarantee that should be checked. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
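One way to read the third bullet above is as a plain call-counting check, which needs no mocking library at all. The sketch below is a self-contained toy; the {{MutationSink}} and {{applyBatch}} names are invented for illustration and are not Cassandra APIs. It stubs the write target, applies a batch across 10 keys, and asserts the stub was invoked exactly 10 times.
{code}
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Toy version of the white-box check sketched above: count how many times the write path
// is invoked for a batch spanning 10 partition keys.
public class BatchApplyCountingTest
{
    interface MutationSink { void apply(String partitionKey, String value); }

    static class CountingSink implements MutationSink
    {
        final AtomicInteger applies = new AtomicInteger();
        public void apply(String partitionKey, String value) { applies.incrementAndGet(); }
    }

    // Stand-in for "issue a batch across 10 row keys": every mutation goes through the sink.
    static void applyBatch(MutationSink sink, List<String> keys)
    {
        for (String key : keys)
            sink.apply(key, "value");
    }

    public static void main(String[] args)
    {
        CountingSink sink = new CountingSink();
        List<String> keys = List.of("k1", "k2", "k3", "k4", "k5", "k6", "k7", "k8", "k9", "k10");

        applyBatch(sink, keys);

        if (sink.applies.get() != keys.size())
            throw new AssertionError("expected " + keys.size() + " applies, saw " + sink.applies.get());
        System.out.println("batch produced exactly " + keys.size() + " apply() calls");
    }
}
{code}
As the comment itself cautions, a check like this only pins down call counts against the current API; it does not by itself prove atomicity under failure.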
[jira] [Commented] (CASSANDRA-10825) OverloadedException is untested
[ https://issues.apache.org/jira/browse/CASSANDRA-10825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526848#comment-15526848 ] Edward Capriolo commented on CASSANDRA-10825: - https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:CASSANDRA-10825-3.0.9?expand=1 and https://github.com/apache/cassandra/compare/trunk...edwardcapriolo:CASSANDRA-10825-2.2.6?expand=1 https://github.com/edwardcapriolo/cassandra/tree/CASSANDRA-10825 <--trunk > OverloadedException is untested > --- > > Key: CASSANDRA-10825 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10825 > Project: Cassandra > Issue Type: Bug > Components: Local Write-Read Paths >Reporter: Ariel Weisberg >Assignee: Edward Capriolo > Attachments: jmx-hint.png > > > If you grep test/src and cassandra-dtest you will find that the string > OverloadedException doesn't appear anywhere. > In CASSANDRA-10477 it was found that there were cases where Paxos should > back-pressure and throw OverloadedException but didn't. > If OverloadedException is used for functional purposes then we should test > that it is thrown under expected conditions. If there are behaviors driven by > catching or tracking OverloadedException we should test those as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11218) Prioritize Secondary Index rebuild
[ https://issues.apache.org/jira/browse/CASSANDRA-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-11218: Reviewer: Sam Tunnicliffe (was: Marcus Eriksson) > Prioritize Secondary Index rebuild > -- > > Key: CASSANDRA-11218 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11218 > Project: Cassandra > Issue Type: Improvement > Components: Compaction >Reporter: sankalp kohli >Assignee: Jeff Jirsa >Priority: Minor > > We have seen that secondary index rebuilds get stuck behind other compactions > during bootstrap and other operations. This causes them to never finish. We > should prioritize index rebuilds via a separate thread pool or a > priority queue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11218) Prioritize Secondary Index rebuild
[ https://issues.apache.org/jira/browse/CASSANDRA-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526789#comment-15526789 ] Joshua McKenzie commented on CASSANDRA-11218: - [~beobal] to review. > Prioritize Secondary Index rebuild > -- > > Key: CASSANDRA-11218 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11218 > Project: Cassandra > Issue Type: Improvement > Components: Compaction >Reporter: sankalp kohli >Assignee: Jeff Jirsa >Priority: Minor > > We have seen that secondary index rebuilds get stuck behind other compactions > during bootstrap and other operations. This causes them to never finish. We > should prioritize index rebuilds via a separate thread pool or a > priority queue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
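Neither of the two options from the ticket is implemented here, but the priority-queue variant can be sketched with stock JDK classes: a {{ThreadPoolExecutor}} backed by a {{PriorityBlockingQueue}} whose tasks are comparable. The task types and priority values below are assumptions for illustration, not Cassandra's compaction executor.

{code}
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class PrioritizedCompactionSketch
{
    // Tasks submitted via execute() must be comparable so the queue can order them.
    static abstract class PrioritizedTask implements Runnable, Comparable<PrioritizedTask>
    {
        final int priority; // lower value runs first (assumed convention)

        PrioritizedTask(int priority)
        {
            this.priority = priority;
        }

        public int compareTo(PrioritizedTask other)
        {
            return Integer.compare(priority, other.priority);
        }
    }

    static void sleepQuietly(long millis)
    {
        try { Thread.sleep(millis); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args)
    {
        // A single worker thread; everything else waits in the priority queue.
        ThreadPoolExecutor executor = new ThreadPoolExecutor(1, 1, 60, TimeUnit.SECONDS,
                                                             new PriorityBlockingQueue<Runnable>());

        // Occupy the worker so the next two submissions are queued and ordered by priority.
        executor.execute(new PrioritizedTask(5) { public void run() { sleepQuietly(200); } });
        executor.execute(new PrioritizedTask(10) { public void run() { System.out.println("regular compaction"); } });
        executor.execute(new PrioritizedTask(0) { public void run() { System.out.println("2i rebuild runs first"); } });

        executor.shutdown();
    }
}
{code}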
[jira] [Commented] (CASSANDRA-12248) Allow tuning compaction thread count at runtime
[ https://issues.apache.org/jira/browse/CASSANDRA-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526778#comment-15526778 ] Dikang Gu commented on CASSANDRA-12248: --- [~tjake], I thought about it before, and I checked the code in `ThreadPoolExecutor.java`; the only check there is {code} public void setMaximumPoolSize(int maximumPoolSize) { if (maximumPoolSize <= 0 || maximumPoolSize < corePoolSize) throw new IllegalArgumentException(); {code} It does not allow setting maximumPoolSize to less than corePoolSize, which will never happen with my patch. But anyway, I will do more testing to see whether it's really a problem. Thanks! > Allow tuning compaction thread count at runtime > --- > > Key: CASSANDRA-12248 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12248 > Project: Cassandra > Issue Type: Improvement >Reporter: Tom van der Woerdt >Assignee: Dikang Gu >Priority: Minor > Fix For: 3.10 > > > While bootstrapping new nodes it can take a significant amount of time to > catch up on compaction or 2i builds. In these cases it would be convenient to > have a nodetool command that allows changing the number of concurrent > compaction jobs to the number of cores on the machine. > Alternatively, an even better variant of this would be to have a setting > "bootstrap_max_concurrent_compactors" which overrides the normal setting > during bootstrap only. That would save me from having to write a script that does it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
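The check quoted above means that when both the core and maximum sizes change at runtime, the order of the two setter calls matters. A minimal illustration with a plain JDK {{ThreadPoolExecutor}}, independent of the linked patch:

{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class ResizePoolSketch
{
    // Keep maximumPoolSize >= corePoolSize at every intermediate step;
    // setMaximumPoolSize throws IllegalArgumentException otherwise.
    static void resize(ThreadPoolExecutor executor, int newSize)
    {
        if (newSize > executor.getMaximumPoolSize())
        {
            executor.setMaximumPoolSize(newSize); // grow the ceiling first
            executor.setCorePoolSize(newSize);
        }
        else
        {
            executor.setCorePoolSize(newSize);    // shrink the floor first
            executor.setMaximumPoolSize(newSize);
        }
    }

    public static void main(String[] args)
    {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(2, 2, 60, TimeUnit.SECONDS,
                                                             new LinkedBlockingQueue<Runnable>());

        resize(executor, 8);  // grow: max first, then core
        resize(executor, 1);  // shrink: core first, then max
        System.out.println(executor.getCorePoolSize() + "/" + executor.getMaximumPoolSize());

        executor.shutdown();
    }
}
{code}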
[jira] [Updated] (CASSANDRA-12605) Timestamp-order searching of sstables does not handle non-frozen UDTs, frozen collections correctly
[ https://issues.apache.org/jira/browse/CASSANDRA-12605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-12605: Resolution: Fixed Fix Version/s: 3.0.10 3.10 Status: Resolved (was: Ready to Commit) Thanks, committed as {{21d8a7d3bd5b9ec49f486c3c7a816939c4040686}} to 3.0 and merged up to trunk. > Timestamp-order searching of sstables does not handle non-frozen UDTs, frozen > collections correctly > --- > > Key: CASSANDRA-12605 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12605 > Project: Cassandra > Issue Type: Bug >Reporter: Tyler Hobbs >Assignee: Tyler Hobbs > Fix For: 3.10, 3.0.10 > > > {{SinglePartitionReadCommand.queryNeitherCountersNorCollections()}} is used > to determine whether we can search sstables in timestamp order. We cannot > use this optimization when there are multicell values (such as unfrozen > collections or UDTs). However, this method only checks > {{column.type.isCollection() || column.type.isCounter()}}. Instead, it > should check {{column.type.isMulticell() || column.type.isCounter()}}. > This has two implications: > * We are using timestamp-order searching when querying non-frozen UDTs, which > can lead to incorrect/stale results being returned. > * We are not taking advantage of this optimization when querying frozen > collections. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[1/3] cassandra git commit: Fix time-order query check for non-frozen UDTs, frozen collections
Repository: cassandra Updated Branches: refs/heads/cassandra-3.0 8aa6f29ce -> 21d8a7d3b refs/heads/trunk 5692c59d1 -> 12f5ca36f Fix time-order query check for non-frozen UDTs, frozen collections Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-12605 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21d8a7d3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21d8a7d3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21d8a7d3 Branch: refs/heads/cassandra-3.0 Commit: 21d8a7d3bd5b9ec49f486c3c7a816939c4040686 Parents: 8aa6f29 Author: Tyler Hobbs Authored: Tue Sep 27 11:59:53 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 11:59:53 2016 -0500 -- CHANGES.txt | 2 ++ .../cassandra/db/SinglePartitionReadCommand.java| 16 2 files changed, 10 insertions(+), 8 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/21d8a7d3/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 576dfb5..4280abd 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,6 @@ 3.0.10 + * Fix potentially incomplete non-frozen UDT values when querying with the + full primary key specified (CASSANDRA-12605) * Skip writing MV mutations to commitlog on mutation.applyUnsafe() (CASSANDRA-11670) * Establish consistent distinction between non-existing partition and NULL value for LWTs on static columns (CASSANDRA-12060) * Extend ColumnIdentifier.internedInstances key to include the type that generated the byte buffer (CASSANDRA-12516) http://git-wip-us.apache.org/repos/asf/cassandra/blob/21d8a7d3/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java -- diff --git a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java index 886a918..23b02f3 100644 --- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java +++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java @@ -511,11 +511,11 @@ public class SinglePartitionReadCommand extends ReadCommand * 2) If we have a name filter (so we query specific rows), we can make a bet: that all column for all queried row * will have data in the most recent sstable(s), thus saving us from reading older ones. This does imply we * have a way to guarantee we have all the data for what is queried, which is only possible for name queries - * and if we have neither collections nor counters (indeed, for a collection, we can't guarantee an older sstable - * won't have some elements that weren't in the most recent sstables, and counters are intrinsically a collection - * of shards so have the same problem). + * and if we have neither non-frozen collections/UDTs nor counters (indeed, for a non-frozen collection or UDT, + * we can't guarantee an older sstable won't have some elements that weren't in the most recent sstables, + * and counters are intrinsically a collection of shards and so have the same problem). 
*/ -if (clusteringIndexFilter() instanceof ClusteringIndexNamesFilter && queryNeitherCountersNorCollections()) +if (clusteringIndexFilter() instanceof ClusteringIndexNamesFilter && !queriesMulticellType()) return queryMemtableAndSSTablesInTimestampOrder(cfs, copyOnHeap, (ClusteringIndexNamesFilter)clusteringIndexFilter()); Tracing.trace("Acquiring sstable references"); @@ -662,14 +662,14 @@ public class SinglePartitionReadCommand extends ReadCommand return clusteringIndexFilter().shouldInclude(sstable); } -private boolean queryNeitherCountersNorCollections() +private boolean queriesMulticellType() { for (ColumnDefinition column : columnFilter().fetchedColumns()) { -if (column.type.isCollection() || column.type.isCounter()) -return false; +if (column.type.isMultiCell() || column.type.isCounter()) +return true; } -return true; +return false; } /**
[3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk
Merge branch 'cassandra-3.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/12f5ca36 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/12f5ca36 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/12f5ca36 Branch: refs/heads/trunk Commit: 12f5ca36ffab227a1531a554b00cf83d898f9f28 Parents: 5692c59 21d8a7d Author: Tyler Hobbs Authored: Tue Sep 27 12:04:35 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 12:04:35 2016 -0500 -- CHANGES.txt | 2 ++ .../cassandra/db/SinglePartitionReadCommand.java| 16 2 files changed, 10 insertions(+), 8 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/12f5ca36/CHANGES.txt -- diff --cc CHANGES.txt index b6a687d,4280abd..75e7d2a --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,74 -1,6 +1,76 @@@ -3.0.10 +3.10 + * Tune compaction thread count via nodetool (CASSANDRA-12248) + * Add +=/-= shortcut syntax for update queries (CASSANDRA-12232) + * Include repair session IDs in repair start message (CASSANDRA-12532) + * Add a blocking task to Index, run before joining the ring (CASSANDRA-12039) + * Fix NPE when using CQLSSTableWriter (CASSANDRA-12667) + * Support optional backpressure strategies at the coordinator (CASSANDRA-9318) + * Make randompartitioner work with new vnode allocation (CASSANDRA-12647) + * Fix cassandra-stress graphing (CASSANDRA-12237) + * Allow filtering on partition key columns for queries without secondary indexes (CASSANDRA-11031) + * Fix Cassandra Stress reporting thread model and precision (CASSANDRA-12585) + * Add JMH benchmarks.jar (CASSANDRA-12586) + * Add row offset support to SASI (CASSANDRA-11990) + * Cleanup uses of AlterTableStatementColumn (CASSANDRA-12567) + * Add keep-alive to streaming (CASSANDRA-11841) + * Tracing payload is passed through newSession(..) 
(CASSANDRA-11706) + * avoid deleting non existing sstable files and improve related log messages (CASSANDRA-12261) + * json/yaml output format for nodetool compactionhistory (CASSANDRA-12486) + * Retry all internode messages once after a connection is + closed and reopened (CASSANDRA-12192) + * Add support to rebuild from targeted replica (CASSANDRA-9875) + * Add sequence distribution type to cassandra stress (CASSANDRA-12490) + * "SELECT * FROM foo LIMIT ;" does not error out (CASSANDRA-12154) + * Define executeLocally() at the ReadQuery Level (CASSANDRA-12474) + * Extend read/write failure messages with a map of replica addresses + to error codes in the v5 native protocol (CASSANDRA-12311) + * Fix rebuild of SASI indexes with existing index files (CASSANDRA-12374) + * Let DatabaseDescriptor not implicitly startup services (CASSANDRA-9054, 12550) + * Fix clustering indexes in presence of static columns in SASI (CASSANDRA-12378) + * Fix queries on columns with reversed type on SASI indexes (CASSANDRA-12223) + * Added slow query log (CASSANDRA-12403) + * Count full coordinated request against timeout (CASSANDRA-12256) + * Allow TTL with null value on insert and update (CASSANDRA-12216) + * Make decommission operation resumable (CASSANDRA-12008) + * Add support to one-way targeted repair (CASSANDRA-9876) + * Remove clientutil jar (CASSANDRA-11635) + * Fix compaction throughput throttle (CASSANDRA-12366) + * Delay releasing Memtable memory on flush until PostFlush has finished running (CASSANDRA-12358) + * Cassandra stress should dump all setting on startup (CASSANDRA-11914) + * Make it possible to compact a given token range (CASSANDRA-10643) + * Allow updating DynamicEndpointSnitch properties via JMX (CASSANDRA-12179) + * Collect metrics on queries by consistency level (CASSANDRA-7384) + * Add support for GROUP BY to SELECT statement (CASSANDRA-10707) + * Deprecate memtable_cleanup_threshold and update default for memtable_flush_writers (CASSANDRA-12228) + * Upgrade to OHC 0.4.4 (CASSANDRA-12133) + * Add version command to cassandra-stress (CASSANDRA-12258) + * Create compaction-stress tool (CASSANDRA-11844) + * Garbage-collecting compaction operation and schema option (CASSANDRA-7019) + * Add beta protocol flag for v5 native protocol (CASSANDRA-12142) + * Support filtering on non-PRIMARY KEY columns in the CREATE + MATERIALIZED VIEW statement's WHERE clause (CASSANDRA-10368) + * Unify STDOUT and SYSTEMLOG logback format (CASSANDRA-12004) + * COPY FROM should raise error for non-existing input files (CASSANDRA-12174) + * Faster write path (CASSANDRA-12269) + * Option to leave omitted columns in INSERT JSON unset (CASSANDRA-11424) + * Support json/yaml output in nodetool tpstats (CASSANDRA-12035) + * Expose metrics for successful/fai
[2/3] cassandra git commit: Fix time-order query check for non-frozen UDTs, frozen collections
Fix time-order query check for non-frozen UDTs, frozen collections Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-12605 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/21d8a7d3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/21d8a7d3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/21d8a7d3 Branch: refs/heads/trunk Commit: 21d8a7d3bd5b9ec49f486c3c7a816939c4040686 Parents: 8aa6f29 Author: Tyler Hobbs Authored: Tue Sep 27 11:59:53 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 11:59:53 2016 -0500 -- CHANGES.txt | 2 ++ .../cassandra/db/SinglePartitionReadCommand.java| 16 2 files changed, 10 insertions(+), 8 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/21d8a7d3/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 576dfb5..4280abd 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,6 @@ 3.0.10 + * Fix potentially incomplete non-frozen UDT values when querying with the + full primary key specified (CASSANDRA-12605) * Skip writing MV mutations to commitlog on mutation.applyUnsafe() (CASSANDRA-11670) * Establish consistent distinction between non-existing partition and NULL value for LWTs on static columns (CASSANDRA-12060) * Extend ColumnIdentifier.internedInstances key to include the type that generated the byte buffer (CASSANDRA-12516) http://git-wip-us.apache.org/repos/asf/cassandra/blob/21d8a7d3/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java -- diff --git a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java index 886a918..23b02f3 100644 --- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java +++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java @@ -511,11 +511,11 @@ public class SinglePartitionReadCommand extends ReadCommand * 2) If we have a name filter (so we query specific rows), we can make a bet: that all column for all queried row * will have data in the most recent sstable(s), thus saving us from reading older ones. This does imply we * have a way to guarantee we have all the data for what is queried, which is only possible for name queries - * and if we have neither collections nor counters (indeed, for a collection, we can't guarantee an older sstable - * won't have some elements that weren't in the most recent sstables, and counters are intrinsically a collection - * of shards so have the same problem). + * and if we have neither non-frozen collections/UDTs nor counters (indeed, for a non-frozen collection or UDT, + * we can't guarantee an older sstable won't have some elements that weren't in the most recent sstables, + * and counters are intrinsically a collection of shards and so have the same problem). 
*/ -if (clusteringIndexFilter() instanceof ClusteringIndexNamesFilter && queryNeitherCountersNorCollections()) +if (clusteringIndexFilter() instanceof ClusteringIndexNamesFilter && !queriesMulticellType()) return queryMemtableAndSSTablesInTimestampOrder(cfs, copyOnHeap, (ClusteringIndexNamesFilter)clusteringIndexFilter()); Tracing.trace("Acquiring sstable references"); @@ -662,14 +662,14 @@ public class SinglePartitionReadCommand extends ReadCommand return clusteringIndexFilter().shouldInclude(sstable); } -private boolean queryNeitherCountersNorCollections() +private boolean queriesMulticellType() { for (ColumnDefinition column : columnFilter().fetchedColumns()) { -if (column.type.isCollection() || column.type.isCounter()) -return false; +if (column.type.isMultiCell() || column.type.isCounter()) +return true; } -return true; +return false; } /**
[01/10] cassandra git commit: Treat IN values as a set instead of a list
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 6fb89b905 -> cdd535fca refs/heads/cassandra-2.2 6dc595dd2 -> 738a57992 refs/heads/cassandra-3.0 b7fc5dc1c -> 8aa6f29ce refs/heads/trunk 7bef41856 -> 5692c59d1 Treat IN values as a set instead of a list Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-12420 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cdd535fc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cdd535fc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cdd535fc Branch: refs/heads/cassandra-2.1 Commit: cdd535fcac4ba79bb371e8373c6504d9e3978853 Parents: 6fb89b9 Author: Tyler Hobbs Authored: Tue Sep 27 11:51:41 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 11:51:41 2016 -0500 -- CHANGES.txt | 3 ++ NEWS.txt| 7 +++ .../cql3/statements/SelectStatement.java| 45 +--- .../entities/FrozenCollectionsTest.java | 8 ++-- .../validation/operations/SelectLimitTest.java | 2 +- .../SelectMultiColumnRelationTest.java | 6 +-- .../operations/SelectOrderByTest.java | 8 ++-- 7 files changed, 41 insertions(+), 38 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/cdd535fc/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 1438e98..b778444 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,7 @@ 2.1.16 + * Avoid infinitely looping result set when paging SELECT queries with + an IN clause with duplicate keys by treating the IN values as a set instead + of a list (CASSANDRA-12420) * Add system property to set the max number of native transport requests in queue (CASSANDRA-11363) * Include column family parameter when -st and -et are provided (CASSANDRA-11866) * Fix queries with empty ByteBuffer values in clustering column restrictions (CASSANDRA-12127) http://git-wip-us.apache.org/repos/asf/cassandra/blob/cdd535fc/NEWS.txt -- diff --git a/NEWS.txt b/NEWS.txt index 6a70adc..2db34ed 100644 --- a/NEWS.txt +++ b/NEWS.txt @@ -18,6 +18,13 @@ using the provided 'sstableupgrade' tool. Upgrading - +- Duplicate partition keys in SELECT statement IN clauses will now be + filtered out, meaning that duplicate results will no longer be returned. + Futhermore, the partitions will be returned in the order of the sorted + partition keys instead of the order of the IN values; this matches the + behavior of Cassandra 2.2+. This was necessary to avoid an infinitely + looping result set when combined with paging under some circumstances. + See CASSANDRA-12420 for details. - The ReversedType behaviour has been corrected for clustering columns of BYTES type containing empty value. Scrub should be run on the existing SSTables containing a descending clustering column of BYTES type to correct http://git-wip-us.apache.org/repos/asf/cassandra/blob/cdd535fc/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java index 40f3f33..fe63b44 100644 --- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java @@ -71,13 +71,6 @@ public class SelectStatement implements CQLStatement private static final int DEFAULT_COUNT_PAGE_SIZE = 1; -/** - * In the current version a query containing duplicate values in an IN restriction on the partition key will - * cause the same record to be returned multiple time. 
This behavior will be changed in 3.0 but until then - * we will log a warning the first time this problem occurs. - */ -private static volatile boolean HAS_LOGGED_WARNING_FOR_IN_RESTRICTION_WITH_DUPLICATES; - private final int boundTerms; public final CFMetaData cfm; public final Parameters parameters; @@ -682,9 +675,9 @@ public class SelectStatement implements CQLStatement : limit; } -private Collection getKeys(final QueryOptions options) throws InvalidRequestException +private NavigableSet getKeys(final QueryOptions options) throws InvalidRequestException { -List keys = new ArrayList(); +TreeSet sortedKeys = new TreeSet<>(cfm.getKeyValidator()); CBuilder builder = cfm.getKeyValidatorAsCType().builder(); for (ColumnDefinition def : cfm.partitionKeyCol
[08/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8aa6f29c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8aa6f29c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8aa6f29c Branch: refs/heads/cassandra-3.0 Commit: 8aa6f29ce63d7e94437b7924e6e2442d44cdaa79 Parents: b7fc5dc 738a579 Author: Tyler Hobbs Authored: Tue Sep 27 11:52:49 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 11:52:49 2016 -0500 -- --
[05/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/738a5799 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/738a5799 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/738a5799 Branch: refs/heads/cassandra-3.0 Commit: 738a57992cd725feda3aad8f9dcfb40a0e82823d Parents: 6dc595d cdd535f Author: Tyler Hobbs Authored: Tue Sep 27 11:52:31 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 11:52:31 2016 -0500 -- --
[09/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8aa6f29c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8aa6f29c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8aa6f29c Branch: refs/heads/trunk Commit: 8aa6f29ce63d7e94437b7924e6e2442d44cdaa79 Parents: b7fc5dc 738a579 Author: Tyler Hobbs Authored: Tue Sep 27 11:52:49 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 11:52:49 2016 -0500 -- --
[07/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/738a5799 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/738a5799 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/738a5799 Branch: refs/heads/trunk Commit: 738a57992cd725feda3aad8f9dcfb40a0e82823d Parents: 6dc595d cdd535f Author: Tyler Hobbs Authored: Tue Sep 27 11:52:31 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 11:52:31 2016 -0500 -- --
[jira] [Updated] (CASSANDRA-12420) Duplicated Key in IN clause with a small fetch size will run forever
[ https://issues.apache.org/jira/browse/CASSANDRA-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-12420: Resolution: Fixed Fix Version/s: (was: 2.1.x) 2.1.16 Status: Resolved (was: Patch Available) Thank you, committed with the nits fixed to 2.1 as {{cdd535fcac4ba79bb371e8373c6504d9e3978853}} and merged to 2.2 with {{-s ours}}. > Duplicated Key in IN clause with a small fetch size will run forever > > > Key: CASSANDRA-12420 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12420 > Project: Cassandra > Issue Type: Bug > Components: CQL > Environment: cassandra 2.1.14, driver 2.1.7.1 >Reporter: ZhaoYang >Assignee: Tyler Hobbs > Labels: doc-impacting > Fix For: 2.1.16 > > Attachments: CASSANDRA-12420.patch > > > This can be easily reproduced when the fetch size is smaller than the correct > number of rows. > The table has 2 partition key columns, 1 clustering key column, and 1 regular column. > >Select select = QueryBuilder.select().from("ks", "cf"); > >select.where().and(QueryBuilder.eq("a", 1)); > >select.where().and(QueryBuilder.in("b", Arrays.asList(1, 1, 1))); > >select.setFetchSize(5); > For now we deduplicate the keys on the client side, but it would be better to fix this inside Cassandra. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
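The committed change (see the surrounding commits for the actual diff) collects the partition keys into a sorted set keyed by the key validator, so duplicates collapse and iteration follows the comparator order. The same idea reduced to plain Java, with a generic comparator and {{Integer}} keys standing in for Cassandra's key validator and serialized keys:

{code}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.NavigableSet;
import java.util.TreeSet;

public final class DedupInValuesSketch
{
    // Collect IN values into a sorted set: duplicates collapse and iteration order
    // becomes the comparator order, which is what paging relies on.
    static NavigableSet<Integer> toSortedKeys(List<Integer> inValues, Comparator<Integer> comparator)
    {
        NavigableSet<Integer> sortedKeys = new TreeSet<>(comparator);
        sortedKeys.addAll(inValues);
        return sortedKeys;
    }

    public static void main(String[] args)
    {
        // IN (1, 1, 1) from the reproduction collapses to a single key.
        System.out.println(toSortedKeys(Arrays.asList(1, 1, 1), Comparator.naturalOrder()));    // [1]
        System.out.println(toSortedKeys(Arrays.asList(3, 1, 3, 2), Comparator.naturalOrder())); // [1, 2, 3]
    }
}
{code}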
[04/10] cassandra git commit: Treat IN values as a set instead of a list
Treat IN values as a set instead of a list Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-12420 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cdd535fc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cdd535fc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cdd535fc Branch: refs/heads/trunk Commit: cdd535fcac4ba79bb371e8373c6504d9e3978853 Parents: 6fb89b9 Author: Tyler Hobbs Authored: Tue Sep 27 11:51:41 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 11:51:41 2016 -0500 -- CHANGES.txt | 3 ++ NEWS.txt| 7 +++ .../cql3/statements/SelectStatement.java| 45 +--- .../entities/FrozenCollectionsTest.java | 8 ++-- .../validation/operations/SelectLimitTest.java | 2 +- .../SelectMultiColumnRelationTest.java | 6 +-- .../operations/SelectOrderByTest.java | 8 ++-- 7 files changed, 41 insertions(+), 38 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/cdd535fc/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 1438e98..b778444 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,7 @@ 2.1.16 + * Avoid infinitely looping result set when paging SELECT queries with + an IN clause with duplicate keys by treating the IN values as a set instead + of a list (CASSANDRA-12420) * Add system property to set the max number of native transport requests in queue (CASSANDRA-11363) * Include column family parameter when -st and -et are provided (CASSANDRA-11866) * Fix queries with empty ByteBuffer values in clustering column restrictions (CASSANDRA-12127) http://git-wip-us.apache.org/repos/asf/cassandra/blob/cdd535fc/NEWS.txt -- diff --git a/NEWS.txt b/NEWS.txt index 6a70adc..2db34ed 100644 --- a/NEWS.txt +++ b/NEWS.txt @@ -18,6 +18,13 @@ using the provided 'sstableupgrade' tool. Upgrading - +- Duplicate partition keys in SELECT statement IN clauses will now be + filtered out, meaning that duplicate results will no longer be returned. + Futhermore, the partitions will be returned in the order of the sorted + partition keys instead of the order of the IN values; this matches the + behavior of Cassandra 2.2+. This was necessary to avoid an infinitely + looping result set when combined with paging under some circumstances. + See CASSANDRA-12420 for details. - The ReversedType behaviour has been corrected for clustering columns of BYTES type containing empty value. Scrub should be run on the existing SSTables containing a descending clustering column of BYTES type to correct http://git-wip-us.apache.org/repos/asf/cassandra/blob/cdd535fc/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java index 40f3f33..fe63b44 100644 --- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java @@ -71,13 +71,6 @@ public class SelectStatement implements CQLStatement private static final int DEFAULT_COUNT_PAGE_SIZE = 1; -/** - * In the current version a query containing duplicate values in an IN restriction on the partition key will - * cause the same record to be returned multiple time. This behavior will be changed in 3.0 but until then - * we will log a warning the first time this problem occurs. 
- */ -private static volatile boolean HAS_LOGGED_WARNING_FOR_IN_RESTRICTION_WITH_DUPLICATES; - private final int boundTerms; public final CFMetaData cfm; public final Parameters parameters; @@ -682,9 +675,9 @@ public class SelectStatement implements CQLStatement : limit; } -private Collection getKeys(final QueryOptions options) throws InvalidRequestException +private NavigableSet getKeys(final QueryOptions options) throws InvalidRequestException { -List keys = new ArrayList(); +TreeSet sortedKeys = new TreeSet<>(cfm.getKeyValidator()); CBuilder builder = cfm.getKeyValidatorAsCType().builder(); for (ColumnDefinition def : cfm.partitionKeyColumns()) { @@ -695,18 +688,14 @@ public class SelectStatement implements CQLStatement if (builder.remainingCount() == 1) { -if (values.size() > 1 && !HAS_LOGGED_WARNING_FOR_IN_RESTRICTION_W
[06/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/738a5799 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/738a5799 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/738a5799 Branch: refs/heads/cassandra-2.2 Commit: 738a57992cd725feda3aad8f9dcfb40a0e82823d Parents: 6dc595d cdd535f Author: Tyler Hobbs Authored: Tue Sep 27 11:52:31 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 11:52:31 2016 -0500 -- --
[10/10] cassandra git commit: Merge branch 'cassandra-3.0' into trunk
Merge branch 'cassandra-3.0' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5692c59d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5692c59d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5692c59d Branch: refs/heads/trunk Commit: 5692c59d14dbc81e0a2ee6d9d0fbd7505d503ab5 Parents: 7bef418 8aa6f29 Author: Tyler Hobbs Authored: Tue Sep 27 11:53:45 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 11:53:45 2016 -0500 -- --
[02/10] cassandra git commit: Treat IN values as a set instead of a list
Treat IN values as a set instead of a list Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-12420 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cdd535fc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cdd535fc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cdd535fc Branch: refs/heads/cassandra-2.2 Commit: cdd535fcac4ba79bb371e8373c6504d9e3978853 Parents: 6fb89b9 Author: Tyler Hobbs Authored: Tue Sep 27 11:51:41 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 11:51:41 2016 -0500 -- CHANGES.txt | 3 ++ NEWS.txt| 7 +++ .../cql3/statements/SelectStatement.java| 45 +--- .../entities/FrozenCollectionsTest.java | 8 ++-- .../validation/operations/SelectLimitTest.java | 2 +- .../SelectMultiColumnRelationTest.java | 6 +-- .../operations/SelectOrderByTest.java | 8 ++-- 7 files changed, 41 insertions(+), 38 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/cdd535fc/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 1438e98..b778444 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,7 @@ 2.1.16 + * Avoid infinitely looping result set when paging SELECT queries with + an IN clause with duplicate keys by treating the IN values as a set instead + of a list (CASSANDRA-12420) * Add system property to set the max number of native transport requests in queue (CASSANDRA-11363) * Include column family parameter when -st and -et are provided (CASSANDRA-11866) * Fix queries with empty ByteBuffer values in clustering column restrictions (CASSANDRA-12127) http://git-wip-us.apache.org/repos/asf/cassandra/blob/cdd535fc/NEWS.txt -- diff --git a/NEWS.txt b/NEWS.txt index 6a70adc..2db34ed 100644 --- a/NEWS.txt +++ b/NEWS.txt @@ -18,6 +18,13 @@ using the provided 'sstableupgrade' tool. Upgrading - +- Duplicate partition keys in SELECT statement IN clauses will now be + filtered out, meaning that duplicate results will no longer be returned. + Futhermore, the partitions will be returned in the order of the sorted + partition keys instead of the order of the IN values; this matches the + behavior of Cassandra 2.2+. This was necessary to avoid an infinitely + looping result set when combined with paging under some circumstances. + See CASSANDRA-12420 for details. - The ReversedType behaviour has been corrected for clustering columns of BYTES type containing empty value. Scrub should be run on the existing SSTables containing a descending clustering column of BYTES type to correct http://git-wip-us.apache.org/repos/asf/cassandra/blob/cdd535fc/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java index 40f3f33..fe63b44 100644 --- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java @@ -71,13 +71,6 @@ public class SelectStatement implements CQLStatement private static final int DEFAULT_COUNT_PAGE_SIZE = 1; -/** - * In the current version a query containing duplicate values in an IN restriction on the partition key will - * cause the same record to be returned multiple time. This behavior will be changed in 3.0 but until then - * we will log a warning the first time this problem occurs. 
- */ -private static volatile boolean HAS_LOGGED_WARNING_FOR_IN_RESTRICTION_WITH_DUPLICATES; - private final int boundTerms; public final CFMetaData cfm; public final Parameters parameters; @@ -682,9 +675,9 @@ public class SelectStatement implements CQLStatement : limit; } -private Collection getKeys(final QueryOptions options) throws InvalidRequestException +private NavigableSet getKeys(final QueryOptions options) throws InvalidRequestException { -List keys = new ArrayList(); +TreeSet sortedKeys = new TreeSet<>(cfm.getKeyValidator()); CBuilder builder = cfm.getKeyValidatorAsCType().builder(); for (ColumnDefinition def : cfm.partitionKeyColumns()) { @@ -695,18 +688,14 @@ public class SelectStatement implements CQLStatement if (builder.remainingCount() == 1) { -if (values.size() > 1 && !HAS_LOGGED_WARNING_FOR_IN_RESTR
[03/10] cassandra git commit: Treat IN values as a set instead of a list
Treat IN values as a set instead of a list Patch by Tyler Hobbs; reviewed by Benjamin Lerer for CASSANDRA-12420 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cdd535fc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cdd535fc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cdd535fc Branch: refs/heads/cassandra-3.0 Commit: cdd535fcac4ba79bb371e8373c6504d9e3978853 Parents: 6fb89b9 Author: Tyler Hobbs Authored: Tue Sep 27 11:51:41 2016 -0500 Committer: Tyler Hobbs Committed: Tue Sep 27 11:51:41 2016 -0500 -- CHANGES.txt | 3 ++ NEWS.txt| 7 +++ .../cql3/statements/SelectStatement.java| 45 +--- .../entities/FrozenCollectionsTest.java | 8 ++-- .../validation/operations/SelectLimitTest.java | 2 +- .../SelectMultiColumnRelationTest.java | 6 +-- .../operations/SelectOrderByTest.java | 8 ++-- 7 files changed, 41 insertions(+), 38 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/cdd535fc/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 1438e98..b778444 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,7 @@ 2.1.16 + * Avoid infinitely looping result set when paging SELECT queries with + an IN clause with duplicate keys by treating the IN values as a set instead + of a list (CASSANDRA-12420) * Add system property to set the max number of native transport requests in queue (CASSANDRA-11363) * Include column family parameter when -st and -et are provided (CASSANDRA-11866) * Fix queries with empty ByteBuffer values in clustering column restrictions (CASSANDRA-12127) http://git-wip-us.apache.org/repos/asf/cassandra/blob/cdd535fc/NEWS.txt -- diff --git a/NEWS.txt b/NEWS.txt index 6a70adc..2db34ed 100644 --- a/NEWS.txt +++ b/NEWS.txt @@ -18,6 +18,13 @@ using the provided 'sstableupgrade' tool. Upgrading - +- Duplicate partition keys in SELECT statement IN clauses will now be + filtered out, meaning that duplicate results will no longer be returned. + Futhermore, the partitions will be returned in the order of the sorted + partition keys instead of the order of the IN values; this matches the + behavior of Cassandra 2.2+. This was necessary to avoid an infinitely + looping result set when combined with paging under some circumstances. + See CASSANDRA-12420 for details. - The ReversedType behaviour has been corrected for clustering columns of BYTES type containing empty value. Scrub should be run on the existing SSTables containing a descending clustering column of BYTES type to correct http://git-wip-us.apache.org/repos/asf/cassandra/blob/cdd535fc/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java index 40f3f33..fe63b44 100644 --- a/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/SelectStatement.java @@ -71,13 +71,6 @@ public class SelectStatement implements CQLStatement private static final int DEFAULT_COUNT_PAGE_SIZE = 1; -/** - * In the current version a query containing duplicate values in an IN restriction on the partition key will - * cause the same record to be returned multiple time. This behavior will be changed in 3.0 but until then - * we will log a warning the first time this problem occurs. 
- */ -private static volatile boolean HAS_LOGGED_WARNING_FOR_IN_RESTRICTION_WITH_DUPLICATES; - private final int boundTerms; public final CFMetaData cfm; public final Parameters parameters; @@ -682,9 +675,9 @@ public class SelectStatement implements CQLStatement : limit; } -private Collection getKeys(final QueryOptions options) throws InvalidRequestException +private NavigableSet getKeys(final QueryOptions options) throws InvalidRequestException { -List keys = new ArrayList(); +TreeSet sortedKeys = new TreeSet<>(cfm.getKeyValidator()); CBuilder builder = cfm.getKeyValidatorAsCType().builder(); for (ColumnDefinition def : cfm.partitionKeyColumns()) { @@ -695,18 +688,14 @@ public class SelectStatement implements CQLStatement if (builder.remainingCount() == 1) { -if (values.size() > 1 && !HAS_LOGGED_WARNING_FOR_IN_RESTR
[jira] [Updated] (CASSANDRA-12717) Fix IllegalArgumentException in CompactionTask
[ https://issues.apache.org/jira/browse/CASSANDRA-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yasuharu Goto updated CASSANDRA-12717: -- Status: Patch Available (was: Open) > Fix IllegalArgumentException in CompactionTask > -- > > Key: CASSANDRA-12717 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12717 > Project: Cassandra > Issue Type: Bug >Reporter: Yasuharu Goto >Assignee: Yasuharu Goto > > When I was ran LargePartitionsTest.test_11_1G at trunk, I found that this > test fails due to a java.lang.IllegalArgumentException during compaction. > This exception apparently happens when the compaction merges a large (>2GB) > partition. > {noformat} > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,074 ?:? - No segments in > reserve; creating a fresh one > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,437 ?:? - No segments in > reserve; creating a fresh one > WARN [CompactionExecutor:14] 2016-09-28 00:32:48,463 ?:? - Writing large > partition cql_test_keyspace/table_4:10 (1.004GiB) > ERROR [CompactionExecutor:14] 2016-09-28 00:32:49,734 ?:? - Fatal exception > in thread Thread[CompactionExecutor:14,1,main] > java.lang.IllegalArgumentException: Out of range: 2234434614 > at com.google.common.primitives.Ints.checkedCast(Ints.java:91) > ~[guava-18.0.jar:na] > at > org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:206) > ~[main/:na] > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ~[main/:na] > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85) > ~[main/:na] > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61) > ~[main/:na] > at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:267) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_77] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[na:1.8.0_77] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > ~[na:1.8.0_77] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_77] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77] > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:49,909 ?:? - No segments in > reserve; creating a fresh one > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,148 ?:? - No segments in > reserve; creating a fresh one > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,385 ?:? - No segments in > reserve; creating a fresh one > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,620 ?:? 
- No segments in > reserve; creating a fresh one > {noformat} > {noformat} > java.lang.RuntimeException: java.util.concurrent.ExecutionException: > java.lang.IllegalArgumentException: Out of range: 2540348821 > at org.apache.cassandra.utils.Throwables.maybeFail(Throwables.java:51) > at > org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:393) > at > org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:695) > at > org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2066) > at > org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2061) > at org.apache.cassandra.cql3.CQLTester.compact(CQLTester.java:426) > at > org.apache.cassandra.io.sstable.LargePartitionsTest.lambda$withPartitionSize$2(LargePartitionsTest.java:92) > at > org.apache.cassandra.io.sstable.LargePartitionsTest.measured(LargePartitionsTest.java:50) > at > org.apache.cassandra.io.sstable.LargePartitionsTest.withPartitionSize(LargePartitionsTest.java:90) > at > org.apache.cassandra.io.sstable.LargePartitionsTest.test_11_1G(LargePartitionsTest.java:198) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) > at > org.j
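The failure above is easy to reproduce in isolation: Guava's {{Ints.checkedCast}} rejects any {{long}} above {{Integer.MAX_VALUE}}, and the byte count of a >2GB partition exceeds that. The sketch below only illustrates the failing call and two generic ways to avoid it; it is not the fix from the patch under review.

{code}
import com.google.common.primitives.Ints;

public final class CheckedCastSketch
{
    public static void main(String[] args)
    {
        long bytesWritten = 2_234_434_614L; // larger than Integer.MAX_VALUE (2_147_483_647)

        // This is the call that blows up during compaction of a >2GB partition:
        try
        {
            int truncated = Ints.checkedCast(bytesWritten);
            System.out.println(truncated);
        }
        catch (IllegalArgumentException e)
        {
            System.out.println("checkedCast failed: " + e.getMessage()); // "Out of range: 2234434614"
        }

        // Two generic ways to avoid the exception, depending on what the caller needs:
        System.out.println(Ints.saturatedCast(bytesWritten)); // clamps to Integer.MAX_VALUE
        long kept = bytesWritten;                             // or simply keep the value as a long
        System.out.println(kept);
    }
}
{code}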
[jira] [Updated] (CASSANDRA-12717) Fix IllegalArgumentException in CompactionTask
[ https://issues.apache.org/jira/browse/CASSANDRA-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yasuharu Goto updated CASSANDRA-12717: -- Description: When I was ran LargePartitionsTest.test_11_1G at trunk, I found that this test fails due to a java.lang.IllegalArgumentException during compaction. This exception apparently happens when the compaction merges a large (>2GB) partition. {noformat} DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,074 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,437 ?:? - No segments in reserve; creating a fresh one WARN [CompactionExecutor:14] 2016-09-28 00:32:48,463 ?:? - Writing large partition cql_test_keyspace/table_4:10 (1.004GiB) ERROR [CompactionExecutor:14] 2016-09-28 00:32:49,734 ?:? - Fatal exception in thread Thread[CompactionExecutor:14,1,main] java.lang.IllegalArgumentException: Out of range: 2234434614 at com.google.common.primitives.Ints.checkedCast(Ints.java:91) ~[guava-18.0.jar:na] at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:206) ~[main/:na] at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[main/:na] at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85) ~[main/:na] at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61) ~[main/:na] at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:267) ~[main/:na] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_77] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77] DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:49,909 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,148 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,385 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,620 ?:? 
- No segments in reserve; creating a fresh one {noformat} {noformat} java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Out of range: 2540348821 at org.apache.cassandra.utils.Throwables.maybeFail(Throwables.java:51) at org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:393) at org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:695) at org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2066) at org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2061) at org.apache.cassandra.cql3.CQLTester.compact(CQLTester.java:426) at org.apache.cassandra.io.sstable.LargePartitionsTest.lambda$withPartitionSize$2(LargePartitionsTest.java:92) at org.apache.cassandra.io.sstable.LargePartitionsTest.measured(LargePartitionsTest.java:50) at org.apache.cassandra.io.sstable.LargePartitionsTest.withPartitionSize(LargePartitionsTest.java:90) at org.apache.cassandra.io.sstable.LargePartitionsTest.test_11_1G(LargePartitionsTest.java:198) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at com.intellij.junit4.JUnit4TestRunnerUtil$IgnoreIgnoredTestJUnit4ClassRunner.runChild(JUnit4TestRunnerUtil.java:358) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:1
[jira] [Commented] (CASSANDRA-9928) Add Support for multiple non-primary key columns in Materialized View primary keys
[ https://issues.apache.org/jira/browse/CASSANDRA-9928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526654#comment-15526654 ] Donovan Hsieh commented on CASSANDRA-9928: -- Whatever the technical issues associated with the race condition stated above, the limit of just 1 non-PK column, imho, makes MVs seriously handicapped. If this limitation is not removed, I can't see any serious real-world application using MVs effectively. > Add Support for multiple non-primary key columns in Materialized View primary > keys > -- > > Key: CASSANDRA-9928 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9928 > Project: Cassandra > Issue Type: Improvement >Reporter: T Jake Luciani > Labels: materializedviews > Fix For: 3.x > > > Currently we don't allow more than 1 non-primary-key column from the base table in a MV > primary key. We should remove this restriction assuming we continue > filtering out nulls. With nulls allowed in the MV columns there are a lot > of multiplicative implications we need to think through. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12225) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Yeksigian updated CASSANDRA-12225: --- Resolution: Fixed Reviewer: Philip Thompson Fix Version/s: (was: 3.0.x) (was: 3.x) 3.0.10 3.10 Status: Resolved (was: Patch Available) Committed to dtest. > dtest failure in > materialized_views_test.TestMaterializedViews.clustering_column_test > - > > Key: CASSANDRA-12225 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12225 > Project: Cassandra > Issue Type: Test >Reporter: Sean McCarthy >Assignee: Carl Yeksigian > Labels: dtest > Fix For: 3.10, 3.0.10 > > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log > > > example failure: > http://cassci.datastax.com/job/trunk_offheap_dtest/336/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test > Failed on CassCI build trunk_offheap_dtest #336 > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/materialized_views_test.py", line > 321, in clustering_column_test > self.assertEqual(len(result), 2, "Expecting {} users, got {}".format(2, > len(result))) > File "/usr/lib/python2.7/unittest/case.py", line 513, in assertEqual > assertion_func(first, second, msg=msg) > File "/usr/lib/python2.7/unittest/case.py", line 506, in _baseAssertEqual > raise self.failureException(msg) > "Expecting 2 users, got 1 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12632) Failure in LogTransactionTest.testUnparsableFirstRecord-compression
[ https://issues.apache.org/jira/browse/CASSANDRA-12632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526625#comment-15526625 ] Edward Capriolo commented on CASSANDRA-12632: - +1 non binding. Question I notice several tests in this class that open a tidier but do not clean it. In what particular cases is it needed vs not? Should this be worked into a @before or @after annotation ? > Failure in LogTransactionTest.testUnparsableFirstRecord-compression > --- > > Key: CASSANDRA-12632 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12632 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Joel Knighton >Assignee: Stefania > Fix For: 3.0.x, 3.x > > > Stacktrace: > {code} > junit.framework.AssertionFailedError: > [/home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Index.db, > > /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-TOC.txt, > > /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Filter.db, > > /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc-1-big-Data.db, > > /home/automaton/cassandra/build/test/cassandra/data:161/TransactionLogsTest/mockcf23-73ad523078d311e6985893d33dad3001/mc_txn_compaction_73af4e00-78d3-11e6-9858-93d33dad3001.log] > at > org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1228) > at > org.apache.cassandra.db.lifecycle.LogTransactionTest.assertFiles(LogTransactionTest.java:1196) > at > org.apache.cassandra.db.lifecycle.LogTransactionTest.testCorruptRecord(LogTransactionTest.java:1040) > at > org.apache.cassandra.db.lifecycle.LogTransactionTest.testUnparsableFirstRecord(LogTransactionTest.java:988) > {code} > Example failure: > http://cassci.datastax.com/job/cassandra-3.9_testall/89/testReport/junit/org.apache.cassandra.db.lifecycle/LogTransactionTest/testUnparsableFirstRecord_compression/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
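On the @Before/@After question raised in the comment above: a common pattern is to record every resource a test opens and release them all in a single {{@After}} method, so individual tests cannot forget the cleanup. A generic JUnit 4 sketch follows; the {{Tidier}} interface below is a stand-in, not the type used by {{LogTransactionTest}}.

{code}
import java.util.ArrayList;
import java.util.List;

import org.junit.After;
import org.junit.Test;

public class TidierCleanupSketchTest
{
    // Stand-in for a resource that must be explicitly released after each test.
    interface Tidier extends AutoCloseable
    {
        void close();
    }

    private final List<Tidier> openedTidiers = new ArrayList<>();

    // Tests call this instead of opening tidiers directly, so cleanup is centralized.
    private Tidier openTidier()
    {
        Tidier tidier = new Tidier() { public void close() { /* release files, refs, ... */ } };
        openedTidiers.add(tidier);
        return tidier;
    }

    @After
    public void closeTidiers()
    {
        for (Tidier tidier : openedTidiers)
            tidier.close();
        openedTidiers.clear();
    }

    @Test
    public void exampleTest()
    {
        Tidier tidier = openTidier();
        // ... exercise the code under test; no explicit cleanup needed here ...
    }
}
{code}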
[jira] [Comment Edited] (CASSANDRA-12717) Fix IllegalArgumentException in CompactionTask
[ https://issues.apache.org/jira/browse/CASSANDRA-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526579#comment-15526579 ] Yasuharu Goto edited comment on CASSANDRA-12717 at 9/27/16 4:15 PM: Patch is here. Could you please review this? Fix IllegalArgumentException in CompactionTask https://github.com/matope/cassandra/commit/d6c40dd3d4d95dba8b9c3f88de1015315e45990d was (Author: yasuharu): Patch is here. Could you please review this? Fix IllegalArgumentException in CompactionTask https://github.com/matope/cassandra/commit/a9ccd9731e83fdd4148325c9a727b64e4982e2ba > Fix IllegalArgumentException in CompactionTask > -- > > Key: CASSANDRA-12717 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12717 > Project: Cassandra > Issue Type: Bug >Reporter: Yasuharu Goto >Assignee: Yasuharu Goto > > When I was ran LargePartitionsTest.test_11_1G at trunk, I found that this > test fails due to a java.lang.IllegalArgumentException during compaction > and, eventually fails. > This exception apparently happens when a compaction generates large sstable. > {noformat} > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,074 ?:? - No segments in > reserve; creating a fresh one > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,437 ?:? - No segments in > reserve; creating a fresh one > WARN [CompactionExecutor:14] 2016-09-28 00:32:48,463 ?:? - Writing large > partition cql_test_keyspace/table_4:10 (1.004GiB) > ERROR [CompactionExecutor:14] 2016-09-28 00:32:49,734 ?:? - Fatal exception > in thread Thread[CompactionExecutor:14,1,main] > java.lang.IllegalArgumentException: Out of range: 2234434614 > at com.google.common.primitives.Ints.checkedCast(Ints.java:91) > ~[guava-18.0.jar:na] > at > org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:206) > ~[main/:na] > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ~[main/:na] > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85) > ~[main/:na] > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61) > ~[main/:na] > at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:267) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_77] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[na:1.8.0_77] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > ~[na:1.8.0_77] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_77] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77] > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:49,909 ?:? - No segments in > reserve; creating a fresh one > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,148 ?:? - No segments in > reserve; creating a fresh one > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,385 ?:? - No segments in > reserve; creating a fresh one > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,620 ?:? 
- No segments in > reserve; creating a fresh one > {noformat} > {noformat} > java.lang.RuntimeException: java.util.concurrent.ExecutionException: > java.lang.IllegalArgumentException: Out of range: 2540348821 > at org.apache.cassandra.utils.Throwables.maybeFail(Throwables.java:51) > at > org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:393) > at > org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:695) > at > org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2066) > at > org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2061) > at org.apache.cassandra.cql3.CQLTester.compact(CQLTester.java:426) > at > org.apache.cassandra.io.sstable.LargePartitionsTest.lambda$withPartitionSize$2(LargePartitionsTest.java:92) > at > org.apache.cassandra.io.sstable.LargePartitionsTest.measured(LargePartitionsTest.java:50) > at > org.apache.cassandra.io.sstable.LargePartitionsTest.withPartitionSize(LargePartitionsTest.java:90) > at > org.apache.cassandra.io.sstable.LargePartitionsTest.test_11_1G(LargePartitionsTest.java:198) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.
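For context on the {{Out of range}} failures quoted above: Guava's {{Ints.checkedCast}} throws {{IllegalArgumentException}} for any long value outside the int range, which is what happens once the quantity being cast (2234434614 here) exceeds Integer.MAX_VALUE (2147483647). Below is a standalone sketch of that behaviour and of the widening/saturating alternatives that avoid it; this is only an illustration, not the proposed CompactionTask patch:
{code}
import com.google.common.primitives.Ints;

public class CheckedCastDemo
{
    public static void main(String[] args)
    {
        long value = 2234434614L; // larger than Integer.MAX_VALUE (2147483647)

        try
        {
            Ints.checkedCast(value); // throws IllegalArgumentException: Out of range: 2234434614
        }
        catch (IllegalArgumentException e)
        {
            System.out.println(e.getMessage());
        }

        // Keeping the quantity as a long, or saturating the cast, does not throw.
        System.out.println(Ints.saturatedCast(value)); // prints 2147483647
    }
}
{code}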
[jira] [Updated] (CASSANDRA-12717) Fix IllegalArgumentException in CompactionTask
[ https://issues.apache.org/jira/browse/CASSANDRA-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yasuharu Goto updated CASSANDRA-12717: -- Description: When I was ran LargePartitionsTest.test_11_1G at trunk, I found that this test fails due to a java.lang.IllegalArgumentException during compaction and, eventually fails. This exception apparently happens when a compaction generates large sstable. {noformat} DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,074 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,437 ?:? - No segments in reserve; creating a fresh one WARN [CompactionExecutor:14] 2016-09-28 00:32:48,463 ?:? - Writing large partition cql_test_keyspace/table_4:10 (1.004GiB) ERROR [CompactionExecutor:14] 2016-09-28 00:32:49,734 ?:? - Fatal exception in thread Thread[CompactionExecutor:14,1,main] java.lang.IllegalArgumentException: Out of range: 2234434614 at com.google.common.primitives.Ints.checkedCast(Ints.java:91) ~[guava-18.0.jar:na] at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:206) ~[main/:na] at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[main/:na] at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85) ~[main/:na] at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61) ~[main/:na] at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:267) ~[main/:na] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_77] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77] DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:49,909 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,148 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,385 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,620 ?:? 
- No segments in reserve; creating a fresh one {noformat} {noformat} java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Out of range: 2540348821 at org.apache.cassandra.utils.Throwables.maybeFail(Throwables.java:51) at org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:393) at org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:695) at org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2066) at org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2061) at org.apache.cassandra.cql3.CQLTester.compact(CQLTester.java:426) at org.apache.cassandra.io.sstable.LargePartitionsTest.lambda$withPartitionSize$2(LargePartitionsTest.java:92) at org.apache.cassandra.io.sstable.LargePartitionsTest.measured(LargePartitionsTest.java:50) at org.apache.cassandra.io.sstable.LargePartitionsTest.withPartitionSize(LargePartitionsTest.java:90) at org.apache.cassandra.io.sstable.LargePartitionsTest.test_11_1G(LargePartitionsTest.java:198) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at com.intellij.junit4.JUnit4TestRunnerUtil$IgnoreIgnoredTestJUnit4ClassRunner.runChild(JUnit4TestRunnerUtil.java:358) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44) at org.junit.runners.ParentRunner.runChildren(ParentR
[jira] [Commented] (CASSANDRA-12717) Fix IllegalArgumentException in CompactionTask
[ https://issues.apache.org/jira/browse/CASSANDRA-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526579#comment-15526579 ] Yasuharu Goto commented on CASSANDRA-12717: --- Patch is here. Could you please review this? Fix IllegalArgumentException in CompactionTask https://github.com/matope/cassandra/commit/a9ccd9731e83fdd4148325c9a727b64e4982e2ba > Fix IllegalArgumentException in CompactionTask > -- > > Key: CASSANDRA-12717 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12717 > Project: Cassandra > Issue Type: Bug >Reporter: Yasuharu Goto >Assignee: Yasuharu Goto > > When I was ran LargePartitionsTest.test_11_1G at trunk, I found that this > test fails due to a java.lang.IllegalArgumentException during compaction > {noformat} > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,074 ?:? - No segments in > reserve; creating a fresh one > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,437 ?:? - No segments in > reserve; creating a fresh one > WARN [CompactionExecutor:14] 2016-09-28 00:32:48,463 ?:? - Writing large > partition cql_test_keyspace/table_4:10 (1.004GiB) > ERROR [CompactionExecutor:14] 2016-09-28 00:32:49,734 ?:? - Fatal exception > in thread Thread[CompactionExecutor:14,1,main] > java.lang.IllegalArgumentException: Out of range: 2234434614 > at com.google.common.primitives.Ints.checkedCast(Ints.java:91) > ~[guava-18.0.jar:na] > at > org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:206) > ~[main/:na] > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ~[main/:na] > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85) > ~[main/:na] > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61) > ~[main/:na] > at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:267) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_77] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[na:1.8.0_77] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > ~[na:1.8.0_77] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_77] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77] > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:49,909 ?:? - No segments in > reserve; creating a fresh one > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,148 ?:? - No segments in > reserve; creating a fresh one > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,385 ?:? - No segments in > reserve; creating a fresh one > DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,620 ?:? - No segments in > reserve; creating a fresh one > {noformat} > and, eventually fails. 
> {noformat} > java.lang.RuntimeException: java.util.concurrent.ExecutionException: > java.lang.IllegalArgumentException: Out of range: 2540348821 > at org.apache.cassandra.utils.Throwables.maybeFail(Throwables.java:51) > at > org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:393) > at > org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:695) > at > org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2066) > at > org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2061) > at org.apache.cassandra.cql3.CQLTester.compact(CQLTester.java:426) > at > org.apache.cassandra.io.sstable.LargePartitionsTest.lambda$withPartitionSize$2(LargePartitionsTest.java:92) > at > org.apache.cassandra.io.sstable.LargePartitionsTest.measured(LargePartitionsTest.java:50) > at > org.apache.cassandra.io.sstable.LargePartitionsTest.withPartitionSize(LargePartitionsTest.java:90) > at > org.apache.cassandra.io.sstable.LargePartitionsTest.test_11_1G(LargePartitionsTest.java:198) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.ja
[jira] [Updated] (CASSANDRA-12717) Fix IllegalArgumentException in CompactionTask
[ https://issues.apache.org/jira/browse/CASSANDRA-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yasuharu Goto updated CASSANDRA-12717: -- Description: When I was ran LargePartitionsTest.test_11_1G at trunk, I found that this test fails due to a java.lang.IllegalArgumentException during compaction {noformat} DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,074 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,437 ?:? - No segments in reserve; creating a fresh one WARN [CompactionExecutor:14] 2016-09-28 00:32:48,463 ?:? - Writing large partition cql_test_keyspace/table_4:10 (1.004GiB) ERROR [CompactionExecutor:14] 2016-09-28 00:32:49,734 ?:? - Fatal exception in thread Thread[CompactionExecutor:14,1,main] java.lang.IllegalArgumentException: Out of range: 2234434614 at com.google.common.primitives.Ints.checkedCast(Ints.java:91) ~[guava-18.0.jar:na] at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:206) ~[main/:na] at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[main/:na] at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85) ~[main/:na] at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61) ~[main/:na] at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:267) ~[main/:na] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_77] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77] DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:49,909 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,148 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,385 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,620 ?:? - No segments in reserve; creating a fresh one {noformat} and, eventually fails. 
{noformat} java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Out of range: 2540348821 at org.apache.cassandra.utils.Throwables.maybeFail(Throwables.java:51) at org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:393) at org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:695) at org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2066) at org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2061) at org.apache.cassandra.cql3.CQLTester.compact(CQLTester.java:426) at org.apache.cassandra.io.sstable.LargePartitionsTest.lambda$withPartitionSize$2(LargePartitionsTest.java:92) at org.apache.cassandra.io.sstable.LargePartitionsTest.measured(LargePartitionsTest.java:50) at org.apache.cassandra.io.sstable.LargePartitionsTest.withPartitionSize(LargePartitionsTest.java:90) at org.apache.cassandra.io.sstable.LargePartitionsTest.test_11_1G(LargePartitionsTest.java:198) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at com.intellij.junit4.JUnit4TestRunnerUtil$IgnoreIgnoredTestJUnit4ClassRunner.runChild(JUnit4TestRunnerUtil.java:358) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180) at org.junit.runners.ParentRunner.access$000(ParentRu
[jira] [Comment Edited] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException
[ https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15525125#comment-15525125 ] Jeff Jirsa edited comment on CASSANDRA-12700 at 9/27/16 4:01 PM: - [~rajesh_con] - As a workaround, try executing (via cqlsh): {code} > CONSISTENCY ALL; > UPDATE system_auth.roles SET is_superuser=True where role='cassandra_test'; {code} That should work around the NPE until it's fixed (the NPE is coming from a missing is_superuser field). was (Author: jjirsa): [~rajesh_con] - As a workaround, try executing (via cqlsh): {code} > CONSISTENCY ALL; > UPDATE system_auth SET is_superuser=True where role='cassandra_test'; {code} That should work around the NPE until it's fixed (the NPE is coming from a missing is_superuser field). > During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes > Connection get lost, because of Server NullPointerException > -- > > Key: CASSANDRA-12700 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12700 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra cluster with two nodes running C* version > 3.7.0 and Python Driver 3.7 using Python 2.7.11. > OS: Red Hat Enterprise Linux 6.x x64, > RAM :8GB > DISK :210GB > Cores: 2 > Java 1.8.0_73 JRE >Reporter: Rajesh Radhakrishnan >Assignee: Jeff Jirsa > Fix For: 3.x > > > In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) > with Python driver 3.7. Trying to insert 2 million row or more data into the > database, but sometimes we are getting "Null pointer Exception". > We are using Python 2.7.11 and Java 1.8.0_73 in the Cassandra nodes and in > the client its Python 2.7.12. > {code:title=cassandra server log} > ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - > Unexpected exception during request; channel = [id: 0xc208da86, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.service.ClientState.login(ClientState.java:227) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [apache-cassandra-3.7.0.jar:3.7.0] > at > 
org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [apache-cassandra-3.7.0.jar:3.7.0] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_73] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) > [apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.con
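To make the failure mode above concrete: the NPE comes from deserializing a boolean cell ({{is_superuser}}) that was never written, so the deserializer is handed a null buffer. The sketch below only mimics that behaviour with an illustrative helper; it is not the actual {{BooleanSerializer}} source:
{code}
import java.nio.ByteBuffer;

public class MissingBooleanCellDemo
{
    // Illustrative boolean deserializer: an absent cell shows up as a null buffer.
    static Boolean deserializeBoolean(ByteBuffer bytes)
    {
        return bytes.remaining() == 0 ? null : bytes.get(bytes.position()) != 0;
    }

    public static void main(String[] args)
    {
        System.out.println(deserializeBoolean(ByteBuffer.wrap(new byte[]{ 1 }))); // true

        try
        {
            deserializeBoolean(null); // a role row with no is_superuser value ends up here
        }
        catch (NullPointerException e)
        {
            System.out.println("NullPointerException, as in the server log above");
        }
    }
}
{code}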
[jira] [Created] (CASSANDRA-12717) Fix IllegalArgumentException in CompactionTask
Yasuharu Goto created CASSANDRA-12717: - Summary: Fix IllegalArgumentException in CompactionTask Key: CASSANDRA-12717 URL: https://issues.apache.org/jira/browse/CASSANDRA-12717 Project: Cassandra Issue Type: Bug Reporter: Yasuharu Goto Assignee: Yasuharu Goto When I ran LargePartitionsTest.test_11_1G, I found that this test fails due to a java.lang.IllegalArgumentException during compaction {noformat} DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,074 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:48,437 ?:? - No segments in reserve; creating a fresh one WARN [CompactionExecutor:14] 2016-09-28 00:32:48,463 ?:? - Writing large partition cql_test_keyspace/table_4:10 (1.004GiB) ERROR [CompactionExecutor:14] 2016-09-28 00:32:49,734 ?:? - Fatal exception in thread Thread[CompactionExecutor:14,1,main] java.lang.IllegalArgumentException: Out of range: 2234434614 at com.google.common.primitives.Ints.checkedCast(Ints.java:91) ~[guava-18.0.jar:na] at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:206) ~[main/:na] at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[main/:na] at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:85) ~[main/:na] at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:61) ~[main/:na] at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:267) ~[main/:na] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_77] at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_77] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77] DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:49,909 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,148 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,385 ?:? - No segments in reserve; creating a fresh one DEBUG [COMMIT-LOG-ALLOCATOR] 2016-09-28 00:32:50,620 ?:? - No segments in reserve; creating a fresh one {noformat} and eventually fails. 
{noformat} java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: Out of range: 2540348821 at org.apache.cassandra.utils.Throwables.maybeFail(Throwables.java:51) at org.apache.cassandra.utils.FBUtilities.waitOnFutures(FBUtilities.java:393) at org.apache.cassandra.db.compaction.CompactionManager.performMaximal(CompactionManager.java:695) at org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2066) at org.apache.cassandra.db.ColumnFamilyStore.forceMajorCompaction(ColumnFamilyStore.java:2061) at org.apache.cassandra.cql3.CQLTester.compact(CQLTester.java:426) at org.apache.cassandra.io.sstable.LargePartitionsTest.lambda$withPartitionSize$2(LargePartitionsTest.java:92) at org.apache.cassandra.io.sstable.LargePartitionsTest.measured(LargePartitionsTest.java:50) at org.apache.cassandra.io.sstable.LargePartitionsTest.withPartitionSize(LargePartitionsTest.java:90) at org.apache.cassandra.io.sstable.LargePartitionsTest.test_11_1G(LargePartitionsTest.java:198) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at com.intellij.junit4.JUnit4TestRunnerUtil$IgnoreIgnoredTestJUnit4ClassRunner.runChild(JUnit4TestRunnerUtil.java:358) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4
[jira] [Commented] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException
[ https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526531#comment-15526531 ] Rajesh Radhakrishnan commented on CASSANDRA-12700: -- Yes I assumed and did executed those CQL too. So far no more NPE. I will keep an eye on the system.log > During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes > Connection get lost, because of Server NullPointerException > -- > > Key: CASSANDRA-12700 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12700 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra cluster with two nodes running C* version > 3.7.0 and Python Driver 3.7 using Python 2.7.11. > OS: Red Hat Enterprise Linux 6.x x64, > RAM :8GB > DISK :210GB > Cores: 2 > Java 1.8.0_73 JRE >Reporter: Rajesh Radhakrishnan >Assignee: Jeff Jirsa > Fix For: 3.x > > > In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) > with Python driver 3.7. Trying to insert 2 million row or more data into the > database, but sometimes we are getting "Null pointer Exception". > We are using Python 2.7.11 and Java 1.8.0_73 in the Cassandra nodes and in > the client its Python 2.7.12. > {code:title=cassandra server log} > ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - > Unexpected exception during request; channel = [id: 0xc208da86, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.service.ClientState.login(ClientState.java:227) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [apache-cassandra-3.7.0.jar:3.7.0] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > 
io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_73] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) > [apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-3.7.0.jar:3.7.0] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73] > ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - > Unexpected exception during request; channel = [id: 0x8e2eae00, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar
[jira] [Commented] (CASSANDRA-12443) Remove alter type support
[ https://issues.apache.org/jira/browse/CASSANDRA-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526479#comment-15526479 ] Carl Yeksigian commented on CASSANDRA-12443: I've pushed a branch for 3.0 (not sure where this should go, but it is a simple enough patch to roll it forward to trunk). This disallows the changing of types; I've changed the things that will help CASSANDRA-10309. The only worry with this solution is that someone could execute an alter statement on a lower-version cluster machine and change the table's schema on some of the cluster machines. However, since we say not to touch the schema while operating in a mixed-version cluster, this shouldn't happen. [branch|https://github.com/carlyeks/cassandra/tree/ticket/12443/3.0] [utest|http://cassci.datastax.com/job/carlyeks-ticket-12443-3.0-testall/] [dtest|http://cassci.datastax.com/job/carlyeks-ticket-12443-3.0-dtest/] > Remove alter type support > - > > Key: CASSANDRA-12443 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12443 > Project: Cassandra > Issue Type: Improvement >Reporter: Carl Yeksigian >Assignee: Carl Yeksigian > Fix For: 4.x > > > Currently, we allow altering of types. However, because we no longer store > the length for all types, switching from a fixed-width to a > variable-width type causes issues: commitlog playback breaking startup, > queries currently in flight getting back bad results, and special casing > required to handle the changes. In addition, this would solve > CASSANDRA-10309, as there is no possibility of the types changing while an > SSTableReader is open. > For fixed-length, compatible types, the alter also doesn't add much over a > cast, so users could use that in order to retrieve the altered type. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
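To illustrate why the fixed-width to variable-width direction is the problematic one (data written under the old schema being re-read under the new one), here is a conceptual sketch. It is not Cassandra's actual commitlog or sstable encoding; it only shows the general idea that fixed-width values are written without a length, so a reader that now expects a length-prefixed, variable-width value misparses old data:
{code}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;

public class FixedVsVariableWidthSketch
{
    public static void main(String[] args) throws IOException
    {
        // Writer, old schema: the column is an int, so exactly 4 bytes and no length prefix.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buffer))
        {
            out.writeInt(1); // the stored value
        }

        // Reader, new schema: the column is now variable-width, so a length is expected first.
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer.toByteArray())))
        {
            int length = in.readInt();     // the stored value 1 is misread as "length = 1"
            byte[] payload = new byte[length];
            in.readFully(payload);         // fails: no payload bytes remain
        }
        catch (EOFException e)
        {
            System.out.println("EOFException: old fixed-width data misread under the new schema");
        }
    }
}
{code}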
[jira] [Commented] (CASSANDRA-12700) During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes Connection get lost, because of Server NullPointerException
[ https://issues.apache.org/jira/browse/CASSANDRA-12700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526470#comment-15526470 ] Jeff Jirsa commented on CASSANDRA-12700: Yes, sorry, system_auth.roles > During writing data into Cassandra 3.7.0 using Python driver 3.7 sometimes > Connection get lost, because of Server NullPointerException > -- > > Key: CASSANDRA-12700 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12700 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra cluster with two nodes running C* version > 3.7.0 and Python Driver 3.7 using Python 2.7.11. > OS: Red Hat Enterprise Linux 6.x x64, > RAM :8GB > DISK :210GB > Cores: 2 > Java 1.8.0_73 JRE >Reporter: Rajesh Radhakrishnan >Assignee: Jeff Jirsa > Fix For: 3.x > > > In our C* cluster we are using the latest Cassandra 3.7.0 (datastax-ddc.3.70) > with Python driver 3.7. Trying to insert 2 million row or more data into the > database, but sometimes we are getting "Null pointer Exception". > We are using Python 2.7.11 and Java 1.8.0_73 in the Cassandra nodes and in > the client its Python 2.7.12. > {code:title=cassandra server log} > ERROR [SharedPool-Worker-6] 2016-09-23 09:42:55,002 Message.java:611 - > Unexpected exception during request; channel = [id: 0xc208da86, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58418] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:24) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:113) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.cql3.UntypedResultSet$Row.getBoolean(UntypedResultSet.java:273) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:85) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager$1.apply(CassandraRoleManager.java:81) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:503) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:485) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:298) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.service.ClientState.login(ClientState.java:227) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:79) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:507) > [apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:401) > [apache-cassandra-3.7.0.jar:3.7.0] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:292) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:32) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > 
io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:283) > [netty-all-4.0.36.Final.jar:4.0.36.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_73] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) > [apache-cassandra-3.7.0.jar:3.7.0] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-3.7.0.jar:3.7.0] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73] > ERROR [SharedPool-Worker-1] 2016-09-23 09:42:56,238 Message.java:611 - > Unexpected exception during request; channel = [id: 0x8e2eae00, > L:/IP1.IP2.IP3.IP4:9042 - R:/IP5.IP6.IP7.IP8:58421] > java.lang.NullPointerException: null > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSerializer.java:33) > ~[apache-cassandra-3.7.0.jar:3.7.0] > at > org.apache.cassandra.serializers.BooleanSerializer.deserialize(BooleanSer
[jira] [Updated] (CASSANDRA-12699) Excessive use of "hidden" Linux page table memory
[ https://issues.apache.org/jira/browse/CASSANDRA-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Heiko Sommer updated CASSANDRA-12699: - Environment: Cassandra 2.2.7 on Red Hat 6.7, kernel 2.6.32-573.18.1.el6.x86_64, with Java 1.8.0_73. Probably others. (was: Cassandra 2.2.7 on Red Hat 6.7, with Java 1.8.0_73. Probably others. ) > Excessive use of "hidden" Linux page table memory > - > > Key: CASSANDRA-12699 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12699 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra 2.2.7 on Red Hat 6.7, kernel > 2.6.32-573.18.1.el6.x86_64, with Java 1.8.0_73. Probably others. >Reporter: Heiko Sommer > Attachments: PageTableMemoryExample.png, cassandra-env.sh, > cassandra.yaml, cassandraMemoryLog.sh, cassandraMemoryLog.sh > > > free > The cassandra JVM process uses many gigabytes of page table memory during > certain activities, which can lead to oom-killer action with > "java.lang.OutOfMemoryError: null" logs. > Page table memory is not reported by Linux tools such as "top" or "ps" and > therefore might be responsible also for other spurious Cassandra issues with > "memory eating" or crashes, e.g. CASSANDRA-8723. > The problem happens especially (or only?) during large compactions and > anticompactions. > Eventually all memory gets released, which means there is no real leak. Still > I suspect that the memory mappings that fill the page table could be released > much sooner, to keep the page table size at a small fraction of the total > Cassandra process memory. > How to reproduce: Record the memory use on a Cassandra node, including page > table memory, for example using the attached script cassandraMemoryLog.sh. > Even when there is no crash, the ramping up and sudden release of page table > memory is visible. > A stacked area plot for the memory on one of our crashed nodes is attached > (PageTableMemoryExample.png). The page table memory used by Cassandra is > shown in red ("VmPTE"). > (In the plot we also see that the sum of measured memory portions sometimes > exceeds the total memory. This is probably an issue of how RSS memory is > measured, perhaps including some buffers/cache memory that also counts toward > available memory. It does not invalidate the finding that page table memory > is growing to enormous sizes.) > Shortly before the crash, /proc/$PID/status reported > VmPeak: 6989760944 kB > VmSize: 5742400572 kB > VmLck: 4735036 kB > VmHWM: 8589972 kB > VmRSS: 7022036 kB > VmData: 10019732 kB > VmStk:92 kB > VmExe: 4 kB > VmLib: 17584 kB > VmPTE: 3965856 kB > VmSwap:0 kB > The files cassandra.yaml and cassandra-env.sh used on the node where the data > was taken are attached. > Please let me know if I should provide any other data or descriptions to help > with this ticket. > Known workarounds: Use more RAM, or limit the amount of Java heap memory. In > the above crash, MAX_HEAP_SIZE was not set, so that the default heap size for > 12 GB RAM was used (-Xms2976M, -Xmx2976M). > We have not tried yet if variations of heap vs. offheap config choices make a > difference. > Perhaps there are other workarounds using -XX+UseLargePages or related Linux > settings to reduce the size of the process page table? > I believe that we see these crashes more often than other projects because we > have a test system with not much RAM but with a lot of data (compressed ~3 TB > per node), while the CPUs are slow so that anti-/compactions overlap a lot. 
> Ideally Cassandra (native) code should be changed to release memory in > smaller chunks, so that page table size cannot cause an otherwise stable > system to crash. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
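Since the report above is based on sampling {{VmPTE}} (and the other Vm* figures) from {{/proc/$PID/status}}, here is a minimal, self-contained sketch of that kind of sampling. It is written in Java only for consistency with the rest of this thread and is not a copy of the attached cassandraMemoryLog.sh; the field names are the standard Linux ones quoted in the description:
{code}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class ProcStatusSampler
{
    // Usage: java ProcStatusSampler <pid>
    public static void main(String[] args) throws Exception
    {
        String pid = args[0];
        List<String> status = Files.readAllLines(Paths.get("/proc", pid, "status"));
        for (String line : status)
        {
            // VmPTE is the page-table size that top/ps do not show; VmRSS/VmSize are the usual figures.
            if (line.startsWith("VmPTE") || line.startsWith("VmRSS") || line.startsWith("VmSize"))
                System.out.println(line);
        }
    }
}
{code}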
[jira] [Assigned] (CASSANDRA-12443) Remove alter type support
[ https://issues.apache.org/jira/browse/CASSANDRA-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Yeksigian reassigned CASSANDRA-12443: -- Assignee: Carl Yeksigian > Remove alter type support > - > > Key: CASSANDRA-12443 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12443 > Project: Cassandra > Issue Type: Improvement >Reporter: Carl Yeksigian >Assignee: Carl Yeksigian > Fix For: 4.x > > > Currently, we allow altering of types. However, because we no longer store > the length for all types anymore, switching from a fixed-width to > variable-width type causes issues. commitlog playback breaking startup, > queries currently in flight getting back bad results, and special casing > required to handle the changes. In addition, this would solve > CASSANDRA-10309, as there is no possibility of the types changing while an > SSTableReader is open. > For fixed-length, compatible types, the alter also doesn't add much over a > cast, so users could use that in order to retrieve the altered type. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12699) Excessive use of "hidden" Linux page table memory
[ https://issues.apache.org/jira/browse/CASSANDRA-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526378#comment-15526378 ] Heiko Sommer commented on CASSANDRA-12699: -- Now I realize that there is still the {{disk_access_mode}} option available, even though it does not appear in my default version of cassandra.yaml. I will try out sequential read/write instead of mmap for accessing sstable files, just to see if this fixes the PTE or other memory issues. Please advise me if this is not a straightforward change. > Excessive use of "hidden" Linux page table memory > - > > Key: CASSANDRA-12699 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12699 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Cassandra 2.2.7 on Red Hat 6.7, with Java 1.8.0_73. > Probably others. >Reporter: Heiko Sommer > Attachments: PageTableMemoryExample.png, cassandra-env.sh, > cassandra.yaml, cassandraMemoryLog.sh, cassandraMemoryLog.sh > > > free > The cassandra JVM process uses many gigabytes of page table memory during > certain activities, which can lead to oom-killer action with > "java.lang.OutOfMemoryError: null" logs. > Page table memory is not reported by Linux tools such as "top" or "ps" and > therefore might be responsible also for other spurious Cassandra issues with > "memory eating" or crashes, e.g. CASSANDRA-8723. > The problem happens especially (or only?) during large compactions and > anticompactions. > Eventually all memory gets released, which means there is no real leak. Still > I suspect that the memory mappings that fill the page table could be released > much sooner, to keep the page table size at a small fraction of the total > Cassandra process memory. > How to reproduce: Record the memory use on a Cassandra node, including page > table memory, for example using the attached script cassandraMemoryLog.sh. > Even when there is no crash, the ramping up and sudden release of page table > memory is visible. > A stacked area plot for the memory on one of our crashed nodes is attached > (PageTableMemoryExample.png). The page table memory used by Cassandra is > shown in red ("VmPTE"). > (In the plot we also see that the sum of measured memory portions sometimes > exceeds the total memory. This is probably an issue of how RSS memory is > measured, perhaps including some buffers/cache memory that also counts toward > available memory. It does not invalidate the finding that page table memory > is growing to enormous sizes.) > Shortly before the crash, /proc/$PID/status reported > VmPeak: 6989760944 kB > VmSize: 5742400572 kB > VmLck: 4735036 kB > VmHWM: 8589972 kB > VmRSS: 7022036 kB > VmData: 10019732 kB > VmStk:92 kB > VmExe: 4 kB > VmLib: 17584 kB > VmPTE: 3965856 kB > VmSwap:0 kB > The files cassandra.yaml and cassandra-env.sh used on the node where the data > was taken are attached. > Please let me know if I should provide any other data or descriptions to help > with this ticket. > Known workarounds: Use more RAM, or limit the amount of Java heap memory. In > the above crash, MAX_HEAP_SIZE was not set, so that the default heap size for > 12 GB RAM was used (-Xms2976M, -Xmx2976M). > We have not tried yet if variations of heap vs. offheap config choices make a > difference. > Perhaps there are other workarounds using -XX+UseLargePages or related Linux > settings to reduce the size of the process page table? 
> I believe that we see these crashes more often than other projects because we > have a test system with not much RAM but with a lot of data (compressed ~3 TB > per node), while the CPUs are slow so that anti-/compactions overlap a lot. > Ideally Cassandra (native) code should be changed to release memory in > smaller chunks, so that page table size cannot cause an otherwise stable > system to crash. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10540) RangeAwareCompaction
[ https://issues.apache.org/jira/browse/CASSANDRA-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526390#comment-15526390 ] Carl Yeksigian commented on CASSANDRA-10540: I'm +1 on the code here; I'm just waiting on some more testing from [~philipthompson]. Thanks for the ping, [~jjirsa]. > RangeAwareCompaction > > > Key: CASSANDRA-10540 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10540 > Project: Cassandra > Issue Type: New Feature >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson > Labels: compaction, lcs, vnodes > Fix For: 3.x > > > Broken out from CASSANDRA-6696, we should split sstables based on ranges > during compaction. > Requirements: > * don't create tiny sstables - keep them bunched together until a single vnode > is big enough (configurable how big that is) > * make it possible to run existing compaction strategies on the per-range > sstables > We should probably add a global compaction strategy parameter that states > whether this should be enabled or not. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12401) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.multi_list_set_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526253#comment-15526253 ] Philip Thompson commented on CASSANDRA-12401: - I would appreciate if you did, thank you. > dtest failure in > upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.multi_list_set_test > > > Key: CASSANDRA-12401 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12401 > Project: Cassandra > Issue Type: Bug >Reporter: Sean McCarthy >Assignee: Benjamin Lerer > Labels: dtest > > example failure: > http://cassci.datastax.com/job/trunk_dtest_upgrade/17/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/multi_list_set_test > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line > 2289, in multi_list_set_test > assert_one(cursor, "SELECT l1, l2 FROM test WHERE k = 0", [[1, 24, 3], > [4, 42, 6]]) > File "/home/automaton/cassandra-dtest/assertions.py", line 124, in > assert_one > assert list_res == [expected], "Expected {} from {}, but got > {}".format([expected], query, list_res) > "Expected [[[1, 24, 3], [4, 42, 6]]] from SELECT l1, l2 FROM test WHERE k = > 0, but got [[[1, 24, 3, 1, 2, 3], [4, 42, 6, 4, 5, 6]]] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12401) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.multi_list_set_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526237#comment-15526237 ] Benjamin Lerer commented on CASSANDRA-12401: Do you want me to remove the {{known_failure}} annotations or will you do it? > dtest failure in > upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.multi_list_set_test > > > Key: CASSANDRA-12401 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12401 > Project: Cassandra > Issue Type: Bug >Reporter: Sean McCarthy >Assignee: Benjamin Lerer > Labels: dtest > > example failure: > http://cassci.datastax.com/job/trunk_dtest_upgrade/17/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/multi_list_set_test > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line > 2289, in multi_list_set_test > assert_one(cursor, "SELECT l1, l2 FROM test WHERE k = 0", [[1, 24, 3], > [4, 42, 6]]) > File "/home/automaton/cassandra-dtest/assertions.py", line 124, in > assert_one > assert list_res == [expected], "Expected {} from {}, but got > {}".format([expected], query, list_res) > "Expected [[[1, 24, 3], [4, 42, 6]]] from SELECT l1, l2 FROM test WHERE k = > 0, but got [[[1, 24, 3, 1, 2, 3], [4, 42, 6, 4, 5, 6]]] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12401) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.multi_list_set_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526211#comment-15526211 ] Philip Thompson commented on CASSANDRA-12401: - I suppose so > dtest failure in > upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x.multi_list_set_test > > > Key: CASSANDRA-12401 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12401 > Project: Cassandra > Issue Type: Bug >Reporter: Sean McCarthy >Assignee: Benjamin Lerer > Labels: dtest > > example failure: > http://cassci.datastax.com/job/trunk_dtest_upgrade/17/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_0_x_To_indev_3_x/multi_list_set_test > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line > 2289, in multi_list_set_test > assert_one(cursor, "SELECT l1, l2 FROM test WHERE k = 0", [[1, 24, 3], > [4, 42, 6]]) > File "/home/automaton/cassandra-dtest/assertions.py", line 124, in > assert_one > assert list_res == [expected], "Expected {} from {}, but got > {}".format([expected], query, list_res) > "Expected [[[1, 24, 3], [4, 42, 6]]] from SELECT l1, l2 FROM test WHERE k = > 0, but got [[[1, 24, 3, 1, 2, 3], [4, 42, 6, 4, 5, 6]]] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12620) Resurrected empty rows on update to 3.x
[ https://issues.apache.org/jira/browse/CASSANDRA-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526192#comment-15526192 ] Michael Guissine commented on CASSANDRA-12620: -- Thank you [~blerer]. First of all, we had a 3-node DSE cluster. The upgrade procedure was: 1. Disable OpsCenter repair service and Spark job-server and workers 2. On the node being upgraded a. Drain the node using nodetool b. Stop DSE c. Upgrade to 3.0 d. Start DSE e. upgrade SSTables using nodetool 3. Repeat for all nodes in the cluster Note that upgradesstables got stuck a few times and we had to re-run it. More than one table was affected, and many rows in those tables. > Resurrected empty rows on update to 3.x > --- > > Key: CASSANDRA-12620 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12620 > Project: Cassandra > Issue Type: Bug >Reporter: Collin Sauve >Assignee: Benjamin Lerer > > We had the below table on C* 2.x (dse 4.8.4, we assume was 2.1.15.1423 > according to documentation), and were entering TTLs at write-time using the > DataStax C# Driver (using the POCO mapper). > Upon upgrade to 3.0.8.1293 (DSE 5.0.2), we are seeing a lot of rows that: > * should have been TTL'd > * have no non-primary-key column data > {code} > CREATE TABLE applicationservices.aggregate_bucket_event_v3 ( > bucket_type int, > bucket_id text, > date timestamp, > aggregate_id text, > event_type int, > event_id text, > entities list>>, > identity_sid text, > PRIMARY KEY ((bucket_type, bucket_id), date, aggregate_id, event_type, > event_id) > ) WITH CLUSTERING ORDER BY (date DESC, aggregate_id ASC, event_type ASC, > event_id ASC) > AND bloom_filter_fp_chance = 0.1 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > {code} > {code} > { > "partition" : { > "key" : [ "0", "26492" ], > "position" : 54397932 > }, > "rows" : [ > { > "type" : "row", > "position" : 54397961, > "clustering" : [ "2016-09-07 23:33Z", "3651664", "0", > "773665449947099136" ], > "liveness_info" : { "tstamp" : "2016-09-07T23:34:09.758Z", "ttl" : > 172741, "expires_at" : "2016-09-09T23:33:10Z", "expired" : false }, > "cells" : [ > { "name" : "identity_sid", "value" : "p_tw_zahidana" }, > { "name" : "entities", "deletion_info" : { "marked_deleted" : > "2016-09-07T23:34:09.757999Z", "local_delete_time" : "2016-09-07T23:34:09Z" } > }, > { "name" : "entities", "path" : [ > "936e17e1-7553-11e6-9b92-29a33b5827c3" ], "value" : > "0:https\\://www.youtube.com/watch?v=pwAJAssv6As" }, > { "name" : "entities", "path" : [ > "936e17e2-7553-11e6-9b92-29a33b5827c3" ], "value" : "2:youtube" } > ] > }, > { > "type" : "row", >}, > { > "type" : "row", > "position" : 54397177, > "clustering" : [ "2016-08-17 10:00Z", "6387376", "0", > "765850666296225792" ], > "liveness_info" : { "tstamp" : "2016-08-17T11:26:15.917001Z" }, > "cells" : [ ] > }, > { > "type" : "row", > "position" : 54397227, > "clustering" : [ "2016-08-17 07:00Z", "6387376", "0", > "765805367347601409" ], > "liveness_info" : { "tstamp" : 
"2016-08-17T08:11:17.587Z" }, > "cells" : [ ] > }, > { > "type" : "row", > "position" : 54397276, > "clustering" : [ "2016-08-17 04:00Z", "6387376", "0", > "765760069858365441" ], > "liveness_info" : { "tstamp" : "2016-08-17T05:58:11.228Z" }, > "cells" : [ ] > }, > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11138) cassandra-stress tool - clustering key values not distributed
[ https://issues.apache.org/jira/browse/CASSANDRA-11138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526118#comment-15526118 ] Alan Boudreault commented on CASSANDRA-11138: - I confirm that the bug I mentioned in CASSANDRA-12490 is fixed with this patch. > cassandra-stress tool - clustering key values not distributed > - > > Key: CASSANDRA-11138 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11138 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: Cassandra 2.2.4, Centos 6.5, Java 8 >Reporter: Ralf Steppacher > Labels: stress > Attachments: 11138-trunk.patch > > > I am trying to get the stress tool to generate random values for three > clustering keys. I am trying to simulate collecting events per user id (text, > partition key). Events have a session type (text), event type (text), and > creation time (timestamp) (clustering keys, in that order). For testing > purposes I ended up with the following column spec: > {noformat} > columnspec: > - name: created_at > cluster: uniform(10..10) > - name: event_type > size: uniform(5..10) > population: uniform(1..30) > cluster: uniform(1..30) > - name: session_type > size: fixed(5) > population: uniform(1..4) > cluster: uniform(1..4) > - name: user_id > size: fixed(15) > population: uniform(1..100) > - name: message > size: uniform(10..100) > population: uniform(1..100B) > {noformat} > My expectation was that this would lead to anywhere between 10 and 1200 rows > to be created per partition key. But it seems that exactly 10 rows are being > created, with the {{created_at}} timestamp being the only variable that is > assigned variable values (per partition key). The {{session_type}} and > {{event_type}} variables are assigned fixed values. This is even the case if > I set the cluster distribution to uniform(30..30) and uniform(4..4) > respectively. With this setting I expected 1200 rows per partition key to be > created, as announced when running the stress tool, but it is still 10. > {noformat} > [rsteppac@centos bin]$ ./cassandra-stress user > profile=../batch_too_large.yaml ops\(insert=1\) -log level=verbose > file=~/centos_eventy_patient_session_event_timestamp_insert_only.log -node > 10.211.55.8 > … > Created schema. Sleeping 1s for propagation. > Generating batches with [1..1] partitions and [1..1] rows (of [1200..1200] > total rows in the partitions) > Improvement over 4 threadCount: 19% > ... 
> {noformat} > Sample of generated data: > {noformat} > cqlsh> select user_id, event_type, session_type, created_at from > stresscql.batch_too_large LIMIT 30 ; > user_id | event_type | session_type | created_at > -+--+--+-- > %\x7f\x03/.d29 08:14:11+ > %\x7f\x03/.d29 04:04:56+ > %\x7f\x03/.d29 00:39:23+ > %\x7f\x03/.d29 19:56:30+ > %\x7f\x03/.d29 20:46:26+ > %\x7f\x03/.d29 03:27:17+ > %\x7f\x03/.d29 23:30:34+ > %\x7f\x03/.d29 02:41:28+ > %\x7f\x03/.d29 07:23:48+ > %\x7f\x03/.d29 23:23:04+ > N!\x0eUA7^r7d\x06J 17:48:51+ > N!\x0eUA7^r7d\x06J 06:21:13+ > N!\x0eUA7^r7d\x06J 03:34:41+ > N!\x0eUA7^r7d\x06J 05:26:21+ > N!\x0eUA7^r7d\x06J 01:31:24+ > N!\x0eUA7^r7d\x06J 14:22:43+ > N!\x0eUA7^r7d\x06J 14:54:29+ > N!\x0eUA7^r7d\x06J 13:31:54+ > N!\x0eUA7^r7d\x06J 06:38:40+ > N!\x0eUA7^r7d\x06J 21:16:47+ > oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB |E}P^k | 2014-11-23 > 17:05:45+ > oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB |E}P^k | 2012-02-23 > 23:20:54+ > oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB |E}P^k | 2012-02-19 > 12:05:15+ > oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB |E}P^k | 2005-10-17 > 04:22:45+ > oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB |E}P^k | 2003-02-24 > 19:45:06+ > oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB |E}P^k | 1996-12-18 > 06:18:31+ > oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB |E}P^k | 1991-06-10 > 22:07:45+ > oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB |E}P^k | 1983-05-05 > 12:29:09+ > oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB |E}P^k | 1972-04-17 > 21:24:52+ > oy\x1c0077H"i\x07\x13_%\x06 || \nz@Qj\x1cB |E}P^k | 1971-05-09 > 23:00:02+ > (30 rows) > cqlsh> > {noformat} > If I remove the {{created_at}} clustering key, then the other two clustering > keys are being assigned variable values per partition key. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
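The "anywhere between 10 and 1200 rows" expectation in the report above is simply the product of the per-column {{cluster}} distributions in the columnspec. A quick sketch of that arithmetic (plain Java, no stress-tool code involved; the numbers are copied from the quoted profile):
{code}
// Rows per partition in a stress profile are the product of the per-clustering-column
// "cluster" draws. The values below come from the columnspec quoted above.
public class StressRowBounds {
    public static void main(String[] args) {
        int createdAt = 10;                          // cluster: uniform(10..10) -> always 10
        int eventTypeMin = 1, eventTypeMax = 30;     // cluster: uniform(1..30)
        int sessionTypeMin = 1, sessionTypeMax = 4;  // cluster: uniform(1..4)

        long minRows = (long) createdAt * eventTypeMin * sessionTypeMin;   // 10
        long maxRows = (long) createdAt * eventTypeMax * sessionTypeMax;   // 1200

        // The tool's "[1200..1200] total rows" banner corresponds to the run where the
        // reporter pinned the distributions to uniform(30..30) and uniform(4..4).
        System.out.println("expected rows per partition: " + minRows + " .. " + maxRows);
    }
}
{code}
The reported bug is that only {{created_at}} actually varies, so the observed count stays at 10 regardless of these bounds.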
[jira] [Updated] (CASSANDRA-12697) cdc column addition still breaks schema migration tasks
[ https://issues.apache.org/jira/browse/CASSANDRA-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-12697: -- Status: Ready to Commit (was: Patch Available) > cdc column addition still breaks schema migration tasks > --- > > Key: CASSANDRA-12697 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12697 > Project: Cassandra > Issue Type: Bug > Components: Distributed Metadata >Reporter: Sylvain Lebresne >Assignee: Sylvain Lebresne > Fix For: 3.x > > > This is a follow-up of CASSANDRA-12236, which didn't fully fix the > problem. Namely, the fix from CASSANDRA-12236 skipped the {{cdc}} column in > {{SchemaKeyspace.addTableParamsToRowBuilder()}}, but that method isn't used > by the schema "migration tasks" ({{MigrationRequestVerbHandler}}), which instead > directly sends the content of the full schema table it reads from disk. So we > still end up with an RTE as in CASSANDRA-12236. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
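To make the two code paths concrete: the CASSANDRA-12236 fix filters {{cdc}} only where schema rows are rebuilt column by column, while the migration task forwards whatever is on disk. The sketch below is a deliberately simplified, hypothetical model of that situation — none of these classes or maps are real Cassandra internals — meant only to show why filtering on one path leaves the other one broken.
{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration, not Cassandra's schema code: two paths produce the
// same logical schema row, but only one of them strips the column an older peer
// cannot parse.
public class CdcColumnFilterSketch {

    // Path 1 (covered by the CASSANDRA-12236 fix): rows built column by column
    // can simply skip "cdc" when the receiving node predates the column.
    static Map<String, Object> buildParamsRow(boolean peerUnderstandsCdc) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("comment", "");
        row.put("gc_grace_seconds", 864000);
        if (peerUnderstandsCdc) {
            row.put("cdc", false);
        }
        return row;
    }

    // Path 2 (the migration task): the full on-disk schema row is forwarded as-is,
    // so "cdc" leaks through and the older peer hits the same error as before.
    static Map<String, Object> readFullSchemaRowFromDisk() {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("comment", "");
        row.put("gc_grace_seconds", 864000);
        row.put("cdc", false); // present on disk on nodes that know about cdc
        return row;
    }

    public static void main(String[] args) {
        boolean peerUnderstandsCdc = false; // e.g. a 3.0.x node in a mixed-version cluster
        System.out.println("row-builder path:    " + buildParamsRow(peerUnderstandsCdc));
        System.out.println("migration-task path: " + readFullSchemaRowFromDisk());
        // The second row still contains "cdc", reproducing the failure described above.
    }
}
{code}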
[jira] [Commented] (CASSANDRA-12697) cdc column addition still breaks schema migration tasks
[ https://issues.apache.org/jira/browse/CASSANDRA-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15526099#comment-15526099 ] Aleksey Yeschenko commented on CASSANDRA-12697: --- LGTM > cdc column addition still breaks schema migration tasks > --- > > Key: CASSANDRA-12697 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12697 > Project: Cassandra > Issue Type: Bug > Components: Distributed Metadata >Reporter: Sylvain Lebresne >Assignee: Sylvain Lebresne > Fix For: 3.x > > > This is a follow-up of CASSANDRA-12236, which didn't fully fix the > problem. Namely, the fix from CASSANDRA-12236 skipped the {{cdc}} column in > {{SchemaKeyspace.addTableParamsToRowBuilder()}}, but that method isn't used > by the schema "migration tasks" ({{MigrationRequestVerbHandler}}), which instead > directly sends the content of the full schema table it reads from disk. So we > still end up with an RTE as in CASSANDRA-12236. -- This message was sent by Atlassian JIRA (v6.3.4#6332)