[jira] [Created] (CASSANDRA-11656) sstabledump has inconsistency in deletion_time printout
Wei Deng created CASSANDRA-11656:
------------------------------------

             Summary: sstabledump has inconsistency in deletion_time printout
                 Key: CASSANDRA-11656
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11656
             Project: Cassandra
          Issue Type: Bug
          Components: Tools
            Reporter: Wei Deng

See the following output:

{noformat}
[
  {
    "partition" : {
      "key" : [ "1" ],
      "position" : 0
    },
    "rows" : [
      {
        "type" : "row",
        "position" : 18,
        "clustering" : [ "c1" ],
        "liveness_info" : { "tstamp" : 1461646542601774 },
        "cells" : [
          { "name" : "val0_int", "deletion_time" : 1461647421, "tstamp" : 1461647421344759 },
          { "name" : "val1_set_of_int", "path" : [ "1" ], "deletion_time" : 1461647320, "tstamp" : 1461647320160261 },
          { "name" : "val1_set_of_int", "path" : [ "10" ], "value" : "", "tstamp" : 1461647295880444 },
          { "name" : "val1_set_of_int", "path" : [ "11" ], "value" : "", "tstamp" : 1461647295880444 },
          { "name" : "val1_set_of_int", "path" : [ "12" ], "value" : "", "tstamp" : 1461647295880444 }
        ]
      },
      {
        "type" : "row",
        "position" : 85,
        "clustering" : [ "c2" ],
        "deletion_info" : { "deletion_time" : 1461647588089843, "tstamp" : 1461647588 },
        "cells" : [ ]
      }
    ]
  }
]
{noformat}

To avoid confusion, we need to print the DeletionTime object consistently. By definition, markedForDeleteAt is in microseconds since epoch and marks the time at which the "delete" mutation happened; localDeletionTime is in seconds since epoch and allows GC to collect the tombstone once the current epoch second exceeds localDeletionTime + gc_grace_seconds. Note how the cell-level entries above pair a seconds-based "deletion_time" with a microseconds-based "tstamp", while the row-level deletion_info pairs them the other way around. I'm fine with using "tstamp" for markedForDeleteAt, since markedForDeleteAt does represent this delete mutation's timestamp, but we need to be consistent everywhere.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
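For reference, the units of the two fields and the GC rule described above can be sketched as follows (illustrative Python, not Cassandra's actual DeletionTime class; the default gc_grace_seconds of 864000 is assumed):

```python
import time

GC_GRACE_SECONDS = 864000  # Cassandra's default gc_grace_seconds (10 days)

def make_deletion_time(now_micros=None):
    """Build the two fields of a DeletionTime, mirroring their units.

    markedForDeleteAt: microseconds since epoch (the delete's write timestamp)
    localDeletionTime: seconds since epoch (drives tombstone GC)
    """
    if now_micros is None:
        now_micros = int(time.time() * 1_000_000)
    return {
        "markedForDeleteAt": now_micros,                # e.g. 1461647588089843
        "localDeletionTime": now_micros // 1_000_000,   # e.g. 1461647588
    }

def is_purgeable(deletion_time, now_seconds, gc_grace=GC_GRACE_SECONDS):
    """A tombstone may be collected once gc_grace_seconds have elapsed."""
    return now_seconds > deletion_time["localDeletionTime"] + gc_grace

dt = make_deletion_time(now_micros=1461647588089843)
print(dt["localDeletionTime"])                # 1461647588
print(is_purgeable(dt, 1461647588 + 1))       # False: still within gc_grace
print(is_purgeable(dt, 1461647588 + 864001))  # True: grace period elapsed
```

The microsecond value is this mutation's write timestamp for reconciliation; only the seconds value participates in the gc_grace comparison.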
[jira] [Commented] (CASSANDRA-11646) SSTableWriter output discrepancy
[ https://issues.apache.org/jira/browse/CASSANDRA-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15257575#comment-15257575 ]

Stefania commented on CASSANDRA-11646:
--------------------------------------

I've started looking at this, so I'll carry on. The problem is that since CASSANDRA-10624 we are using {{TypeCodec}} to serialize rather than {{TypeSerializer}}.

> SSTableWriter output discrepancy
> --------------------------------
>
>                 Key: CASSANDRA-11646
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11646
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: T Jake Luciani
>            Assignee: Stefania
>             Fix For: 3.6
>
> Since CASSANDRA-10624 there is a non-trivial difference in the size of the output in CQLSSTableWriter.
> I've written the following code:
> {code}
> String KS = "cql_keyspace";
> String TABLE = "table1";
> File tempdir = Files.createTempDir();
> File dataDir = new File(tempdir.getAbsolutePath() + File.separator + KS + File.separator + TABLE);
> assert dataDir.mkdirs();
> String schema = "CREATE TABLE cql_keyspace.table1 ("
>               + "  k int PRIMARY KEY,"
>               + "  v1 text,"
>               + "  v2 int"
>               + ");"; // with compression = {};
> String insert = "INSERT INTO cql_keyspace.table1 (k, v1, v2) VALUES (?, ?, ?)";
> CQLSSTableWriter writer = CQLSSTableWriter.builder()
>                                           .sorted()
>                                           .inDirectory(dataDir)
>                                           .forTable(schema)
>                                           .using(insert).build();
> for (int i = 0; i < 1000; i++)
>     writer.addRow(i, "test1", 24);
> writer.close();
> {code}
> Pre CASSANDRA-10624 the data file is ~63MB. Post it's ~69MB.
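The regression itself still needed investigation at this point; purely as an illustrative sketch (not the driver's actual TypeCodec or TypeSerializer code), the following shows how switching to a serialization path that adds per-value framing can inflate the same data:

```python
import struct

def serialize_bare(value: int) -> bytes:
    # Serializer-style path: just the 4-byte big-endian int payload
    return struct.pack(">i", value)

def serialize_framed(value: int) -> bytes:
    # Codec-style path that also writes a 4-byte length prefix per value
    payload = struct.pack(">i", value)
    return struct.pack(">i", len(payload)) + payload

rows = 1000
bare = sum(len(serialize_bare(24)) for _ in range(rows))
framed = sum(len(serialize_framed(24)) for _ in range(rows))
print(bare, framed)  # 4000 8000: identical data, double the bytes
```

Any such constant per-value overhead compounds linearly with row count, which is why a serialization-path change shows up as a multi-megabyte difference in the data file.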
[jira] [Updated] (CASSANDRA-11646) SSTableWriter output discrepancy
[ https://issues.apache.org/jira/browse/CASSANDRA-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefania updated CASSANDRA-11646:
---------------------------------
    Reviewer: Alex Petrov
[jira] [Assigned] (CASSANDRA-11646) SSTableWriter output discrepancy
[ https://issues.apache.org/jira/browse/CASSANDRA-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefania reassigned CASSANDRA-11646:
------------------------------------
    Assignee: Stefania  (was: Alex Petrov)
[jira] [Updated] (CASSANDRA-11655) sstabledump doesn't print out tombstone information for deleted collection column
[ https://issues.apache.org/jira/browse/CASSANDRA-11655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei Deng updated CASSANDRA-11655:
---------------------------------
    Description:

Pretty trivial to reproduce.

{noformat}
echo "CREATE KEYSPACE IF NOT EXISTS testks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'};" | cqlsh
echo "CREATE TABLE IF NOT EXISTS testks.testcf ( k int, c text, val0_int int, val1_set_of_int set<int>, PRIMARY KEY (k, c) );" | cqlsh
echo "INSERT INTO testks.testcf (k, c, val0_int, val1_set_of_int) VALUES (1, 'c1', 100, {1, 2, 3, 4, 5});" | cqlsh
echo "delete val1_set_of_int from testks.testcf where k=1 and c='c1';" | cqlsh
echo "select * from testks.testcf;" | cqlsh
nodetool flush testks testcf
{noformat}

Now if you run sstabledump (even after taking the [patch|https://github.com/yukim/cassandra/tree/11654-3.0] for CASSANDRA-11654) against the newly generated SSTable like the following:

{noformat}
~/cassandra-trunk/tools/bin/sstabledump ma-1-big-Data.db
[
  {
    "partition" : {
      "key" : [ "1" ],
      "position" : 0
    },
    "rows" : [
      {
        "type" : "row",
        "position" : 18,
        "clustering" : [ "c1" ],
        "liveness_info" : { "tstamp" : 1461645231352208 },
        "cells" : [
          { "name" : "val0_int", "value" : "100" }
        ]
      }
    ]
  }
]
{noformat}

you will see that the collection-level Deletion Info is nowhere to be found, so you will not be able to know "markedForDeleteAt" or "localDeletionTime" for this collection tombstone.

  was: (the same description, but with the CREATE TABLE statement missing the {{val1_set_of_int}} column)

> sstabledump doesn't print out tombstone information for deleted collection column
> ---------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-11655
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11655
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>            Reporter: Wei Deng
[jira] [Created] (CASSANDRA-11655) sstabledump doesn't print out tombstone information for deleted collection column
Wei Deng created CASSANDRA-11655:
------------------------------------

             Summary: sstabledump doesn't print out tombstone information for deleted collection column
                 Key: CASSANDRA-11655
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11655
             Project: Cassandra
          Issue Type: Bug
          Components: Tools
            Reporter: Wei Deng

Pretty trivial to reproduce.

{noformat}
echo "CREATE KEYSPACE IF NOT EXISTS testks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'};" | cqlsh
echo "CREATE TABLE IF NOT EXISTS testks.testcf ( k int, c text, val0_int int, PRIMARY KEY (k, c) );" | cqlsh
echo "INSERT INTO testks.testcf (k, c, val0_int, val1_set_of_int) VALUES (1, 'c1', 100, {1, 2, 3, 4, 5});" | cqlsh
echo "delete val1_set_of_int from testks.testcf where k=1 and c='c1';" | cqlsh
echo "select * from testks.testcf;" | cqlsh
nodetool flush testks testcf
{noformat}

Now if you run sstabledump (even after taking the [patch|https://github.com/yukim/cassandra/tree/11654-3.0] for CASSANDRA-11654) against the newly generated SSTable like the following:

{noformat}
~/cassandra-trunk/tools/bin/sstabledump ma-1-big-Data.db
[
  {
    "partition" : {
      "key" : [ "1" ],
      "position" : 0
    },
    "rows" : [
      {
        "type" : "row",
        "position" : 18,
        "clustering" : [ "c1" ],
        "liveness_info" : { "tstamp" : 1461645231352208 },
        "cells" : [
          { "name" : "val0_int", "value" : "100" }
        ]
      }
    ]
  }
]
{noformat}

You will see that the collection-level Deletion Info is nowhere to be found, so you will not be able to know "markedForDeleteAt" or "localDeletionTime" for this collection tombstone.
[jira] [Commented] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15257534#comment-15257534 ]

Wei Deng commented on CASSANDRA-11654:
--------------------------------------

+1, looks good with Yuki's patch.

> sstabledump is not able to properly print out SSTable that may contain
> historical (but "shadowed") row tombstone
> ----------------------------------------------------------------------
>
>                 Key: CASSANDRA-11654
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11654
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>            Reporter: Wei Deng
>            Assignee: Yuki Morishita
>              Labels: Tools
>             Fix For: 3.0.x, 3.x
>
> It is pretty trivial to reproduce. Here are the steps I used (on a single-node C* 3.x cluster):
> {noformat}
> echo "CREATE KEYSPACE IF NOT EXISTS testks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'};" | cqlsh
> echo "CREATE TABLE IF NOT EXISTS testks.testcf ( k int, c text, val0_int int, PRIMARY KEY (k, c) );" | cqlsh
> echo "INSERT INTO testks.testcf (k, c, val0_int) VALUES (1, 'c1', 100);" | cqlsh
> echo "DELETE FROM testks.testcf where k=1 and c='c1';" | cqlsh
> echo "INSERT INTO testks.testcf (k, c, val0_int) VALUES (1, 'c1', 100);" | cqlsh
> nodetool flush testks testcf
> echo "SELECT * FROM testks.testcf;" | cqlsh
> {noformat}
> The last step above will confirm that there is one live row in the testks.testcf table. However, if you now go to the actual SSTable file directory and run sstabledump like the following, you will see the row is still marked as deleted and no row content is shown:
> {noformat}
> $ sstabledump ma-1-big-Data.db
> [
>   {
>     "partition" : {
>       "key" : [ "1" ],
>       "position" : 0
>     },
>     "rows" : [
>       {
>         "type" : "row",
>         "position" : 18,
>         "clustering" : [ "c1" ],
>         "liveness_info" : { "tstamp" : 1461633248542342 },
>         "deletion_info" : { "deletion_time" : 1461633248212499, "tstamp" : 1461633248 }
>       }
>     ]
>   }
> ]
> {noformat}
> This is reproduced in both the latest 3.0.5 and 3.6-snapshot (i.e. trunk as of Apr 25, 2016).
>
> It looks like only the row tombstone affects sstabledump. If you generate cell tombstones, even if you delete all non-PK & non-static columns in the row, as long as there is no explicit row delete (so the clustering is still considered alive), sstabledump will work just fine; see the following example steps:
> {noformat}
> echo "CREATE KEYSPACE IF NOT EXISTS testks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'};" | cqlsh
> echo "CREATE TABLE IF NOT EXISTS testks.testcf ( k int, c text, val0_int int, val1_int int, PRIMARY KEY (k, c) );" | cqlsh
> echo "INSERT INTO testks.testcf (k, c, val0_int, val1_int) VALUES (1, 'c1', 100, 200);" | cqlsh
> echo "DELETE val0_int, val1_int FROM testks.testcf where k=1 and c='c1';" | cqlsh
> echo "INSERT INTO testks.testcf (k, c, val0_int, val1_int) VALUES (1, 'c1', 300, 400);" | cqlsh
> nodetool flush testks testcf
> echo "select * from testks.testcf;" | cqlsh
> $ sstabledump ma-1-big-Data.db
> [
>   {
>     "partition" : {
>       "key" : [ "1" ],
>       "position" : 0
>     },
>     "rows" : [
>       {
>         "type" : "row",
>         "position" : 18,
>         "clustering" : [ "c1" ],
>         "liveness_info" : { "tstamp" : 1461634633566479 },
>         "cells" : [
>           { "name" : "val0_int", "value" : "300" },
>           { "name" : "val1_int", "value" : "400" }
>         ]
>       }
>     ]
>   }
> ]
> {noformat}
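The reason the row reads as live in CQL despite the deletion_info printed in the dump: reads reconcile by timestamp, and the second INSERT's liveness timestamp (1461633248542342) is greater than the row tombstone's markedForDeleteAt (1461633248212499). A minimal sketch of that comparison (illustrative, not Cassandra's actual reconciliation code):

```python
def row_is_live(liveness_tstamp, deletion_tstamp):
    """A row survives a row tombstone if it was (re)written afterwards:
    reads compare the liveness timestamp against markedForDeleteAt."""
    return liveness_tstamp > deletion_tstamp

# Timestamps from the dump above (microseconds since epoch):
liveness = 1461633248542342   # the second INSERT
deletion = 1461633248212499   # the DELETE it shadows
print(row_is_live(liveness, deletion))  # True: the row is visible in CQL
```

The flushed SSTable still records the shadowed tombstone, which is why sstabledump can legitimately show both fields for the same row; the bug was in how the tool printed that state.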
[jira] [Commented] (CASSANDRA-11631) cqlsh COPY FROM fails for null values with non-prepared statements
[ https://issues.apache.org/jira/browse/CASSANDRA-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15257524#comment-15257524 ]

Stefania commented on CASSANDRA-11631:
--------------------------------------

Two dtests failed on all non-trunk branches; fixed in the test code and restarted CI. Also fixed a pep8 compliance issue in the latest trunk patch (just an extra space, no need to review) and fixed two more dtests that were failing on trunk. If CI is good I will commit.

> cqlsh COPY FROM fails for null values with non-prepared statements
> ------------------------------------------------------------------
>
>                 Key: CASSANDRA-11631
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11631
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>            Reporter: Robert Stupp
>            Assignee: Stefania
>            Priority: Minor
>              Labels: cqlsh
>             Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> cqlsh's {{COPY FROM ... WITH PREPAREDSTATEMENTS = False}} fails if the row contains null values. The reason is that the {{','.join(r)}} in {{make_non_prepared_batch_statement}} doesn't handle {{None}}, which results in this error message:
> {code}
> Failed to import 1 rows: TypeError - sequence item 2: expected string, NoneType found, given up without retries
> {code}
> The attached patch should fix the problem.
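The failure and the shape of a fix can be reproduced outside cqlsh (an illustrative sketch only; the actual fix is the patch attached to the ticket):

```python
def make_values_clause_broken(row):
    # Fails when the row contains a None: str.join requires strings
    return ','.join(row)

def make_values_clause_fixed(row):
    # Substitute the CQL literal NULL for missing values before joining
    return ','.join('NULL' if v is None else v for v in row)

row = ['1', "'abc'", None]  # third column is a null value
try:
    make_values_clause_broken(row)
except TypeError as e:
    print(e)  # "sequence item 2: expected ... NoneType found" (wording varies by Python version)

print(make_values_clause_fixed(row))  # 1,'abc',NULL
```

Prepared statements avoid the problem because the driver binds None directly; only the string-building non-prepared path has to spell out NULL itself.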
[jira] [Commented] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15257472#comment-15257472 ]

Chris Lohfink commented on CASSANDRA-11654:
-------------------------------------------

+1. With tombstones you get a
{code}
"cells" : [ ]
{code}
but I think that's good.
[jira] [Issue Comment Deleted] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Lohfink updated CASSANDRA-11654:
--------------------------------------
    Comment: was deleted

(was: ah derp, ignore me :) yukims got it)
[jira] [Issue Comment Deleted] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Lohfink updated CASSANDRA-11654:
--------------------------------------
    Comment: was deleted

(was: reproducing your steps you get the following for the Data file (I disabled compaction, though, for ease of reading):
{code}
00 04 00 00 00 01 7F FF FF FF 80 00 00 00 00 00 00 00 34 00 02 63 31 0F 12 E0 92 7C FE E0 4D C1 82 00 08 00 00 00 64 01
{code}
{code}
PARTITION
---------
Partition key size:      00 04
Partition key:           00 00 00 01
local deletion time:     7F FF FF FF
marked for deletion at:  80 00 00 00 00 00 00 00

ROW @18
-------
Flags:     34 - HAS_TIMESTAMP, HAS_DELETION, HAS_ALL_COLUMNS
Size:      00 02
Key:       63 31 (c1)
Row Size:  0F
Prev:      12
timestamp (delta from minTimestamp in encoding): E0 92 7C FE - 9600254
deletion time: E0 4D C1 82 00

COLUMN
------
Flags:  08 hasValue, useRowTimestamp
Value:  00 00 00 64
I don't know what this is for: 01
{code}
So sstabledump is printing out what's in the sstable. What is it you're expecting to see?)
[jira] [Commented] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15257470#comment-15257470 ]

Chris Lohfink commented on CASSANDRA-11654:
-------------------------------------------

ah derp, ignore me :) yukims got it
[jira] [Commented] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257467#comment-15257467 ] Chris Lohfink commented on CASSANDRA-11654: --- Reproducing your steps, you get the following for the Data file (I disabled compaction though, for ease of reading):
{code}
00 04 00 00 00 01 7F FF FF FF 80 00 00 00 00 00 00 00 34 00 02 63 31 0F 12 E0 92 7C FE E0 4D C1 82 00 08 00 00 00 64 01
{code}
{code}
PARTITION
---
Partition key size: 00 04
Partition key: 00 00 00 01
local deletion time: 7F FF FF FF
marked for deletion at: 80 00 00 00 00 00 00 00

ROW @18
---
Flags: 34 - HAS_TIMESTAMP, HAS_DELETION, HAS_ALL_COLUMNS
Size: 00 02
Key: 63 31 (c1)
Row Size: 0F
Prev: 12
timestamp (delta from minTimestamp in encoding): E0 92 7C FE - 9600254
deletion time: E0 4D C1 82 00

COLUMN
-
Flags: 08 hasValue, useRowTimestamp
Value: 00 00 00 64
- I don't know what this is for: 01
{code}
So sstabledump is printing out what's in the SSTable. What is it you're expecting to see? > sstabledump is not able to properly print out SSTable that may contain > historical (but "shadowed") row tombstone -- This message was sent by Atlassian JIRA (v6.3.4#6332)
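The delta-encoded fields in the byte breakdown above use Cassandra 3.x's unsigned variable-length integer encoding: the number of leading 1-bits in the first byte gives the number of extra bytes, and the remaining bits of the first byte plus those extra bytes form the value. A minimal decoder sketch (the function name is mine, not from the Cassandra codebase), which reproduces the 9600254 timestamp delta decoded from `E0 92 7C FE`:

```python
def decode_unsigned_vint(data: bytes) -> int:
    first = data[0]
    # Count leading 1-bits of the first byte: that many extra bytes follow.
    extra = 0
    while extra < 8 and (first << extra) & 0x80:
        extra += 1
    # The remaining low bits of the first byte are the high-order value bits.
    value = first & (0xFF >> extra) if extra < 8 else 0
    for b in data[1:1 + extra]:
        value = (value << 8) | b
    return value

print(decode_unsigned_vint(bytes.fromhex("E0927CFE")))  # 9600254
```

Here `0xE0` has three leading 1-bits, so three extra bytes follow, and the value is `0x927CFE` = 9600254, matching the annotated timestamp delta.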
[jira] [Updated] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-11654: --- Labels: Tools (was: ) Assignee: Yuki Morishita Reviewer: Chris Lohfink Fix Version/s: 3.x 3.0.x Reproduced In: 3.0.5, 3.0.4, 3.6 (was: 3.0.4, 3.0.5, 3.6) Status: Patch Available (was: Open) > sstabledump is not able to properly print out SSTable that may contain > historical (but "shadowed") row tombstone -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257449#comment-15257449 ] Yuki Morishita commented on CASSANDRA-11654: I think we can just dump cells regardless of deletion. Patch here: https://github.com/yukim/cassandra/tree/11654-3.0
{code:javascript}
[
  {
    "partition" : {
      "key" : [ "1" ],
      "position" : 0
    },
    "rows" : [
      {
        "type" : "row",
        "position" : 18,
        "clustering" : [ "c1" ],
        "liveness_info" : { "tstamp" : 1461635395769314 },
        "deletion_info" : { "deletion_time" : 1461635395766379, "tstamp" : 1461635395 },
        "cells" : [
          { "name" : "val0_int", "value" : "100" }
        ]
      }
    ]
  }
]
{code}
Similar to what we get with {{sstabledump -d}}:
{code}
[1]@0 Row[info=[ts=1461635395769314] del=deletedAt=1461635395766379, localDeletion=1461635395 ]: c1 | [val0_int=100 ts=1461635395769314]
{code}
> sstabledump is not able to properly print out SSTable that may contain > historical (but "shadowed") row tombstone -- This message was sent by Atlassian JIRA (v6.3.4#6332)
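The patched output also makes it easy to see why the read path treats this row as live even though the tombstone is still on disk: the re-insert's liveness timestamp is a few milliseconds newer than the tombstone's markedForDeleteAt. A quick check in Python (variable names are mine):

```python
from datetime import datetime, timezone

# Values from the patched sstabledump output above.
liveness_tstamp_us = 1461635395769314   # row liveness timestamp (microseconds)
deletion_marked_us = 1461635395766379   # markedForDeleteAt (microseconds)

def us_to_utc(us):
    """Convert a microseconds-since-epoch timestamp to an aware UTC datetime."""
    return datetime.fromtimestamp(us / 1_000_000, tz=timezone.utc)

print(us_to_utc(liveness_tstamp_us).isoformat())
print(us_to_utc(deletion_marked_us).isoformat())

# The re-insert wins timestamp resolution against the tombstone,
# so reads resolve the row as live.
print(liveness_tstamp_us > deletion_marked_us)  # True
```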
[jira] [Commented] (CASSANDRA-11641) java.lang.IllegalArgumentException: Not enough bytes in system.log
[ https://issues.apache.org/jira/browse/CASSANDRA-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257428#comment-15257428 ] peng xiao commented on CASSANDRA-11641: --- Thanks, I will try to run nodetool scrub. > java.lang.IllegalArgumentException: Not enough bytes in system.log > -- > > Key: CASSANDRA-11641 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11641 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging > Environment: centos 6.5 cassandra2.1.13 >Reporter: peng xiao > Attachments: system.log > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Deng updated CASSANDRA-11654: - Reproduced In: 3.0.5, 3.0.4, 3.6 Since Version: 3.0.4 > sstabledump is not able to properly print out SSTable that may contain > historical (but "shadowed") row tombstone -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Deng updated CASSANDRA-11654: - Description: (edited; duplicate of the description quoted above)
[jira] [Updated] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Deng updated CASSANDRA-11654: - Description: (edited; duplicate of the description quoted above) > sstabledump is not able to properly print out SSTable that may contain > historical (but "shadowed") row tombstone
[jira] [Updated] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Deng updated CASSANDRA-11654: - Summary: sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") row tombstone (was: sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") tombstone) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") tombstone
[ https://issues.apache.org/jira/browse/CASSANDRA-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257394#comment-15257394 ] Wei Deng commented on CASSANDRA-11654: -- BTW, this issue doesn't exist with sstable2json from C* 2.1. However, because sstabledump is a complete rewrite, I don't think that information is quite relevant; it just confirms that this problem is real for C* 3.x. > sstabledump is not able to properly print out SSTable that may contain > historical (but "shadowed") tombstone -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11654) sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") tombstone
Wei Deng created CASSANDRA-11654: Summary: sstabledump is not able to properly print out SSTable that may contain historical (but "shadowed") tombstone Key: CASSANDRA-11654 URL: https://issues.apache.org/jira/browse/CASSANDRA-11654 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Wei Deng It is pretty trivial to reproduce. Here are the steps I used (on a single node C* 3.x cluster): {noformat} echo "CREATE KEYSPACE IF NOT EXISTS testks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'};" | cqlsh echo "CREATE TABLE IF NOT EXISTS testks.testcf ( k int, c text, val0_int int, PRIMARY KEY (k, c) );" | cqlsh echo "INSERT INTO testks.testcf (k, c, val0_int) VALUES (1, 'c1', 100);" | cqlsh echo "delete from testks.testcf where k=1 and c='c1';" | cqlsh echo "INSERT INTO testks.testcf (k, c, val0_int) VALUES (1, 'c1', 100);" | cqlsh nodetool flush testks testcf echo "select * from testks.testcf;" | cqlsh {noformat} The last step from above will confirm that there is one live row in the testks.testcf table. However, if you now go to the actual SSTable file directory and run sstabledump like the following, you will see the row is still marked as deleted and no row content is shown: {noformat} $ sstabledump ma-1-big-Data.db [ { "partition" : { "key" : [ "1" ], "position" : 0 }, "rows" : [ { "type" : "row", "position" : 18, "clustering" : [ "c1" ], "liveness_info" : { "tstamp" : 1461633248542342 }, "deletion_info" : { "deletion_time" : 1461633248212499, "tstamp" : 1461633248 } } ] } ] {noformat} This is reproduced in both latest 3.0.5 and 3.6-snapshot (i.e. trunk as Apr 25, 2016). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
svn commit: r13421 - in /release/cassandra: 2.1.14/ 2.2.6/ 3.3/ debian/dists/21x/ debian/dists/21x/main/binary-amd64/ debian/dists/21x/main/binary-i386/ debian/dists/21x/main/source/ debian/dists/22x/
Author: jake Date: Tue Apr 26 01:19:05 2016 New Revision: 13421 Log: 2.1.14 and 2.2.6 Added: release/cassandra/2.1.14/ release/cassandra/2.1.14/apache-cassandra-2.1.14-bin.tar.gz (with props) release/cassandra/2.1.14/apache-cassandra-2.1.14-bin.tar.gz.asc release/cassandra/2.1.14/apache-cassandra-2.1.14-bin.tar.gz.asc.md5 release/cassandra/2.1.14/apache-cassandra-2.1.14-bin.tar.gz.asc.sha1 release/cassandra/2.1.14/apache-cassandra-2.1.14-bin.tar.gz.md5 release/cassandra/2.1.14/apache-cassandra-2.1.14-bin.tar.gz.sha1 release/cassandra/2.1.14/apache-cassandra-2.1.14-src.tar.gz (with props) release/cassandra/2.1.14/apache-cassandra-2.1.14-src.tar.gz.asc release/cassandra/2.1.14/apache-cassandra-2.1.14-src.tar.gz.asc.md5 release/cassandra/2.1.14/apache-cassandra-2.1.14-src.tar.gz.asc.sha1 release/cassandra/2.1.14/apache-cassandra-2.1.14-src.tar.gz.md5 release/cassandra/2.1.14/apache-cassandra-2.1.14-src.tar.gz.sha1 release/cassandra/2.2.6/ release/cassandra/2.2.6/apache-cassandra-2.2.6-bin.tar.gz (with props) release/cassandra/2.2.6/apache-cassandra-2.2.6-bin.tar.gz.asc release/cassandra/2.2.6/apache-cassandra-2.2.6-bin.tar.gz.asc.md5 release/cassandra/2.2.6/apache-cassandra-2.2.6-bin.tar.gz.asc.sha1 release/cassandra/2.2.6/apache-cassandra-2.2.6-bin.tar.gz.md5 release/cassandra/2.2.6/apache-cassandra-2.2.6-bin.tar.gz.sha1 release/cassandra/2.2.6/apache-cassandra-2.2.6-src.tar.gz (with props) release/cassandra/2.2.6/apache-cassandra-2.2.6-src.tar.gz.asc release/cassandra/2.2.6/apache-cassandra-2.2.6-src.tar.gz.asc.md5 release/cassandra/2.2.6/apache-cassandra-2.2.6-src.tar.gz.asc.sha1 release/cassandra/2.2.6/apache-cassandra-2.2.6-src.tar.gz.md5 release/cassandra/2.2.6/apache-cassandra-2.2.6-src.tar.gz.sha1 release/cassandra/debian/pool/main/c/cassandra/cassandra-tools_2.1.14_all.deb (with props) release/cassandra/debian/pool/main/c/cassandra/cassandra-tools_2.2.6_all.deb (with props) release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.14.diff.gz (with props) 
release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.14.dsc release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.14.orig.tar.gz (with props) release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.14.orig.tar.gz.asc release/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.14_all.deb (with props) release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.6.diff.gz (with props) release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.6.dsc release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.6.orig.tar.gz (with props) release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.6.orig.tar.gz.asc release/cassandra/debian/pool/main/c/cassandra/cassandra_2.2.6_all.deb (with props) Removed: release/cassandra/3.3/ Modified: release/cassandra/debian/dists/21x/InRelease release/cassandra/debian/dists/21x/Release release/cassandra/debian/dists/21x/Release.gpg release/cassandra/debian/dists/21x/main/binary-amd64/Packages release/cassandra/debian/dists/21x/main/binary-amd64/Packages.gz release/cassandra/debian/dists/21x/main/binary-i386/Packages release/cassandra/debian/dists/21x/main/binary-i386/Packages.gz release/cassandra/debian/dists/21x/main/source/Sources.gz release/cassandra/debian/dists/22x/InRelease release/cassandra/debian/dists/22x/Release release/cassandra/debian/dists/22x/Release.gpg release/cassandra/debian/dists/22x/main/binary-amd64/Packages release/cassandra/debian/dists/22x/main/binary-amd64/Packages.gz release/cassandra/debian/dists/22x/main/binary-i386/Packages release/cassandra/debian/dists/22x/main/binary-i386/Packages.gz release/cassandra/debian/dists/22x/main/source/Sources.gz Added: release/cassandra/2.1.14/apache-cassandra-2.1.14-bin.tar.gz == Binary file - no diff available. 
Propchange: release/cassandra/2.1.14/apache-cassandra-2.1.14-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: release/cassandra/2.1.14/apache-cassandra-2.1.14-bin.tar.gz.asc
==
--- release/cassandra/2.1.14/apache-cassandra-2.1.14-bin.tar.gz.asc (added)
+++ release/cassandra/2.1.14/apache-cassandra-2.1.14-bin.tar.gz.asc Tue Apr 26 01:19:05 2016
@@ -0,0 +1,17 @@
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1
+
+iQIcBAABAgAGBQJXFp6/AAoJEHSdbuwDU7Es7JYQAKQY8ATY22ws+HcJFQygCYZw
+YRwrVZSRcV9RgjojBUNzo9GDtzTUT7xacwAWLWI2pNdwhNCzAeQjvYSbdk8uD/Wc
+y9HFJdcyxheFmFjVwIBBgI7edbmxYwAQ0RhoCdmPR2fISH0h22x+8Y2gEdbeCWTv
+L38ygUvId639NvrlonBmR8p7fDylg0I4wH6MZdIsIzf9+c8HpTetG3nHB/dZvLMn
[jira] [Updated] (CASSANDRA-11628) Fix the regression to CASSANDRA-3983 that got introduced by CASSANDRA-10679
[ https://issues.apache.org/jira/browse/CASSANDRA-11628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-11628: --- Resolution: Fixed Fix Version/s: (was: 3.0.x) (was: 2.2.x) (was: 2.1.x) (was: 3.x) 2.2.7 3.0.6 3.6 2.1.15 Status: Resolved (was: Ready to Commit) Committed as {{c43cf8d79b974c8e68938c86f0cc535b700160fa}}, thanks!
> Fix the regression to CASSANDRA-3983 that got introduced by CASSANDRA-10679
> ---
>
> Key: CASSANDRA-11628
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11628
> Project: Cassandra
> Issue Type: Bug
> Components: Tools
> Reporter: Wei Deng
> Assignee: Wei Deng
> Fix For: 2.1.15, 3.6, 3.0.6, 2.2.7
>
> It appears that the commit from CASSANDRA-10679 accidentally cancelled out the effect that was originally intended by CASSANDRA-3983. In this case, we would like to address the following situation: when you already have a C* package installed (which deploys a file as /usr/share/cassandra/cassandra.in.sh) but also attempt to run from a binary download from http://cassandra.apache.org/download/, many tools like cassandra-stress, sstablescrub, et al. will find the packaged 'cassandra.in.sh' (/usr/share/cassandra/cassandra.in.sh) before the one in your binary download or source build. We should reverse the order of that search so it checks locally first.
> Otherwise you will encounter an error like the following:
> {noformat}
> root@node0:~/apache-cassandra-3.6-SNAPSHOT# tools/bin/cassandra-stress -h
> Error: Could not find or load main class org.apache.cassandra.stress.Stress
> {noformat}
> {noformat}
> root@node0:~/apache-cassandra-3.6-SNAPSHOT# bin/sstableverify -h
> Error: Could not find or load main class org.apache.cassandra.tools.StandaloneVerifier
> {noformat}
> The goal for CASSANDRA-10679 is still a good one: "For the most part all of our shell scripts do the same thing, load the cassandra.in.sh and then call something out of a jar. They should all look the same." But in this case, we should correct them all to look the same while making them check the local dir first.
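The fix comes down to reordering the candidate list in each launcher script so that the copy of cassandra.in.sh next to the script wins over the packaged copies. A small self-contained sketch of that local-first loop follows; the temp-dir paths are stand-ins for the real script-dir and /usr/share/cassandra locations, and the fake include files just set a marker variable so the chosen one is visible:

```shell
#!/bin/sh
# Sketch of the local-first search order restored by CASSANDRA-11628.
# Two fake include files stand in for the binary-download copy and the
# packaged /usr/share/cassandra copy; the loop sources the first readable
# candidate, so the "local" one wins.
tmp=$(mktemp -d)
mkdir -p "$tmp/local" "$tmp/system"
echo 'SOURCED=local'  > "$tmp/local/cassandra.in.sh"
echo 'SOURCED=system' > "$tmp/system/cassandra.in.sh"

# Same loop shape as the committed scripts: local dir first, then the
# packaged locations.
for include in "$tmp/local/cassandra.in.sh" \
               "$tmp/system/cassandra.in.sh"; do
    if [ -r "$include" ]; then
        . "$include"
        break
    fi
done
echo "picked the $SOURCED cassandra.in.sh"
rm -rf "$tmp"
```

Because the loop sources the first readable candidate and then breaks, simply moving the `dirname "$0"` entry to the front is enough to fix every affected tool without changing the scripts' common shape.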
[05/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1d7c0bc2 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1d7c0bc2 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1d7c0bc2 Branch: refs/heads/cassandra-3.0 Commit: 1d7c0bc210532743946ce1929c2ed11551ea4246 Parents: 5f99629 c43cf8d Author: Yuki Morishita Authored: Mon Apr 25 20:05:06 2016 -0500 Committer: Yuki Morishita Committed: Mon Apr 25 20:05:06 2016 -0500 -- CHANGES.txt | 1 + bin/debug-cql| 8 bin/nodetool | 9 + bin/sstablekeys | 9 + bin/sstableloader| 9 + bin/sstablescrub | 9 + bin/sstableupgrade | 9 + tools/bin/cassandra-stress | 9 + tools/bin/cassandra-stressd | 9 + tools/bin/json2sstable | 9 + tools/bin/sstable2json | 9 + tools/bin/sstableexpiredblockers | 9 + tools/bin/sstablelevelreset | 9 + tools/bin/sstablemetadata| 9 + tools/bin/sstableofflinerelevel | 9 + tools/bin/sstablerepairedset | 9 + tools/bin/sstablesplit | 9 + 17 files changed, 80 insertions(+), 64 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d7c0bc2/CHANGES.txt -- diff --cc CHANGES.txt index affb723,4f6a4db..1837a6e --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,9 -1,5 +1,10 @@@ -2.1.15 +2.2.7 + * Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621) + * cqlsh: COPY FROM should use regular inserts for single statement batches and + report errors correctly if workers processes crash on initialization (CASSANDRA-11474) + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553) +Merged from 2.1: + * Change order of directory searching for cassandra.in.sh to favor local one (CASSANDRA-11628) * cqlsh COPY FROM fails with []{} chars in UDT/tuple fields/values (CASSANDRA-11633) * clqsh: COPY FROM throws TypeError with Cython extensions enabled (CASSANDRA-11574) * cqlsh: COPY FROM ignores NULL values in conversion 
(CASSANDRA-11549)
[07/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1d7c0bc2 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1d7c0bc2 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1d7c0bc2 Branch: refs/heads/trunk Commit: 1d7c0bc210532743946ce1929c2ed11551ea4246 Parents: 5f99629 c43cf8d Author: Yuki Morishita Authored: Mon Apr 25 20:05:06 2016 -0500 Committer: Yuki Morishita Committed: Mon Apr 25 20:05:06 2016 -0500 -- CHANGES.txt | 1 + bin/debug-cql| 8 bin/nodetool | 9 + bin/sstablekeys | 9 + bin/sstableloader| 9 + bin/sstablescrub | 9 + bin/sstableupgrade | 9 + tools/bin/cassandra-stress | 9 + tools/bin/cassandra-stressd | 9 + tools/bin/json2sstable | 9 + tools/bin/sstable2json | 9 + tools/bin/sstableexpiredblockers | 9 + tools/bin/sstablelevelreset | 9 + tools/bin/sstablemetadata| 9 + tools/bin/sstableofflinerelevel | 9 + tools/bin/sstablerepairedset | 9 + tools/bin/sstablesplit | 9 + 17 files changed, 80 insertions(+), 64 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d7c0bc2/CHANGES.txt -- diff --cc CHANGES.txt index affb723,4f6a4db..1837a6e --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,9 -1,5 +1,10 @@@ -2.1.15 +2.2.7 + * Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621) + * cqlsh: COPY FROM should use regular inserts for single statement batches and + report errors correctly if workers processes crash on initialization (CASSANDRA-11474) + * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553) +Merged from 2.1: + * Change order of directory searching for cassandra.in.sh to favor local one (CASSANDRA-11628) * cqlsh COPY FROM fails with []{} chars in UDT/tuple fields/values (CASSANDRA-11633) * clqsh: COPY FROM throws TypeError with Cython extensions enabled (CASSANDRA-11574) * cqlsh: COPY FROM ignores NULL values in conversion 
(CASSANDRA-11549)
[02/10] cassandra git commit: Change order of directory searching for cassandra.in.sh
Change order of directory searching for cassandra.in.sh patch by Wei Deng; reviewed by yukim for CASSANDRA-11628 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c43cf8d7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c43cf8d7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c43cf8d7 Branch: refs/heads/cassandra-2.2 Commit: c43cf8d79b974c8e68938c86f0cc535b700160fa Parents: 07c9fa2 Author: Wei Deng Authored: Mon Apr 25 20:03:27 2016 -0500 Committer: Yuki Morishita Committed: Mon Apr 25 20:03:27 2016 -0500 -- CHANGES.txt | 1 + bin/debug-cql| 8 bin/nodetool | 9 + bin/sstablekeys | 9 + bin/sstableloader| 9 + bin/sstablescrub | 9 + bin/sstableupgrade | 9 + tools/bin/cassandra-stress | 9 + tools/bin/cassandra-stressd | 9 + tools/bin/json2sstable | 9 + tools/bin/sstable2json | 9 + tools/bin/sstableexpiredblockers | 9 + tools/bin/sstablelevelreset | 9 + tools/bin/sstablemetadata| 9 + tools/bin/sstableofflinerelevel | 9 + tools/bin/sstablerepairedset | 9 + tools/bin/sstablesplit | 9 + 17 files changed, 80 insertions(+), 64 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 53945d6..4f6a4db 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.15 + * Change order of directory searching for cassandra.in.sh to favor local one (CASSANDRA-11628) * cqlsh COPY FROM fails with []{} chars in UDT/tuple fields/values (CASSANDRA-11633) * clqsh: COPY FROM throws TypeError with Cython extensions enabled (CASSANDRA-11574) * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549) http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/bin/debug-cql -- diff --git a/bin/debug-cql b/bin/debug-cql index b4ebb82..ae9bfe4 100755 --- a/bin/debug-cql +++ b/bin/debug-cql @@ -17,11 +17,11 @@ if [ "x$CASSANDRA_INCLUDE" = "x" ]; then # Locations (in order) to use when searching for an 
include file. -for include in /usr/share/cassandra/cassandra.in.sh \ - /usr/local/share/cassandra/cassandra.in.sh \ - /opt/cassandra/cassandra.in.sh \ +for include in "`dirname "$0"`/cassandra.in.sh" \ "$HOME/.cassandra.in.sh" \ - "`dirname $0`/cassandra.in.sh"; do + /usr/share/cassandra/cassandra.in.sh \ + /usr/local/share/cassandra/cassandra.in.sh \ + /opt/cassandra/cassandra.in.sh; do if [ -r "$include" ]; then . "$include" break http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/bin/nodetool -- diff --git a/bin/nodetool b/bin/nodetool index 0ea078f..b6a6fbf 100755 --- a/bin/nodetool +++ b/bin/nodetool @@ -23,11 +23,12 @@ if [ "`basename "$0"`" = 'nodeprobe' ]; then fi if [ "x$CASSANDRA_INCLUDE" = "x" ]; then -for include in /usr/share/cassandra/cassandra.in.sh \ - /usr/local/share/cassandra/cassandra.in.sh \ - /opt/cassandra/cassandra.in.sh \ +# Locations (in order) to use when searching for an include file. +for include in "`dirname "$0"`/cassandra.in.sh" \ "$HOME/.cassandra.in.sh" \ - "`dirname "$0"`/cassandra.in.sh"; do + /usr/share/cassandra/cassandra.in.sh \ + /usr/local/share/cassandra/cassandra.in.sh \ + /opt/cassandra/cassandra.in.sh; do if [ -r "$include" ]; then . "$include" break http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/bin/sstablekeys -- diff --git a/bin/sstablekeys b/bin/sstablekeys index 55b72d9..c0967ef 100755 --- a/bin/sstablekeys +++ b/bin/sstablekeys @@ -17,11 +17,12 @@ # limitations under the License. if [ "x$CASSANDRA_INCLUDE" = "x" ]; then -for include in /usr/share/cassandra/cassandra.in.sh \ - /usr/local/share/cassandra/cassandra.in.sh \ - /opt/cassandra/cassandra.in.sh \ +# Locations (in order) to use when searching for an include file. +for include in "`dirname "$0"`/cassandra.in.sh" \
[01/10] cassandra git commit: Change order of directory searching for cassandra.in.sh
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 07c9fa2ca -> c43cf8d79 refs/heads/cassandra-2.2 5f99629a3 -> 1d7c0bc21 refs/heads/cassandra-3.0 c9285d0e3 -> 251449ffa refs/heads/trunk db7d07cfa -> 73ad99d9d Change order of directory searching for cassandra.in.sh patch by Wei Deng; reviewed by yukim for CASSANDRA-11628 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c43cf8d7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c43cf8d7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c43cf8d7 Branch: refs/heads/cassandra-2.1 Commit: c43cf8d79b974c8e68938c86f0cc535b700160fa Parents: 07c9fa2 Author: Wei Deng Authored: Mon Apr 25 20:03:27 2016 -0500 Committer: Yuki Morishita Committed: Mon Apr 25 20:03:27 2016 -0500 -- CHANGES.txt | 1 + bin/debug-cql| 8 bin/nodetool | 9 + bin/sstablekeys | 9 + bin/sstableloader| 9 + bin/sstablescrub | 9 + bin/sstableupgrade | 9 + tools/bin/cassandra-stress | 9 + tools/bin/cassandra-stressd | 9 + tools/bin/json2sstable | 9 + tools/bin/sstable2json | 9 + tools/bin/sstableexpiredblockers | 9 + tools/bin/sstablelevelreset | 9 + tools/bin/sstablemetadata| 9 + tools/bin/sstableofflinerelevel | 9 + tools/bin/sstablerepairedset | 9 + tools/bin/sstablesplit | 9 + 17 files changed, 80 insertions(+), 64 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 53945d6..4f6a4db 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.15 + * Change order of directory searching for cassandra.in.sh to favor local one (CASSANDRA-11628) * cqlsh COPY FROM fails with []{} chars in UDT/tuple fields/values (CASSANDRA-11633) * clqsh: COPY FROM throws TypeError with Cython extensions enabled (CASSANDRA-11574) * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549) http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/bin/debug-cql -- diff 
--git a/bin/debug-cql b/bin/debug-cql index b4ebb82..ae9bfe4 100755 --- a/bin/debug-cql +++ b/bin/debug-cql @@ -17,11 +17,11 @@ if [ "x$CASSANDRA_INCLUDE" = "x" ]; then # Locations (in order) to use when searching for an include file. -for include in /usr/share/cassandra/cassandra.in.sh \ - /usr/local/share/cassandra/cassandra.in.sh \ - /opt/cassandra/cassandra.in.sh \ +for include in "`dirname "$0"`/cassandra.in.sh" \ "$HOME/.cassandra.in.sh" \ - "`dirname $0`/cassandra.in.sh"; do + /usr/share/cassandra/cassandra.in.sh \ + /usr/local/share/cassandra/cassandra.in.sh \ + /opt/cassandra/cassandra.in.sh; do if [ -r "$include" ]; then . "$include" break http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/bin/nodetool -- diff --git a/bin/nodetool b/bin/nodetool index 0ea078f..b6a6fbf 100755 --- a/bin/nodetool +++ b/bin/nodetool @@ -23,11 +23,12 @@ if [ "`basename "$0"`" = 'nodeprobe' ]; then fi if [ "x$CASSANDRA_INCLUDE" = "x" ]; then -for include in /usr/share/cassandra/cassandra.in.sh \ - /usr/local/share/cassandra/cassandra.in.sh \ - /opt/cassandra/cassandra.in.sh \ +# Locations (in order) to use when searching for an include file. +for include in "`dirname "$0"`/cassandra.in.sh" \ "$HOME/.cassandra.in.sh" \ - "`dirname "$0"`/cassandra.in.sh"; do + /usr/share/cassandra/cassandra.in.sh \ + /usr/local/share/cassandra/cassandra.in.sh \ + /opt/cassandra/cassandra.in.sh; do if [ -r "$include" ]; then . "$include" break http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/bin/sstablekeys -- diff --git a/bin/sstablekeys b/bin/sstablekeys index 55b72d9..c0967ef 100755 --- a/bin/sstablekeys +++ b/bin/sstablekeys @@ -17,11 +17,12 @@ # limitations under the License. if [ "x$CASSANDRA_INCLUDE" = "x" ]; then -for include in /usr/share/cassandra/cassandra.in.sh \ -
[09/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/251449ff Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/251449ff Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/251449ff Branch: refs/heads/trunk Commit: 251449ffade8fd67d9894a3798093c9c426e266d Parents: c9285d0 1d7c0bc Author: Yuki Morishita Authored: Mon Apr 25 20:08:24 2016 -0500 Committer: Yuki Morishita Committed: Mon Apr 25 20:08:24 2016 -0500 -- CHANGES.txt | 1 + bin/debug-cql| 8 bin/nodetool | 9 + bin/sstableloader| 9 + bin/sstablescrub | 9 + bin/sstableupgrade | 9 + bin/sstableutil | 9 + bin/sstableverify| 9 + tools/bin/cassandra-stress | 9 + tools/bin/cassandra-stressd | 9 + tools/bin/sstabledump| 9 + tools/bin/sstableexpiredblockers | 9 + tools/bin/sstablelevelreset | 9 + tools/bin/sstablemetadata| 9 + tools/bin/sstableofflinerelevel | 9 + tools/bin/sstablerepairedset | 9 + tools/bin/sstablesplit | 9 + 17 files changed, 80 insertions(+), 64 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/251449ff/CHANGES.txt -- diff --cc CHANGES.txt index de17a70,1837a6e..ea21ee7 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -14,11 -3,8 +14,12 @@@ Merged from 2.2 * cqlsh: COPY FROM should use regular inserts for single statement batches and report errors correctly if workers processes crash on initialization (CASSANDRA-11474) * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553) + * Allow only DISTINCT queries with partition keys restrictions (CASSANDRA-11339) + * CqlConfigHelper no longer requires both a keystore and truststore to work (CASSANDRA-11532) + * Make deprecated repair methods backward-compatible with previous notification service (CASSANDRA-11430) + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462) Merged from 2.1: + * Change order of directory searching for cassandra.in.sh to favor local one 
(CASSANDRA-11628) * cqlsh COPY FROM fails with []{} chars in UDT/tuple fields/values (CASSANDRA-11633) * clqsh: COPY FROM throws TypeError with Cython extensions enabled (CASSANDRA-11574) * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549) http://git-wip-us.apache.org/repos/asf/cassandra/blob/251449ff/bin/sstableutil -- diff --cc bin/sstableutil index 9f07785,000..7457834 mode 100755,00..100755 --- a/bin/sstableutil +++ b/bin/sstableutil @@@ -1,60 -1,0 +1,61 @@@ +#!/bin/sh + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +if [ "x$CASSANDRA_INCLUDE" = "x" ]; then - for include in /usr/share/cassandra/cassandra.in.sh \ -/usr/local/share/cassandra/cassandra.in.sh \ -/opt/cassandra/cassandra.in.sh \ ++# Locations (in order) to use when searching for an include file. ++for include in "`dirname "$0"`/cassandra.in.sh" \ + "$HOME/.cassandra.in.sh" \ -"`dirname "$0"`/cassandra.in.sh"; do ++ /usr/share/cassandra/cassandra.in.sh \ ++ /usr/local/share/cassandra/cassandra.in.sh \ ++ /opt/cassandra/cassandra.in.sh; do +if [ -r "$include" ]; then +. "$include" +break +fi +done +elif [ -r "$CASSANDRA_INCLUDE" ]; then +. 
"$CASSANDRA_INCLUDE" +fi + +# Use JAVA_HOME if set, otherwise look for java in PATH +if [ -x "$JAVA_HOME/bin/java" ]; then +JAVA="$JAVA_HOME/bin/java" +else +JAVA="`which
[03/10] cassandra git commit: Change order of directory searching for cassandra.in.sh
Change order of directory searching for cassandra.in.sh patch by Wei Deng; reviewed by yukim for CASSANDRA-11628 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c43cf8d7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c43cf8d7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c43cf8d7 Branch: refs/heads/cassandra-3.0 Commit: c43cf8d79b974c8e68938c86f0cc535b700160fa Parents: 07c9fa2 Author: Wei Deng Authored: Mon Apr 25 20:03:27 2016 -0500 Committer: Yuki Morishita Committed: Mon Apr 25 20:03:27 2016 -0500 -- CHANGES.txt | 1 + bin/debug-cql| 8 bin/nodetool | 9 + bin/sstablekeys | 9 + bin/sstableloader| 9 + bin/sstablescrub | 9 + bin/sstableupgrade | 9 + tools/bin/cassandra-stress | 9 + tools/bin/cassandra-stressd | 9 + tools/bin/json2sstable | 9 + tools/bin/sstable2json | 9 + tools/bin/sstableexpiredblockers | 9 + tools/bin/sstablelevelreset | 9 + tools/bin/sstablemetadata| 9 + tools/bin/sstableofflinerelevel | 9 + tools/bin/sstablerepairedset | 9 + tools/bin/sstablesplit | 9 + 17 files changed, 80 insertions(+), 64 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 53945d6..4f6a4db 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.15 + * Change order of directory searching for cassandra.in.sh to favor local one (CASSANDRA-11628) * cqlsh COPY FROM fails with []{} chars in UDT/tuple fields/values (CASSANDRA-11633) * clqsh: COPY FROM throws TypeError with Cython extensions enabled (CASSANDRA-11574) * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549) http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/bin/debug-cql -- diff --git a/bin/debug-cql b/bin/debug-cql index b4ebb82..ae9bfe4 100755 --- a/bin/debug-cql +++ b/bin/debug-cql @@ -17,11 +17,11 @@ if [ "x$CASSANDRA_INCLUDE" = "x" ]; then # Locations (in order) to use when searching for an 
include file. -for include in /usr/share/cassandra/cassandra.in.sh \ - /usr/local/share/cassandra/cassandra.in.sh \ - /opt/cassandra/cassandra.in.sh \ +for include in "`dirname "$0"`/cassandra.in.sh" \ "$HOME/.cassandra.in.sh" \ - "`dirname $0`/cassandra.in.sh"; do + /usr/share/cassandra/cassandra.in.sh \ + /usr/local/share/cassandra/cassandra.in.sh \ + /opt/cassandra/cassandra.in.sh; do if [ -r "$include" ]; then . "$include" break http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/bin/nodetool -- diff --git a/bin/nodetool b/bin/nodetool index 0ea078f..b6a6fbf 100755 --- a/bin/nodetool +++ b/bin/nodetool @@ -23,11 +23,12 @@ if [ "`basename "$0"`" = 'nodeprobe' ]; then fi if [ "x$CASSANDRA_INCLUDE" = "x" ]; then -for include in /usr/share/cassandra/cassandra.in.sh \ - /usr/local/share/cassandra/cassandra.in.sh \ - /opt/cassandra/cassandra.in.sh \ +# Locations (in order) to use when searching for an include file. +for include in "`dirname "$0"`/cassandra.in.sh" \ "$HOME/.cassandra.in.sh" \ - "`dirname "$0"`/cassandra.in.sh"; do + /usr/share/cassandra/cassandra.in.sh \ + /usr/local/share/cassandra/cassandra.in.sh \ + /opt/cassandra/cassandra.in.sh; do if [ -r "$include" ]; then . "$include" break http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/bin/sstablekeys -- diff --git a/bin/sstablekeys b/bin/sstablekeys index 55b72d9..c0967ef 100755 --- a/bin/sstablekeys +++ b/bin/sstablekeys @@ -17,11 +17,12 @@ # limitations under the License. if [ "x$CASSANDRA_INCLUDE" = "x" ]; then -for include in /usr/share/cassandra/cassandra.in.sh \ - /usr/local/share/cassandra/cassandra.in.sh \ - /opt/cassandra/cassandra.in.sh \ +# Locations (in order) to use when searching for an include file. +for include in "`dirname "$0"`/cassandra.in.sh" \
[08/10] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/251449ff Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/251449ff Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/251449ff Branch: refs/heads/cassandra-3.0 Commit: 251449ffade8fd67d9894a3798093c9c426e266d Parents: c9285d0 1d7c0bc Author: Yuki Morishita Authored: Mon Apr 25 20:08:24 2016 -0500 Committer: Yuki Morishita Committed: Mon Apr 25 20:08:24 2016 -0500 -- CHANGES.txt | 1 + bin/debug-cql| 8 bin/nodetool | 9 + bin/sstableloader| 9 + bin/sstablescrub | 9 + bin/sstableupgrade | 9 + bin/sstableutil | 9 + bin/sstableverify| 9 + tools/bin/cassandra-stress | 9 + tools/bin/cassandra-stressd | 9 + tools/bin/sstabledump| 9 + tools/bin/sstableexpiredblockers | 9 + tools/bin/sstablelevelreset | 9 + tools/bin/sstablemetadata| 9 + tools/bin/sstableofflinerelevel | 9 + tools/bin/sstablerepairedset | 9 + tools/bin/sstablesplit | 9 + 17 files changed, 80 insertions(+), 64 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/251449ff/CHANGES.txt -- diff --cc CHANGES.txt index de17a70,1837a6e..ea21ee7 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -14,11 -3,8 +14,12 @@@ Merged from 2.2 * cqlsh: COPY FROM should use regular inserts for single statement batches and report errors correctly if workers processes crash on initialization (CASSANDRA-11474) * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553) + * Allow only DISTINCT queries with partition keys restrictions (CASSANDRA-11339) + * CqlConfigHelper no longer requires both a keystore and truststore to work (CASSANDRA-11532) + * Make deprecated repair methods backward-compatible with previous notification service (CASSANDRA-11430) + * IncomingStreamingConnection version check message wrong (CASSANDRA-11462) Merged from 2.1: + * Change order of directory searching for cassandra.in.sh to favor local 
one (CASSANDRA-11628) * cqlsh COPY FROM fails with []{} chars in UDT/tuple fields/values (CASSANDRA-11633) * clqsh: COPY FROM throws TypeError with Cython extensions enabled (CASSANDRA-11574) * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549) http://git-wip-us.apache.org/repos/asf/cassandra/blob/251449ff/bin/sstableutil -- diff --cc bin/sstableutil index 9f07785,000..7457834 mode 100755,00..100755 --- a/bin/sstableutil +++ b/bin/sstableutil @@@ -1,60 -1,0 +1,61 @@@ +#!/bin/sh + +# Licensed to the Apache Software Foundation (ASF) under one +# or more contributor license agreements. See the NOTICE file +# distributed with this work for additional information +# regarding copyright ownership. The ASF licenses this file +# to you under the Apache License, Version 2.0 (the +# "License"); you may not use this file except in compliance +# with the License. You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +if [ "x$CASSANDRA_INCLUDE" = "x" ]; then - for include in /usr/share/cassandra/cassandra.in.sh \ -/usr/local/share/cassandra/cassandra.in.sh \ -/opt/cassandra/cassandra.in.sh \ ++# Locations (in order) to use when searching for an include file. ++for include in "`dirname "$0"`/cassandra.in.sh" \ + "$HOME/.cassandra.in.sh" \ -"`dirname "$0"`/cassandra.in.sh"; do ++ /usr/share/cassandra/cassandra.in.sh \ ++ /usr/local/share/cassandra/cassandra.in.sh \ ++ /opt/cassandra/cassandra.in.sh; do +if [ -r "$include" ]; then +. "$include" +break +fi +done +elif [ -r "$CASSANDRA_INCLUDE" ]; then +. 
"$CASSANDRA_INCLUDE" +fi + +# Use JAVA_HOME if set, otherwise look for java in PATH +if [ -x "$JAVA_HOME/bin/java" ]; then +JAVA="$JAVA_HOME/bin/java" +else +
[04/10] cassandra git commit: Change order of directory searching for cassandra.in.sh
Change order of directory searching for cassandra.in.sh patch by Wei Deng; reviewed by yukim for CASSANDRA-11628 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c43cf8d7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c43cf8d7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c43cf8d7 Branch: refs/heads/trunk Commit: c43cf8d79b974c8e68938c86f0cc535b700160fa Parents: 07c9fa2 Author: Wei Deng Authored: Mon Apr 25 20:03:27 2016 -0500 Committer: Yuki Morishita Committed: Mon Apr 25 20:03:27 2016 -0500 -- CHANGES.txt | 1 + bin/debug-cql| 8 bin/nodetool | 9 + bin/sstablekeys | 9 + bin/sstableloader| 9 + bin/sstablescrub | 9 + bin/sstableupgrade | 9 + tools/bin/cassandra-stress | 9 + tools/bin/cassandra-stressd | 9 + tools/bin/json2sstable | 9 + tools/bin/sstable2json | 9 + tools/bin/sstableexpiredblockers | 9 + tools/bin/sstablelevelreset | 9 + tools/bin/sstablemetadata| 9 + tools/bin/sstableofflinerelevel | 9 + tools/bin/sstablerepairedset | 9 + tools/bin/sstablesplit | 9 + 17 files changed, 80 insertions(+), 64 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 53945d6..4f6a4db 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.15 + * Change order of directory searching for cassandra.in.sh to favor local one (CASSANDRA-11628) * cqlsh COPY FROM fails with []{} chars in UDT/tuple fields/values (CASSANDRA-11633) * clqsh: COPY FROM throws TypeError with Cython extensions enabled (CASSANDRA-11574) * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549) http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/bin/debug-cql -- diff --git a/bin/debug-cql b/bin/debug-cql index b4ebb82..ae9bfe4 100755 --- a/bin/debug-cql +++ b/bin/debug-cql @@ -17,11 +17,11 @@ if [ "x$CASSANDRA_INCLUDE" = "x" ]; then # Locations (in order) to use when searching for an include 
file. -for include in /usr/share/cassandra/cassandra.in.sh \ - /usr/local/share/cassandra/cassandra.in.sh \ - /opt/cassandra/cassandra.in.sh \ +for include in "`dirname "$0"`/cassandra.in.sh" \ "$HOME/.cassandra.in.sh" \ - "`dirname $0`/cassandra.in.sh"; do + /usr/share/cassandra/cassandra.in.sh \ + /usr/local/share/cassandra/cassandra.in.sh \ + /opt/cassandra/cassandra.in.sh; do if [ -r "$include" ]; then . "$include" break http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/bin/nodetool -- diff --git a/bin/nodetool b/bin/nodetool index 0ea078f..b6a6fbf 100755 --- a/bin/nodetool +++ b/bin/nodetool @@ -23,11 +23,12 @@ if [ "`basename "$0"`" = 'nodeprobe' ]; then fi if [ "x$CASSANDRA_INCLUDE" = "x" ]; then -for include in /usr/share/cassandra/cassandra.in.sh \ - /usr/local/share/cassandra/cassandra.in.sh \ - /opt/cassandra/cassandra.in.sh \ +# Locations (in order) to use when searching for an include file. +for include in "`dirname "$0"`/cassandra.in.sh" \ "$HOME/.cassandra.in.sh" \ - "`dirname "$0"`/cassandra.in.sh"; do + /usr/share/cassandra/cassandra.in.sh \ + /usr/local/share/cassandra/cassandra.in.sh \ + /opt/cassandra/cassandra.in.sh; do if [ -r "$include" ]; then . "$include" break http://git-wip-us.apache.org/repos/asf/cassandra/blob/c43cf8d7/bin/sstablekeys -- diff --git a/bin/sstablekeys b/bin/sstablekeys index 55b72d9..c0967ef 100755 --- a/bin/sstablekeys +++ b/bin/sstablekeys @@ -17,11 +17,12 @@ # limitations under the License. if [ "x$CASSANDRA_INCLUDE" = "x" ]; then -for include in /usr/share/cassandra/cassandra.in.sh \ - /usr/local/share/cassandra/cassandra.in.sh \ - /opt/cassandra/cassandra.in.sh \ +# Locations (in order) to use when searching for an include file. +for include in "`dirname "$0"`/cassandra.in.sh" \
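The reordered search in these scripts boils down to "first readable file wins", with the copy next to the script and the one in $HOME now consulted before the system-wide paths. A hypothetical Python restatement of that shell loop (the function and parameter names are illustrative, not part of the patch):

```python
import os

def find_include(script_dir, home, exists=os.path.isfile):
    """Return the first readable cassandra.in.sh candidate, or None."""
    # Post-CASSANDRA-11628 priority: local checkout first, then $HOME,
    # then the system-wide install locations.
    candidates = [
        os.path.join(script_dir, "cassandra.in.sh"),
        os.path.join(home, ".cassandra.in.sh"),
        "/usr/share/cassandra/cassandra.in.sh",
        "/usr/local/share/cassandra/cassandra.in.sh",
        "/opt/cassandra/cassandra.in.sh",
    ]
    # Mirror the shell loop: take the first candidate that exists and stop.
    for path in candidates:
        if exists(path):
            return path
    return None
```

This is why a cassandra.in.sh shipped alongside the tool scripts now shadows a stale system-wide copy, which was the point of the change.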
[10/10] cassandra git commit: Merge branch 'cassandra-3.0' into trunk
Merge branch 'cassandra-3.0' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/73ad99d9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/73ad99d9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/73ad99d9

Branch: refs/heads/trunk
Commit: 73ad99d9df84436261ba8ebf5cd749abcc3bbed0
Parents: db7d07c 251449f
Author: Yuki Morishita
Authored: Mon Apr 25 20:08:51 2016 -0500
Committer: Yuki Morishita
Committed: Mon Apr 25 20:08:51 2016 -0500

----------------------------------------------------------------------
 CHANGES.txt                      | 1 +
 bin/debug-cql                    | 8
 bin/nodetool                     | 9 +
 bin/sstableloader                | 9 +
 bin/sstablescrub                 | 9 +
 bin/sstableupgrade               | 9 +
 bin/sstableutil                  | 9 +
 bin/sstableverify                | 9 +
 tools/bin/cassandra-stress       | 9 +
 tools/bin/cassandra-stressd      | 9 +
 tools/bin/sstabledump            | 9 +
 tools/bin/sstableexpiredblockers | 9 +
 tools/bin/sstablelevelreset      | 9 +
 tools/bin/sstablemetadata        | 9 +
 tools/bin/sstableofflinerelevel  | 9 +
 tools/bin/sstablerepairedset     | 9 +
 tools/bin/sstablesplit           | 9 +
 17 files changed, 80 insertions(+), 64 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/73ad99d9/CHANGES.txt
----------------------------------------------------------------------
[06/10] cassandra git commit: Merge branch 'cassandra-2.1' into cassandra-2.2
Merge branch 'cassandra-2.1' into cassandra-2.2

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1d7c0bc2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1d7c0bc2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1d7c0bc2

Branch: refs/heads/cassandra-2.2
Commit: 1d7c0bc210532743946ce1929c2ed11551ea4246
Parents: 5f99629 c43cf8d
Author: Yuki Morishita
Authored: Mon Apr 25 20:05:06 2016 -0500
Committer: Yuki Morishita
Committed: Mon Apr 25 20:05:06 2016 -0500

----------------------------------------------------------------------
 CHANGES.txt                      | 1 +
 bin/debug-cql                    | 8
 bin/nodetool                     | 9 +
 bin/sstablekeys                  | 9 +
 bin/sstableloader                | 9 +
 bin/sstablescrub                 | 9 +
 bin/sstableupgrade               | 9 +
 tools/bin/cassandra-stress       | 9 +
 tools/bin/cassandra-stressd      | 9 +
 tools/bin/json2sstable           | 9 +
 tools/bin/sstable2json           | 9 +
 tools/bin/sstableexpiredblockers | 9 +
 tools/bin/sstablelevelreset      | 9 +
 tools/bin/sstablemetadata        | 9 +
 tools/bin/sstableofflinerelevel  | 9 +
 tools/bin/sstablerepairedset     | 9 +
 tools/bin/sstablesplit           | 9 +
 17 files changed, 80 insertions(+), 64 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d7c0bc2/CHANGES.txt
----------------------------------------------------------------------
diff --cc CHANGES.txt
index affb723,4f6a4db..1837a6e
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,9 -1,5 +1,10 @@@
-2.1.15
+2.2.7
+ * Avoid calling Iterables::concat in loops during ModificationStatement::getFunctions (CASSANDRA-11621)
+ * cqlsh: COPY FROM should use regular inserts for single statement batches and
+   report errors correctly if worker processes crash on initialization (CASSANDRA-11474)
+ * Always close cluster with connection in CqlRecordWriter (CASSANDRA-11553)
+Merged from 2.1:
+ * Change order of directory searching for cassandra.in.sh to favor local one (CASSANDRA-11628)
  * cqlsh COPY FROM fails with []{} chars in UDT/tuple fields/values (CASSANDRA-11633)
  * cqlsh: COPY FROM throws TypeError with Cython extensions enabled (CASSANDRA-11574)
  * cqlsh: COPY FROM ignores NULL values in conversion (CASSANDRA-11549)
[cassandra] Git Push Summary
Repository: cassandra Updated Tags: refs/tags/2.2.6-tentative [deleted] 37f63ecc5
[cassandra] Git Push Summary
Repository: cassandra Updated Tags: refs/tags/cassandra-2.2.6 [created] a7975f385
[cassandra] Git Push Summary
Repository: cassandra Updated Tags: refs/tags/2.1.14-tentative [deleted] 209ebd380
[cassandra] Git Push Summary
Repository: cassandra Updated Tags: refs/tags/cassandra-2.1.14 [created] a98b558c5
[jira] [Commented] (CASSANDRA-7839) Support standard EC2 naming conventions in Ec2Snitch
[ https://issues.apache.org/jira/browse/CASSANDRA-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257350#comment-15257350 ] Brandon Williams commented on CASSANDRA-7839:

Hmm, I think we probably should; wrongly bootstrapping a new node is decently dangerous since it changes the topology. Unfortunately that's kind of a one-off check, but it's easy enough to do when we know the EC2 snitch is being used.

> Support standard EC2 naming conventions in Ec2Snitch
>
> Key: CASSANDRA-7839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7839
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Gregory Ramsperger
> Assignee: Gregory Ramsperger
> Labels: docs-impacting
> Attachments: CASSANDRA-7839-aws-naming-conventions.patch
>
> The EC2 snitches use datacenter and rack naming conventions inconsistent with
> those presented in the Amazon EC2 APIs as region and availability zone. A
> discussion of this is found in CASSANDRA-4026. This has not been changed, for
> valid backwards-compatibility reasons. Using SnitchProperties, it is possible
> to switch between the legacy naming and the full, AWS-style naming.
> Proposal:
> * introduce a property (ec2_naming_scheme) to switch naming schemes
> * default to the current/legacy naming scheme
> * add support for a new scheme ("standard") which is consistent with AWS
> conventions
> ** datacenters will be the region name, including the number
> ** racks will be the availability zone name, including the region name
> Examples:
> * *legacy*: the datacenter is the part of the availability zone name preceding
> the last "\-" when the zone ends in \-1, and includes the number if not \-1.
> The rack is the portion of the availability zone name following the last "\-".
> ** us-west-1a => dc: us-west, rack: 1a
> ** us-west-2b => dc: us-west-2, rack: 2b
> * *standard*: the datacenter is the part of the availability zone name preceding
> the zone letter. The rack is the entire availability zone name.
> ** us-west-1a => dc: us-west-1, rack: us-west-1a
> ** us-west-2b => dc: us-west-2, rack: us-west-2b

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
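The two naming schemes described in the ticket can be illustrated with a small Python sketch of the dc/rack derivation (`parse_az` is a hypothetical helper written for this summary, not code from the patch):

```python
def parse_az(az, scheme="standard"):
    """Split an EC2 availability zone name like 'us-west-2b' into (dc, rack).

    'standard': dc is the AZ name minus the trailing zone letter; rack is the
    full AZ name. 'legacy': dc drops a trailing '-1' region number; rack is
    everything after the last '-'.
    """
    region = az[:-1]                        # e.g. 'us-west-2' from 'us-west-2b'
    if scheme == "standard":
        return region, az
    # Legacy behaviour: 'us-west-1' collapses to 'us-west'; other numbers kept.
    dc = region[:-2] if region.endswith("-1") else region
    rack = az.rsplit("-", 1)[1]             # e.g. '2b'
    return dc, rack
```

Running the ticket's own examples through this sketch reproduces the table above: `us-west-1a` gives `('us-west', '1a')` under legacy and `('us-west-1', 'us-west-1a')` under standard.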
[jira] [Assigned] (CASSANDRA-11653) dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson reassigned CASSANDRA-11653: --- Assignee: Philip Thompson (was: DS Test Eng) > dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test > > > Key: CASSANDRA-11653 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11653 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: Philip Thompson > Labels: dtest > > example failure: > http://cassci.datastax.com/job/cassandra-3.0_dtest/660/testReport/sstableutil_test/SSTableUtilTest/abortedcompaction_test > Failed on CassCI build cassandra-3.0_dtest #660 > Looks likely to be a test problem, with error message "0 not greater than 0" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11653) dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257347#comment-15257347 ] Philip Thompson commented on CASSANDRA-11653: - I don't know what's going on with the triage process here, I'm assuming a slip in handoff, but this test failure was fixed in CASSANDRA-11497. > dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test > > > Key: CASSANDRA-11653 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11653 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: DS Test Eng > Labels: dtest > > example failure: > http://cassci.datastax.com/job/cassandra-3.0_dtest/660/testReport/sstableutil_test/SSTableUtilTest/abortedcompaction_test > Failed on CassCI build cassandra-3.0_dtest #660 > Looks likely to be a test problem, with error message "0 not greater than 0" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-11653) dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson resolved CASSANDRA-11653. - Resolution: Duplicate > dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test > > > Key: CASSANDRA-11653 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11653 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: Philip Thompson > Labels: dtest > > example failure: > http://cassci.datastax.com/job/cassandra-3.0_dtest/660/testReport/sstableutil_test/SSTableUtilTest/abortedcompaction_test > Failed on CassCI build cassandra-3.0_dtest #660 > Looks likely to be a test problem, with error message "0 not greater than 0" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-11573) cqlsh fails with undefined symbol: PyUnicodeUCS2_DecodeUTF8
[ https://issues.apache.org/jira/browse/CASSANDRA-11573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler reassigned CASSANDRA-11573: -- Assignee: Michael Shuler > cqlsh fails with undefined symbol: PyUnicodeUCS2_DecodeUTF8 > --- > > Key: CASSANDRA-11573 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11573 > Project: Cassandra > Issue Type: Bug > Environment: centos 7, datastax ddc 3.5 > installed according to > http://docs.datastax.com/en/cassandra/3.x/cassandra/install/installRHEL.html > JVM vendor/version: OpenJDK 64-Bit Server VM/1.8.0_77 > Cassandra version: 3.5.0 >Reporter: Oli Schacher >Assignee: Michael Shuler > > trying to run cqlsh produces: > {quote} > cqlsh > Traceback (most recent call last): > File "/usr/bin/cqlsh.py", line 170, in > from cqlshlib.copyutil import ExportTask, ImportTask > ImportError: /usr/lib/python2.7/site-packages/cqlshlib/copyutil.so: undefined > symbol: PyUnicodeUCS2_DecodeUTF8 > {quote} > with 3.4 the error does not happen. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11573) cqlsh fails with undefined symbol: PyUnicodeUCS2_DecodeUTF8
[ https://issues.apache.org/jira/browse/CASSANDRA-11573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257305#comment-15257305 ] Stefania commented on CASSANDRA-11573: -- It is safe to remove cqlshlib/copyutil.so and cqlshlib/copyutil.c. The root cause is most likely the same as for Debian packages, see details in CASSANDRA-11630. I'm trying to find out who is responsible for the Datastax RPM packages so they can be fixed. > cqlsh fails with undefined symbol: PyUnicodeUCS2_DecodeUTF8 > --- > > Key: CASSANDRA-11573 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11573 > Project: Cassandra > Issue Type: Bug > Environment: centos 7, datastax ddc 3.5 > installed according to > http://docs.datastax.com/en/cassandra/3.x/cassandra/install/installRHEL.html > JVM vendor/version: OpenJDK 64-Bit Server VM/1.8.0_77 > Cassandra version: 3.5.0 >Reporter: Oli Schacher > > trying to run cqlsh produces: > {quote} > cqlsh > Traceback (most recent call last): > File "/usr/bin/cqlsh.py", line 170, in > from cqlshlib.copyutil import ExportTask, ImportTask > ImportError: /usr/lib/python2.7/site-packages/cqlshlib/copyutil.so: undefined > symbol: PyUnicodeUCS2_DecodeUTF8 > {quote} > with 3.4 the error does not happen. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7839) Support standard EC2 naming conventions in Ec2Snitch
[ https://issues.apache.org/jira/browse/CASSANDRA-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-7839:

Labels: docs-impacting (was: )

> Support standard EC2 naming conventions in Ec2Snitch
>
> Key: CASSANDRA-7839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7839
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Gregory Ramsperger
> Assignee: Gregory Ramsperger
> Labels: docs-impacting
> Attachments: CASSANDRA-7839-aws-naming-conventions.patch
>
> The EC2 snitches use datacenter and rack naming conventions inconsistent with
> those presented in the Amazon EC2 APIs as region and availability zone. A
> discussion of this is found in CASSANDRA-4026. This has not been changed, for
> valid backwards-compatibility reasons. Using SnitchProperties, it is possible
> to switch between the legacy naming and the full, AWS-style naming.
> Proposal:
> * introduce a property (ec2_naming_scheme) to switch naming schemes
> * default to the current/legacy naming scheme
> * add support for a new scheme ("standard") which is consistent with AWS
> conventions
> ** datacenters will be the region name, including the number
> ** racks will be the availability zone name, including the region name
> Examples:
> * *legacy*: the datacenter is the part of the availability zone name preceding
> the last "\-" when the zone ends in \-1, and includes the number if not \-1.
> The rack is the portion of the availability zone name following the last "\-".
> ** us-west-1a => dc: us-west, rack: 1a
> ** us-west-2b => dc: us-west-2, rack: 2b
> * *standard*: the datacenter is the part of the availability zone name preceding
> the zone letter. The rack is the entire availability zone name.
> ** us-west-1a => dc: us-west-1, rack: us-west-1a
> ** us-west-2b => dc: us-west-2, rack: us-west-2b

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7839) Support standard EC2 naming conventions in Ec2Snitch
[ https://issues.apache.org/jira/browse/CASSANDRA-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257287#comment-15257287 ] Paulo Motta commented on CASSANDRA-7839:

Overall LGTM; can you just make two minor changes:
* add an "Upgrading" notice to {{NEWS.txt}} that legacy clusters using {{Ec2Snitch}} or {{Ec2MultiRegionSnitch}} must explicitly set {{ec2_naming_scheme=legacy}} in {{cassandra-rackdc.properties}}.
* comment out {{ec2_naming_scheme=standard}} in {{cassandra-rackdc.properties}} (since it's the default) and add a short note to change it to legacy if upgrading from a pre-3.6 cluster using {{Ec2Snitch}} or {{Ec2PropertyFileSnitch}}.

I'm not sure if this is too over-cautious, but should we add a bootstrap check (similar to {{checkForEndpointCollision}}) that refuses to bootstrap a node if it's using the new scheme while nodes using the legacy scheme are detected in gossip, to prevent operator errors going unnoticed? WDYT [~brandon.williams]? (Wrongly configured existing nodes will already be prevented from starting by CASSANDRA-10242.)

Submitted a round of CI tests with the current patch:
||trunk||
|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-7839]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-7839-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-7839-dtest/lastCompletedBuild/testReport/]|

> Support standard EC2 naming conventions in Ec2Snitch
>
> Key: CASSANDRA-7839
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7839
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Gregory Ramsperger
> Assignee: Gregory Ramsperger
> Attachments: CASSANDRA-7839-aws-naming-conventions.patch
>
> The EC2 snitches use datacenter and rack naming conventions inconsistent with
> those presented in the Amazon EC2 APIs as region and availability zone. A
> discussion of this is found in CASSANDRA-4026.
This has not been changed, for
> valid backwards-compatibility reasons. Using SnitchProperties, it is possible
> to switch between the legacy naming and the full, AWS-style naming.
> Proposal:
> * introduce a property (ec2_naming_scheme) to switch naming schemes
> * default to the current/legacy naming scheme
> * add support for a new scheme ("standard") which is consistent with AWS
> conventions
> ** datacenters will be the region name, including the number
> ** racks will be the availability zone name, including the region name
> Examples:
> * *legacy*: the datacenter is the part of the availability zone name preceding
> the last "\-" when the zone ends in \-1, and includes the number if not \-1.
> The rack is the portion of the availability zone name following the last "\-".
> ** us-west-1a => dc: us-west, rack: 1a
> ** us-west-2b => dc: us-west-2, rack: 2b
> * *standard*: the datacenter is the part of the availability zone name preceding
> the zone letter. The rack is the entire availability zone name.
> ** us-west-1a => dc: us-west-1, rack: us-west-1a
> ** us-west-2b => dc: us-west-2, rack: us-west-2b

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11653) dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test
Russ Hatch created CASSANDRA-11653: -- Summary: dtest failure in sstableutil_test.SSTableUtilTest.abortedcompaction_test Key: CASSANDRA-11653 URL: https://issues.apache.org/jira/browse/CASSANDRA-11653 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng example failure: http://cassci.datastax.com/job/cassandra-3.0_dtest/660/testReport/sstableutil_test/SSTableUtilTest/abortedcompaction_test Failed on CassCI build cassandra-3.0_dtest #660 Looks likely to be a test problem, with error message "0 not greater than 0" -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-5863) In process (uncompressed) page cache
[ https://issues.apache.org/jira/browse/CASSANDRA-5863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257210#comment-15257210 ] Pavel Yaskevich commented on CASSANDRA-5863:

bq. The key itself is a small and fixed part of the overhead (all objects it references are already found elsewhere); there are also on-heap support structures within the implementing cache which are bigger. Though that's not trivial, we could also account for those, but I don't know how that helps cache management and sizing for the user.

The problem I see with this is the same as for any other data structure on the JVM: if we don't account for the additional overhead, at some point it will blow up and it won't be pretty, especially if we don't account for the internal size of the data structure which holds the cache and other overhead like keys and their containers. Can we claim with certainty that at some capacity its actual size in memory is not going to be 2x or 3x? If yes, then let's leave it like it is today; otherwise we need to do something about it right away.

bq. I'm sorry, I do not understand the problem – the code only relies on the position of the buffer and since the buffer is cleared before the read, an end of stream (and only that) will result in an empty buffer; both read() and readByte() interpret this correctly.

Sorry, what I meant is that we might want to be more conservative and indicate early that the requested length is bigger than the number of available bytes; we have already had a couple of bugs which were hard to debug because EOFException doesn't provide any useful information...

bq. I had added a return of the passed buffer for convenience but it also adds possibility for error – changed the return of the method to void. On the other point, it does not make sense for the callee to return an (aligned) offset as the caller may need to have better control over positioning before allocating the buffer – caching rebufferers, specifically, do.

and

bq.
This wasn't the case even before this ticket. When RAR requests rebuffering at a certain position, it can either have its buffer filled (direct case), or receive a view of a shared buffer that holds the data (mem-mapped case). There was a lot of clumsiness in RAR to handle the question of which of these is the case, does it own its buffer, should it be allocated or freed. The patch addresses this clumsiness as well as allowing for another type of advantageous buffer management.

I understand; I actually started with a proposal to return "void", but I changed it later on because I saw a possibility to unify bufferless with the other implementations. Essentially the question is where the original data comes from: directly from the channel, or from an already mmap'ed buffer. So maybe if we had a common interface for both cases and used it as a backend for the rebufferer, it would simplify things instead of putting that logic into the rebufferer itself? Just something to think about...

bq. Interesting. Another possibility mentioned before is to implement compression in such a way that the compressed size matches the chunk size. Both are orthogonal and outside the scope of this ticket – let's open a new issue for that?

I'm fine if we make it a separate ticket, but I think we will have to tackle it first since it would directly affect the rebufferer/cache logic.

> In process (uncompressed) page cache
>
> Key: CASSANDRA-5863
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5863
> Project: Cassandra
> Issue Type: Sub-task
> Reporter: T Jake Luciani
> Assignee: Branimir Lambov
> Labels: performance
> Fix For: 3.x
>
> Currently, for every read, the CRAR reads each compressed chunk into a
> byte[], sends it to ICompressor, gets back another byte[] and verifies a
> checksum.
> This process is where the majority of time is spent in a read request.
> Before compression, we would have zero-copy of data and could respond
> directly from the page-cache.
> It would be useful to have some kind of Chunk cache that could speed up this > process for hot data, possibly off heap. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
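The chunk cache the ticket asks for amounts to memoizing the expensive read-decompress-verify step per (file, chunk offset). A toy LRU sketch in Python of that idea (all names are hypothetical; the real implementation is in Java and possibly off-heap):

```python
from collections import OrderedDict

class ChunkCache:
    """Toy LRU cache of uncompressed chunks keyed by (file_id, chunk_offset)."""
    def __init__(self, capacity, read_chunk):
        self.capacity = capacity
        self.read_chunk = read_chunk     # the expensive read+decompress+verify
        self._cache = OrderedDict()      # insertion order doubles as LRU order

    def get(self, file_id, offset):
        key = (file_id, offset)
        if key in self._cache:
            self._cache.move_to_end(key) # hot chunk: keep it resident
            return self._cache[key]
        data = self.read_chunk(file_id, offset)
        self._cache[key] = data
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)  # evict least recently used
        return data
```

Repeated reads of a hot chunk then skip decompression and checksum verification entirely, which is the saving the ticket is after.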
[jira] [Commented] (CASSANDRA-10091) Integrated JMX authn & authz
[ https://issues.apache.org/jira/browse/CASSANDRA-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257159#comment-15257159 ] sankalp kohli commented on CASSANDRA-10091: --- I think CASSANDRA-9755 is a dupe? > Integrated JMX authn & authz > > > Key: CASSANDRA-10091 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10091 > Project: Cassandra > Issue Type: New Feature >Reporter: Jan Karlsson >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.x > > > It would be useful to authenticate with JMX through Cassandra's internal > authentication. This would reduce the overhead of keeping passwords in files > on the machine and would consolidate passwords to one location. It would also > allow the possibility to handle JMX permissions in Cassandra. > It could be done by creating our own JMX server and setting custom classes > for the authenticator and authorizer. We could then add some parameters where > the user could specify what authenticator and authorizer to use in case they > want to make their own. > This could also be done by creating a premain method which creates a jmx > server. This would give us the feature without changing the Cassandra code > itself. However I believe this would be a good feature to have in Cassandra. > I am currently working on a solution which creates a JMX server and uses a > custom authenticator and authorizer. It is currently build as a premain, > however it would be great if we could put this in Cassandra instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11258) Repair scheduling - Resource locking API
[ https://issues.apache.org/jira/browse/CASSANDRA-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257156#comment-15257156 ] Paulo Motta commented on CASSANDRA-11258:

The only thing I mildly dislike about the interfaces is that they throw an exception when it's not possible to acquire or renew the lock; since this is quite a common case, should we use {{Optional}} and {{boolean}} instead? WDYT about this definition?

{noformat}
interface Lease
{
    long getExpiration();
    boolean renew(long duration) throws LeaseException;
    boolean cancel() throws LeaseException;
    boolean isValid();
}

interface LeaseFactory
{
    Optional<Lease> newLease(long duration, String resource, int priority, Map<String, String> metadata) throws LeaseException;
}
{noformat}

We would still throw {{LeaseException}} if some unexpected error occurs when trying to acquire, renew or cancel the lock.

bq. I think the LeaseMap (mentioned in the JINI lease spec) or a similar interface will be useful for locking multiple data centers.

Sounds good, but we can probably revisit and extend the library when adding multi-DC support.

Also, we should probably add another field "isActive" to {{resource_lease_priority}}, to avoid trying to acquire a higher-priority lock (and contending on CAS) if a lower-priority lock is currently held.

> Repair scheduling - Resource locking API
>
> Key: CASSANDRA-11258
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11258
> Project: Cassandra
> Issue Type: Sub-task
> Reporter: Marcus Olsson
> Assignee: Marcus Olsson
> Priority: Minor
>
> Create a resource locking API & implementation that is able to lock a
> resource in a specified data center. It should handle priorities to avoid
> node starvation.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
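For illustration, a minimal single-process Python sketch of the lease semantics being discussed, where the factory hands out a lease only when the resource is free and returns None for the empty-Optional case. This is hypothetical code written for this summary, not the distributed, CAS-based implementation the ticket is about (priority handling is accepted but unused here):

```python
import time

class Lease:
    """In-memory lease with an expiration, renew/cancel, and validity check."""
    def __init__(self, resource, duration):
        self.resource = resource
        self._expiration = time.time() + duration
        self._cancelled = False

    def get_expiration(self):
        return self._expiration

    def renew(self, duration):
        if not self.is_valid():
            return False                  # cannot renew an expired/cancelled lease
        self._expiration = time.time() + duration
        return True

    def cancel(self):
        self._cancelled = True
        return True

    def is_valid(self):
        return not self._cancelled and time.time() < self._expiration

class LeaseFactory:
    def __init__(self):
        self._held = {}                   # resource -> most recent Lease

    def new_lease(self, duration, resource, priority=0):
        current = self._held.get(resource)
        if current is not None and current.is_valid():
            return None                   # already leased: the "empty Optional" case
        lease = Lease(resource, duration)
        self._held[resource] = lease
        return lease
```

The point of the shape is that a contended resource is an ordinary return value rather than an exception, while genuinely unexpected failures would still raise.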
[jira] [Commented] (CASSANDRA-11432) Counter values become under-counted when running repair.
[ https://issues.apache.org/jira/browse/CASSANDRA-11432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257121#comment-15257121 ] Dikang Gu commented on CASSANDRA-11432:

[~iamaleksey], yes, I'm trying to figure out why the repair is causing problems. What I observed:

1. repair generates thousands of smaller sstables in secs, for compaction: SSTables in each level: [966/4, 20/10, 152/100, 33, 0, 0, 0, 0, 0]

2. dropped messages in the log:

2016-04-25_21:35:51.21671 INFO 21:35:51 [ScheduledTasks:1]: MUTATION messages were dropped in last 5000 ms: 0 for internal timeout and 358 for cross node timeout
2016-04-25_21:35:51.21674 INFO 21:35:51 [ScheduledTasks:1]: READ messages were dropped in last 5000 ms: 0 for internal timeout and 90 for cross node timeout
2016-04-25_21:35:51.21674 INFO 21:35:51 [ScheduledTasks:1]: COUNTER_MUTATION messages were dropped in last 5000 ms: 0 for internal timeout and 21 for cross node timeout
2016-04-25_21:35:51.21674 INFO 21:35:51 [ScheduledTasks:1]: Pool Name Active Pending Completed Blocked All Time Blocked
2016-04-25_21:35:51.21798 INFO 21:35:51 [ScheduledTasks:1]: MutationStage 0 0 1009884950 0 0
2016-04-25_21:35:51.21810 INFO 21:35:51 [ScheduledTasks:1]: ReadStage 0 0 347247977 0 0
2016-04-25_21:35:51.21828 INFO 21:35:51 [ScheduledTasks:1]: RequestResponseStage 0 0 1070811306 0 0

Do you have any advice about which part of the code I should look at? Thanks!

> Counter values become under-counted when running repair.
>
> Key: CASSANDRA-11432
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11432
> Project: Cassandra
> Issue Type: Bug
> Reporter: Dikang Gu
> Assignee: Aleksey Yeschenko
>
> We are experimenting with Counters in Cassandra 2.2.5. Our setup is that we have 6
> nodes, across three different regions, and in each region, the replication
> factor is 2. Basically, each node holds a full copy of the data.
> We are writing to the cluster with CL = 2, and reading with CL = 1.
> When we are doing 30k/s counter increments/decrements per node, and in the
> meantime, we are double-writing to our mysql tier, so that we can measure
> the accuracy of the C* counter, compared to mysql.
> The experiment result was great at the beginning; the counter values in C* and
> mysql are very close. The difference is less than 0.1%.
> But when we start to run the repair on one node, the counter value in C*
> becomes much less than the value in mysql; the difference becomes larger than
> 1%.
> My question is: is it a known problem that the counter value will become
> under-counted if repair is running? Should we avoid running repair for
> counter tables?
> Thanks.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-11432) Counter values become under-counted when running repair.
[ https://issues.apache.org/jira/browse/CASSANDRA-11432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257121#comment-15257121 ] Dikang Gu edited comment on CASSANDRA-11432 at 4/25/16 9:55 PM:

[~iamaleksey], yes, I'm trying to figure out why the repair is causing problems. What I observed:

1. repair generates thousands of smaller sstables in secs, for compaction: SSTables in each level: [966/4, 20/10, 152/100, 33, 0, 0, 0, 0, 0]

2. dropped messages in the log:

2016-04-25_21:35:51.21671 INFO 21:35:51 [ScheduledTasks:1]: MUTATION messages were dropped in last 5000 ms: 0 for internal timeout and 358 for cross node timeout
2016-04-25_21:35:51.21674 INFO 21:35:51 [ScheduledTasks:1]: READ messages were dropped in last 5000 ms: 0 for internal timeout and 90 for cross node timeout
2016-04-25_21:35:51.21674 INFO 21:35:51 [ScheduledTasks:1]: COUNTER_MUTATION messages were dropped in last 5000 ms: 0 for internal timeout and 21 for cross node timeout
2016-04-25_21:35:51.21674 INFO 21:35:51 [ScheduledTasks:1]: Pool Name Active Pending Completed Blocked All Time Blocked
2016-04-25_21:35:51.21798 INFO 21:35:51 [ScheduledTasks:1]: MutationStage 0 0 1009884950 0 0
2016-04-25_21:35:51.21810 INFO 21:35:51 [ScheduledTasks:1]: ReadStage 0 0 347247977 0 0
2016-04-25_21:35:51.21828 INFO 21:35:51 [ScheduledTasks:1]: RequestResponseStage 0 0 1070811306 0 0

Do you have any advice about which part of the code I should look at? Thanks!

was (Author: dikanggu):

[~iamaleksey], yes, I'm trying to figure out when the repair is causing problems. What I observed:

1. repair generates thousands of smaller sstables in secs, for compaction: SSTables in each level: [966/4, 20/10, 152/100, 33, 0, 0, 0, 0, 0]

2.
dropped messages in the log: 2016-04-25_21:35:51.21671 INFO 21:35:51 [ScheduledTasks:1]: MUTATION messages were dropped in last 5000 ms: 0 for internal timeout and 358 for cross node timeout 2016-04-25_21:35:51.21674 INFO 21:35:51 [ScheduledTasks:1]: READ messages were dropped in last 5000 ms: 0 for internal timeout and 90 for cross node timeout 2016-04-25_21:35:51.21674 INFO 21:35:51 [ScheduledTasks:1]: COUNTER_MUTATION messages were dropped in last 5000 ms: 0 for internal timeout and 21 for cross node timeout 2016-04-25_21:35:51.21674 INFO 21:35:51 [ScheduledTasks:1]: Pool Name Active Pending Completed Blocked All Time Blocked 2016-04-25_21:35:51.21798 INFO 21:35:51 [ScheduledTasks:1]: MutationStage 0 0 1009884950 0 0 2016-04-25_21:35:51.21799 2016-04-25_21:35:51.21810 INFO 21:35:51 [ScheduledTasks:1]: ReadStage 0 0 347247977 0 0 2016-04-25_21:35:51.21811 2016-04-25_21:35:51.21828 INFO 21:35:51 [ScheduledTasks:1]: RequestResponseStage 0 0 1070811306 0 0 Do you have any advises about which part of code I should look at? Thanks! > Counter values become under-counted when running repair. > > > Key: CASSANDRA-11432 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11432 > Project: Cassandra > Issue Type: Bug >Reporter: Dikang Gu >Assignee: Aleksey Yeschenko > > We are experimenting Counters in Cassandra 2.2.5. Our setup is that we have 6 > nodes, across three different regions, and in each region, the replication > factor is 2. Basically, each nodes holds a full copy of the data. > We are writing to cluster with CL = 2, and reading with CL = 1. > When are doing 30k/s counter increment/decrement per node, and at the > meanwhile, we are double writing to our mysql tier, so that we can measure > the accuracy of C* counter, compared to mysql. > The experiment result was great at the beginning, the counter value in C* and > mysql are very close. The difference is less than 0.1%. 
> But when we start to run repair on one node, the counter values in C* > become much smaller than the values in mysql; the difference grows larger than > 1%. > My question: is it a known problem that counter values become > under-counted while repair is running? Should we avoid running repair on > counter tables? > Thanks. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11600) Don't require HEAP_NEW_SIZE to be set when using G1
[ https://issues.apache.org/jira/browse/CASSANDRA-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257114#comment-15257114 ] Blake Eggleston commented on CASSANDRA-11600: - I've updated the linked branches. Changing the conditional that way would cause it to fail in the case that you're using CMS, but haven't set either MAX_HEAP_SIZE or HEAP_NEWSIZE. I did remove the nested conditional by making it an elif block though. Let me know if that works for you. I've also updated cassandra-env.ps1... although this is my first time writing powershell, and I don't have a windows machine to test on. All I can say for sure about that is that I've done my best :) > Don't require HEAP_NEW_SIZE to be set when using G1 > --- > > Key: CASSANDRA-11600 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11600 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 3.6, 3.0.x > > > Although cassandra-env.sh doesn't set -Xmn (unless set in jvm.options) when > using G1GC, it still requires that you set HEAP_NEW_SIZE and MAX_HEAP_SIZE > together, and won't start until you do. Since we ignore that setting if > you're using G1, we shouldn't require that the user set it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11652) (windows) dtest failure in replace_address_test.TestReplaceAddress.replace_stopped_node_test
Russ Hatch created CASSANDRA-11652: -- Summary: (windows) dtest failure in replace_address_test.TestReplaceAddress.replace_stopped_node_test Key: CASSANDRA-11652 URL: https://issues.apache.org/jira/browse/CASSANDRA-11652 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng flapping recently, with more failures in recent tests: http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/228/testReport/replace_address_test/TestReplaceAddress/replace_stopped_node_test Failed on CassCI build cassandra-2.2_dtest_win32 #228 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-11484) Consistency issues with subsequent writes, deletes and reads
[ https://issues.apache.org/jira/browse/CASSANDRA-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Knighton resolved CASSANDRA-11484. --- Resolution: Not A Problem Thanks for the report [~prashanth123] - I'm unable to reproduce this issue with the environment and test case you've described. This is likely just a case of a mismatch in expectations. You should only expect to find the record deleted if the timestamp of the delete is greater than that of the initial insert for the record. It could be the case that the create and delete are handled by different coordinators whose clocks are out of sync, so that the insert is timestamped after the delete (or a similar situation). If the sstables are available to you, you may be able to confirm this by using a tool like sstable2json to look at the timestamps of the insert and delete. It may be helpful to research Cassandra data models and other techniques (such as client-side timestamps) that can mitigate the above concerns in some circumstances. If you are still able to reproduce this and strongly believe it isn't due to the above timestamp behavior, please reopen and we'll try to track it down further. > Consistency issues with subsequent writes, deletes and reads > > > Key: CASSANDRA-11484 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11484 > Project: Cassandra > Issue Type: Bug > Environment: Cassandra version: DataStax Enterprise 4.7.1 > Driver version: cassandra-driver-core-2.1.7.1 >Reporter: Prashanth >Assignee: Joel Knighton > Attachments: CassandraDbCheckAppTest.java > > > There have been intermittent failures when the following subsequent queries > are performed on a 4 node cluster: > 1. Insert a few records with consistency level QUORUM > 2. Delete one of the records with consistency level ALL > 3. 
Retrieve all the records with consistency level QUORUM or ALL and test > that the deleted record does not exist > The tests are failing because the record does not appear to be deleted, and a > pattern for the failures couldn't be established. > A snippet of the code is attached to this issue so that the setup/tear-down > mechanism can be seen as well. > (Both truncating and dropping the table were used as a tear-down > mechanism) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
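The timestamp rule in Joel's resolution (a delete only hides a record when its timestamp is greater than the insert's) boils down to last-write-wins reconciliation. The sketch below is an illustrative model, not Cassandra's actual code; note that on an exact timestamp tie, the tombstone wins.

```java
// Minimal model of last-write-wins reconciliation between an insert and a
// delete (tombstone). Illustrative only; not Cassandra's implementation.
public class LastWriteWins {
    // The row stays visible only if the insert was timestamped strictly
    // after the delete; on a tie, the tombstone wins.
    static boolean rowVisible(long insertTimestampMicros, long deleteTimestampMicros) {
        return insertTimestampMicros > deleteTimestampMicros;
    }

    public static void main(String[] args) {
        // In-sync clocks: the delete is timestamped after the insert,
        // so the row is hidden.
        System.out.println(rowVisible(1_000L, 2_000L)); // prints false
        // Skewed coordinator clocks: the insert got the later timestamp,
        // so the "deleted" row is still returned -- the mismatch described above.
        System.out.println(rowVisible(2_000L, 1_000L)); // prints true
    }
}
```

With out-of-sync coordinators, the second case is exactly what the reported test observes: the delete succeeded at CL ALL, yet the record is still read back.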
[jira] [Commented] (CASSANDRA-11402) Alignment wrong in tpstats output for PerDiskMemtableFlushWriter
[ https://issues.apache.org/jira/browse/CASSANDRA-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15257093#comment-15257093 ] Joel Knighton commented on CASSANDRA-11402: --- That looks like the right approach - as I mentioned in the comment above, it would be great if we also fixed the output in org.apache.cassandra.utils.StatusLogger. That said, since he is Assignee, let me check with [~RyanMagnusson] - do you have a problem with [~nkelkar] taking over this ticket or would you prefer to revise your patch based on my comment above? > Alignment wrong in tpstats output for PerDiskMemtableFlushWriter > > > Key: CASSANDRA-11402 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11402 > Project: Cassandra > Issue Type: Bug >Reporter: Joel Knighton >Assignee: Ryan Magnusson >Priority: Trivial > Labels: lhf > Fix For: 3.x > > Attachments: 11402-3_5_patch1.patch, 11402-trunk.txt > > > With the accompanying designation of which memtableflushwriter it is, this > threadpool name is too long for the hardcoded padding in tpstats output. > We should dynamically calculate padding so that we don't need to check this > every time we add a threadpool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
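The fix the ticket describes, sizing the name column from the pool names actually present instead of a hardcoded width, can be sketched as follows. This is a standalone illustration, not the attached tpstats patch, and the pool names are just samples.

```java
import java.util.Arrays;
import java.util.List;

// Standalone sketch of width-aware tpstats-style output: the name column is
// sized from the longest pool name rather than a hardcoded constant.
public class TpStatsPadding {
    static String formatRow(List<String> poolNames, String name, long active, long pending) {
        // Longest name plus a two-space gutter keeps the numeric columns
        // aligned no matter which thread pools exist.
        int width = poolNames.stream().mapToInt(String::length).max().orElse(0) + 2;
        return String.format("%-" + width + "s%10d%10d", name, active, pending);
    }

    public static void main(String[] args) {
        List<String> pools = Arrays.asList(
                "MutationStage", "ReadStage", "PerDiskMemtableFlushWriter_0");
        for (String p : pools)
            System.out.println(formatRow(pools, p, 0, 0));
    }
}
```

Because the width is recomputed from the input, adding a new pool with a longer name never breaks the alignment.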
[jira] [Comment Edited] (CASSANDRA-11600) Don't require HEAP_NEW_SIZE to be set when using G1
[ https://issues.apache.org/jira/browse/CASSANDRA-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250757#comment-15250757 ] Blake Eggleston edited comment on CASSANDRA-11600 at 4/25/16 9:07 PM: -- | *3.0* | *trunk* | | [branch|https://github.com/bdeggleston/cassandra/tree/11600-3.0-2] | [branch|https://github.com/bdeggleston/cassandra/tree/11600-trunk-2] | | [testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11600-3.0-2-testall/1/] | [testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11600-trunk-2-testall/1/] | | [dtest|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11600-3.0-2-dtest/1/] | [dtest|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11600-trunk-2-dtest/1/] | commit info: should merge cleanly was (Author: bdeggleston): | *3.0* | *trunk* | | [branch|https://github.com/bdeggleston/cassandra/tree/11600-3.0] | [branch|https://github.com/bdeggleston/cassandra/tree/11600-trunk] | | [testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11600-3.0-2-testall/1/] | [testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11600-trunk-2-testall/1/] | | [dtest|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11600-3.0-2-dtest/1/] | [dtest|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11600-trunk-2-dtest/1/] | commit info: should merge cleanly > Don't require HEAP_NEW_SIZE to be set when using G1 > --- > > Key: CASSANDRA-11600 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11600 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 3.6, 3.0.x > > > Although cassandra-env.sh doesn't set -Xmn (unless set in jvm.options) when > using G1GC, it still requires that you set HEAP_NEW_SIZE and MAX_HEAP_SIZE > together, and won't start until you do. 
Since we ignore that setting if > you're using G1, we shouldn't require that the user set it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11402) Alignment wrong in tpstats output for PerDiskMemtableFlushWriter
[ https://issues.apache.org/jira/browse/CASSANDRA-11402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nishant Kelkar updated CASSANDRA-11402: --- Attachment: 11402-3_5_patch1.patch Hi [~jkni], uploading a patch as you mentioned in your comment above (I've actually been following this ticket for some time, didn't get to uploading the patch in time before Ryan's solution; sorry about that). Thanks, and let me know if this is fine. > Alignment wrong in tpstats output for PerDiskMemtableFlushWriter > > > Key: CASSANDRA-11402 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11402 > Project: Cassandra > Issue Type: Bug >Reporter: Joel Knighton >Assignee: Ryan Magnusson >Priority: Trivial > Labels: lhf > Fix For: 3.x > > Attachments: 11402-3_5_patch1.patch, 11402-trunk.txt > > > With the accompanying designation of which memtableflushwriter it is, this > threadpool name is too long for the hardcoded padding in tpstats output. > We should dynamically calculate padding so that we don't need to check this > every time we add a threadpool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11648) dtest failure in json_test.FromJsonInsertTests.basic_data_types_test
Russ Hatch created CASSANDRA-11648: -- Summary: dtest failure in json_test.FromJsonInsertTests.basic_data_types_test Key: CASSANDRA-11648 URL: https://issues.apache.org/jira/browse/CASSANDRA-11648 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng example failure: http://cassci.datastax.com/job/cassandra-2.2_dtest/585/testReport/json_test/FromJsonInsertTests/basic_data_types_test Failed on CassCI build cassandra-2.2_dtest #585 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11651) dtest failure in json_test.ToJsonSelectTests.basic_data_types_test
Russ Hatch created CASSANDRA-11651: -- Summary: dtest failure in json_test.ToJsonSelectTests.basic_data_types_test Key: CASSANDRA-11651 URL: https://issues.apache.org/jira/browse/CASSANDRA-11651 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng example failure: http://cassci.datastax.com/job/cassandra-2.2_dtest/585/testReport/json_test/ToJsonSelectTests/basic_data_types_test Failed on CassCI build cassandra-2.2_dtest #585 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11649) dtest failure in json_test.JsonFullRowInsertSelect.simple_schema_test
Russ Hatch created CASSANDRA-11649: -- Summary: dtest failure in json_test.JsonFullRowInsertSelect.simple_schema_test Key: CASSANDRA-11649 URL: https://issues.apache.org/jira/browse/CASSANDRA-11649 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng example failure: http://cassci.datastax.com/job/cassandra-2.2_dtest/585/testReport/json_test/JsonFullRowInsertSelect/simple_schema_test Failed on CassCI build cassandra-2.2_dtest #585 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11650) dtest failure in json_test.ToJsonSelectTests.complex_data_types_test
Russ Hatch created CASSANDRA-11650: -- Summary: dtest failure in json_test.ToJsonSelectTests.complex_data_types_test Key: CASSANDRA-11650 URL: https://issues.apache.org/jira/browse/CASSANDRA-11650 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng example failure: http://cassci.datastax.com/job/cassandra-2.2_dtest/585/testReport/json_test/ToJsonSelectTests/complex_data_types_test Failed on CassCI build cassandra-2.2_dtest #585 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11647) Don't use static dataDirectories field in Directories instances
[ https://issues.apache.org/jira/browse/CASSANDRA-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-11647: -- Reviewer: Aleksey Yeschenko > Don't use static dataDirectories field in Directories instances > --- > > Key: CASSANDRA-11647 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11647 > Project: Cassandra > Issue Type: Improvement >Reporter: Blake Eggleston >Assignee: Blake Eggleston > Fix For: 3.6 > > > Some of the changes to Directories by CASSANDRA-6696 use the static > {{dataDirectories}} field, instead of the instance field {{paths}}. This > complicates things for external code creating their own Directories instances. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11647) Don't use static dataDirectories field in Directories instances
[ https://issues.apache.org/jira/browse/CASSANDRA-11647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256972#comment-15256972 ] Blake Eggleston commented on CASSANDRA-11647: - | *trunk* | | [branch|https://github.com/bdeggleston/cassandra/tree/11647] | | [dtests|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11647-dtest/1/] | | [testall|http://cassci.datastax.com/view/Dev/view/bdeggleston/job/bdeggleston-11647-testall/1/] | > Don't use static dataDirectories field in Directories instances > --- > > Key: CASSANDRA-11647 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11647 > Project: Cassandra > Issue Type: Improvement >Reporter: Blake Eggleston >Assignee: Blake Eggleston > Fix For: 3.6 > > > Some of the changes to Directories by CASSANDRA-6696 use the static > {{dataDirectories}} field, instead of the instance field {{paths}}. This > complicates things for external code creating their own Directories instances. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11637) Immutable-friendly read consistency level
[ https://issues.apache.org/jira/browse/CASSANDRA-11637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256949#comment-15256949 ] lvh commented on CASSANDRA-11637: - I have no opinions on the rarity of the use-case for C*, other than "I've hit it with Riak" and "it was important enough for Riak to implement it"; it's entirely possible that it's not worth adding the feature at all, either because it's not worth it in Riak or Riak is sufficiently different from C* in a way that makes this feature useless. The use case is certainly narrow: it's for the subset of things where you care about some amount of low latency, read-(immediately)-after-write, and high availability (since this will return success in more cases than e.g. upgrading the consistency level will), and you're willing to amend your data model (immutability) in order to get it. Many of those seem like Cassandra features, though -- hence why I filed the issue :) The main benefit other than the "works in more cases" and "is faster" benefits outlined above is that solving this at a coordinator level would mean not having to solve it in the application or in each driver. It's of course true that the application itself can do whatever it wants, but that's generally true (and IIRC also true for other consistency levels; having the coordinator do it is mostly just a convenience). You raise an excellent point re: the combinatorial explosion that can result from parametrization. I think the common case (how a hypothetical consistency level would behave) is: 1. Try hitting one node 2. Try hitting a QUORUM's worth of nodes in the local DC 3. Try hitting the remaining DCs It should definitely fail after 1 expired TTL (the data is intended to be immutable, and only works for that specific case). Also, for exactly 1 row; this is intended for KV-y things; although that's a more interesting point... 
I guess the real remaining question is "is it worth implementing the common case as a consistency level" vs "just solve this in the driver/application". It's unclear to me how much parametrization people actually want, and if maybe it's possible to cover 90% of cases with 1 consistency level, which would have the benefit that a lot more people get to use this model faster. > Immutable-friendly read consistency level > -- > > Key: CASSANDRA-11637 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11637 > Project: Cassandra > Issue Type: New Feature >Reporter: lvh >Priority: Minor > > Many Cassandra applications use immutable, append-only models. For those > models, you can accept read consistency {{ONE}}, since the data either exists > (and then it's the data you want) or it doesn't. However, it's possible that > the data hasn't made it to that node yet, so "missing" data might mean > "actually missing" or "not here". > Riak has a convenient read consistency option for this, called notfound_ok > (default true). When false, the first succeeding read will succeed the > operation (a la consistency level {{ONE}}), but a missing read from any node > will keep trying up to the normal consistency level (e.g. {{QUORUM}}). > The workaround for this is for applications to implement an > "UpgradingConsistencyPolicy" (dual to DowngradingConsistencyPolicy) that > tries e.g. {{QUORUM}} after {{ONE}} fails, and then writes with e.g. > {{QUORUM}}. > This is related to CASSANDRA-9779; but it seems that ticket only explores the > compaction/collation/materialized view angle, not the fast & safe read > consistency angle. > Thanks to [~jjirsa] for helping me dig through this, find the related > ticket, and confirm Cassandra currently does not support this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
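The three-step escalation lvh outlines (one node, then local quorum, then the remaining DCs) can be sketched independently of any driver. In the sketch below, `CL` and `readAt` are hypothetical stand-ins, not the DataStax driver API; a real "UpgradingConsistencyPolicy" would instead hook into the driver's retry machinery.

```java
import java.util.Optional;
import java.util.function.Function;

// Sketch of a notfound_ok-style upgrading read. CL and readAt are
// hypothetical stand-ins, not a real driver API.
public class NotFoundUpgradeRead {
    enum CL { ONE, LOCAL_QUORUM, EACH_QUORUM }

    // Escalates only on a miss: data found at ONE is trusted immediately
    // because the data model is immutable/append-only.
    static Optional<String> read(Function<CL, Optional<String>> readAt) {
        Optional<String> r = readAt.apply(CL.ONE);      // 1. try one node
        if (r.isPresent()) return r;
        r = readAt.apply(CL.LOCAL_QUORUM);              // 2. local DC quorum
        if (r.isPresent()) return r;
        return readAt.apply(CL.EACH_QUORUM);            // 3. remaining DCs
    }

    public static void main(String[] args) {
        // Simulate a row that has not yet reached the single replica hit at ONE.
        Optional<String> result = read(cl ->
                cl == CL.ONE ? Optional.empty() : Optional.of("v"));
        System.out.println(result.orElse("not found")); // the quorum read finds it
    }
}
```

Note how this captures both selling points from the comment: a hit at ONE stays fast, while a miss keeps trying before reporting not-found.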
[jira] [Commented] (CASSANDRA-4658) Explore improved vnode-aware replication strategy
[ https://issues.apache.org/jira/browse/CASSANDRA-4658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256943#comment-15256943 ] Jeremy Hanna commented on CASSANDRA-4658: - The resolution of duplicate is only for improving the existing support. The improvement has to do with hot spotting as a result of the random token assignment as talked about in the description. An obvious way to do that is to explore using a distribution factor - hence the resolution of duplicate. To be clear, the replication itself with vnodes + racks works fine with the caveat that you need an equal number of nodes in each rack to avoid hotspotting. That is, with the way that replication happens within a replica set, it will ensure that all replicas are not in a single rack. > Explore improved vnode-aware replication strategy > - > > Key: CASSANDRA-4658 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4658 > Project: Cassandra > Issue Type: New Feature >Affects Versions: 1.2.0 beta 1 >Reporter: Nick Bailey > > It doesn't look like the current vnode placement strategy will work with > people using NTS and multiple racks. > For reasons also described on CASSANDRA-3810, using racks and NTS requires > tokens to alternate racks around the ring in order to get an even > distribution of data. The current solution for upgrading/placing vnodes won't > take this into account and will likely generate some hotspots around the > ring. > Not sure what the best solution is. The two immediately obvious approaches > appear to be quite complicated at first. > * Fixing NTS to remove the requirement for rack ordering > ** No idea how this would be accomplished > ** Presents challenges for people upgrading. Would need to deprecate NTS for > a new strategy that replaces it, then have a clear upgrade path to that > strategy which would need to be in a pre 1.2 release. > * Changing vnode placement strategy > ** Ordering vnodes would require quite a bit of additional logic. 
Adding a > new node or rack would also need to maintain ordering which could cause > enough data movement to remove any benefits vnodes have already. > ** We could potentially adjust vnode token placement to offset any imbalances > caused by NTS but this would require a fairly intelligent mechanism for > determining vnode placement. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11647) Don't use static dataDirectories field in Directories instances
Blake Eggleston created CASSANDRA-11647: --- Summary: Don't use static dataDirectories field in Directories instances Key: CASSANDRA-11647 URL: https://issues.apache.org/jira/browse/CASSANDRA-11647 Project: Cassandra Issue Type: Improvement Reporter: Blake Eggleston Assignee: Blake Eggleston Fix For: 3.6 Some of the changes to Directories by CASSANDRA-6696 use the static {{dataDirectories}} field, instead of the instance field {{paths}}. This complicates things for external code creating their own Directories instances. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10853) deb package migration to dh_python2
[ https://issues.apache.org/jira/browse/CASSANDRA-10853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-10853: --- Status: Patch Available (was: In Progress) > deb package migration to dh_python2 > --- > > Key: CASSANDRA-10853 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10853 > Project: Cassandra > Issue Type: Task > Components: Packaging >Reporter: Michael Shuler >Assignee: Michael Shuler > Fix For: 3.0.x, 3.x > > Attachments: 10853_2.1.txt, 10853_3.0.txt > > > I'm working on a deb job in jenkins, and I had forgotten to open a bug for > this. There is no urgent need, since {{python-support}} is in Jessie, but > this package is currently in transition to be removed. > http://deb.li/dhs2p > During deb build: > {noformat} > dh_pysupport: This program is deprecated, you should use dh_python2 instead. > Migration guide: http://deb.li/dhs2p > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11489) DynamicCompositeType failures during 2.1 to 3.0 upgrade.
[ https://issues.apache.org/jira/browse/CASSANDRA-11489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256902#comment-15256902 ] Aleksey Yeschenko commented on CASSANDRA-11489: --- Can you still reproduce this, [~ht...@datastax.com]? > DynamicCompositeType failures during 2.1 to 3.0 upgrade. > > > Key: CASSANDRA-11489 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11489 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Jeremiah Jordan >Assignee: Aleksey Yeschenko > Fix For: 3.0.x, 3.x > > > When upgrading from 2.1.13 to 3.0.4+some (hash > 70eab633f289eb1e4fbe47b3e17ff3203337f233), we are seeing the following > exceptions on 2.1 nodes after other nodes have been upgraded, with tables > using DynamicCompositeType in use. The workload runs fine once everything is > upgraded. > {code} > ERROR [MessagingService-Incoming-/10.200.182.2] 2016-04-03 21:49:10,531 > CassandraDaemon.java:229 - Exception in thread > Thread[MessagingService-Incoming-/10.200.182.2,5,main] > java.lang.RuntimeException: java.nio.charset.MalformedInputException: Input > length = 1 > at > org.apache.cassandra.db.marshal.DynamicCompositeType.getAndAppendComparator(DynamicCompositeType.java:181) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.db.marshal.AbstractCompositeType.getString(AbstractCompositeType.java:200) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.cql3.ColumnIdentifier.<init>(ColumnIdentifier.java:54) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.db.composites.SimpleSparseCellNameType.fromByteBuffer(SimpleSparseCellNameType.java:83) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:398) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:382) > 
~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.db.RangeTombstoneList$Serializer.deserialize(RangeTombstoneList.java:843) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.db.DeletionInfo$Serializer.deserialize(DeletionInfo.java:407) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:105) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:89) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at org.apache.cassandra.db.Row$RowSerializer.deserialize(Row.java:73) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.db.ReadResponseSerializer.deserialize(ReadResponse.java:116) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.db.ReadResponseSerializer.deserialize(ReadResponse.java:88) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:195) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:172) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:88) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > Caused by: java.nio.charset.MalformedInputException: Input length = 1 > at java.nio.charset.CoderResult.throwException(CoderResult.java:281) > ~[na:1.8.0_40] > at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:816) > ~[na:1.8.0_40] > at > org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:152) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > 
org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:109) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > at > org.apache.cassandra.db.marshal.DynamicCompositeType.getAndAppendComparator(DynamicCompositeType.java:169) > ~[cassandra-all-2.1.13.1131.jar:2.1.13.1131] > ... 16 common frames omitted > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10853) deb package migration to dh_python2
[ https://issues.apache.org/jira/browse/CASSANDRA-10853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-10853: --- Reviewer: T Jake Luciani > deb package migration to dh_python2 > --- > > Key: CASSANDRA-10853 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10853 > Project: Cassandra > Issue Type: Task > Components: Packaging >Reporter: Michael Shuler >Assignee: Michael Shuler > Fix For: 2.1.15, 3.6, 3.0.6, 2.2.7 > > Attachments: 10853_2.1.txt, 10853_3.0.txt > > > I'm working on a deb job in jenkins, and I had forgotten to open a bug for > this. There is no urgent need, since {{python-support}} is in Jessie, but > this package is currently in transition to be removed. > http://deb.li/dhs2p > During deb build: > {noformat} > dh_pysupport: This program is deprecated, you should use dh_python2 instead. > Migration guide: http://deb.li/dhs2p > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10853) deb package migration to dh_python2
[ https://issues.apache.org/jira/browse/CASSANDRA-10853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-10853: --- Fix Version/s: (was: 3.0.x) (was: 3.x) 2.2.7 3.0.6 3.6 2.1.15 > deb package migration to dh_python2 > --- > > Key: CASSANDRA-10853 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10853 > Project: Cassandra > Issue Type: Task > Components: Packaging >Reporter: Michael Shuler >Assignee: Michael Shuler > Fix For: 2.1.15, 3.6, 3.0.6, 2.2.7 > > Attachments: 10853_2.1.txt, 10853_3.0.txt > > > I'm working on a deb job in jenkins, and I had forgotten to open a bug for > this. There is no urgent need, since {{python-support}} is in Jessie, but > this package is currently in transition to be removed. > http://deb.li/dhs2p > During deb build: > {noformat} > dh_pysupport: This program is deprecated, you should use dh_python2 instead. > Migration guide: http://deb.li/dhs2p > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10853) deb package migration to dh_python2
[ https://issues.apache.org/jira/browse/CASSANDRA-10853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-10853: --- Attachment: 10853_3.0.txt 10853_2.1.txt attached for cassandra-2.1 and cassandra-2.2 branches 10853_3.0.txt attached for cassandra-3.0 and trunk branches > deb package migration to dh_python2 > --- > > Key: CASSANDRA-10853 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10853 > Project: Cassandra > Issue Type: Task > Components: Packaging >Reporter: Michael Shuler >Assignee: Michael Shuler > Fix For: 3.0.x, 3.x > > Attachments: 10853_2.1.txt, 10853_3.0.txt > > > I'm working on a deb job in jenkins, and I had forgotten to open a bug for > this. There is no urgent need, since {{python-support}} is in Jessie, but > this package is currently in transition to be removed. > http://deb.li/dhs2p > During deb build: > {noformat} > dh_pysupport: This program is deprecated, you should use dh_python2 instead. > Migration guide: http://deb.li/dhs2p > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10853) deb package migration to dh_python2
[ https://issues.apache.org/jira/browse/CASSANDRA-10853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-10853: --- Attachment: 10853_2.1.txt > deb package migration to dh_python2 > --- > > Key: CASSANDRA-10853 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10853 > Project: Cassandra > Issue Type: Task > Components: Packaging >Reporter: Michael Shuler >Assignee: Michael Shuler > Fix For: 3.0.x, 3.x > > Attachments: 10853_2.1.txt > > > I'm working on a deb job in jenkins, and I had forgotten to open a bug for > this. There is no urgent need, since {{python-support}} is in Jessie, but > this package is currently in transition to be removed. > http://deb.li/dhs2p > During deb build: > {noformat} > dh_pysupport: This program is deprecated, you should use dh_python2 instead. > Migration guide: http://deb.li/dhs2p > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10853) deb package migration to dh_python2
[ https://issues.apache.org/jira/browse/CASSANDRA-10853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-10853: --- Attachment: (was: 10853_minimal_wip.txt) > deb package migration to dh_python2 > --- > > Key: CASSANDRA-10853 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10853 > Project: Cassandra > Issue Type: Task > Components: Packaging >Reporter: Michael Shuler >Assignee: Michael Shuler > Fix For: 3.0.x, 3.x > > > I'm working on a deb job in jenkins, and I had forgotten to open a bug for > this. There is no urgent need, since {{python-support}} is in Jessie, but > this package is currently in transition to be removed. > http://deb.li/dhs2p > During deb build: > {noformat} > dh_pysupport: This program is deprecated, you should use dh_python2 instead. > Migration guide: http://deb.li/dhs2p > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-8467) Monitoring UDFs
[ https://issues.apache.org/jira/browse/CASSANDRA-8467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christopher Batey reassigned CASSANDRA-8467: Assignee: Christopher Batey > Monitoring UDFs > --- > > Key: CASSANDRA-8467 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8467 > Project: Cassandra > Issue Type: New Feature > Components: Observability >Reporter: Robert Stupp >Assignee: Christopher Batey >Priority: Minor > Labels: tracing, udf > > This ticket is about adding UDF executions to session tracing. > Tracing these parameters for UDF invocations could become very interesting: > * name of UDF > * # of invocations > * # of rejected executions > * min/max/avg execution times > "Rejected executions" would count UDFs that are not executed because an input > parameter is null/empty (CASSANDRA-8374). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
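The metrics listed in the ticket (invocation count, rejected executions, min/max/avg execution time) could live in a small per-UDF tracker. The class below is a hypothetical sketch of that shape, not Cassandra's implementation; all names are made up.

```java
// Hypothetical per-UDF tracker for the tracing metrics listed above:
// invocations, rejected executions, and min/max/avg execution time.
public class UdfMetrics {
    private long invocations;
    private long rejected;                 // skipped, e.g. for null/empty input
    private long totalNanos;
    private long minNanos = Long.MAX_VALUE;
    private long maxNanos;

    public synchronized void recordExecution(long elapsedNanos) {
        invocations++;
        totalNanos += elapsedNanos;
        minNanos = Math.min(minNanos, elapsedNanos);
        maxNanos = Math.max(maxNanos, elapsedNanos);
    }

    public synchronized void recordRejected() { rejected++; }

    public synchronized long invocations() { return invocations; }
    public synchronized long rejected()    { return rejected; }
    public synchronized long minNanos()    { return invocations == 0 ? 0 : minNanos; }
    public synchronized long maxNanos()    { return maxNanos; }
    public synchronized double avgNanos()  {
        return invocations == 0 ? 0 : (double) totalNanos / invocations;
    }

    public static void main(String[] args) {
        UdfMetrics m = new UdfMetrics();
        m.recordExecution(10_000);
        m.recordExecution(30_000);
        m.recordRejected();   // e.g. a null input argument (CASSANDRA-8374)
        System.out.println("invocations=" + m.invocations()
                + " rejected=" + m.rejected()
                + " min=" + m.minNanos() + " max=" + m.maxNanos()
                + " avg=" + m.avgNanos());
    }
}
```

A real integration would attach one such tracker per UDF name and fold its snapshot into the session trace events.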
[jira] [Commented] (CASSANDRA-10091) Integrated JMX authn & authz
[ https://issues.apache.org/jira/browse/CASSANDRA-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256844#comment-15256844 ] Nate McCall commented on CASSANDRA-10091: - No worries! Thanks for the update. > Integrated JMX authn & authz > > > Key: CASSANDRA-10091 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10091 > Project: Cassandra > Issue Type: New Feature >Reporter: Jan Karlsson >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.x > > > It would be useful to authenticate with JMX through Cassandra's internal > authentication. This would reduce the overhead of keeping passwords in files > on the machine and would consolidate passwords to one location. It would also > allow the possibility to handle JMX permissions in Cassandra. > It could be done by creating our own JMX server and setting custom classes > for the authenticator and authorizer. We could then add some parameters where > the user could specify what authenticator and authorizer to use in case they > want to make their own. > This could also be done by creating a premain method which creates a jmx > server. This would give us the feature without changing the Cassandra code > itself. However I believe this would be a good feature to have in Cassandra. > I am currently working on a solution which creates a JMX server and uses a > custom authenticator and authorizer. It is currently built as a premain, > however it would be great if we could put this in Cassandra instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
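The "custom classes for the authenticator" idea in the ticket description can be illustrated with the standard javax.management.remote API. This is a hypothetical minimal sketch (the class name and the in-memory credential map are invented, not the attached patch); a real integration would delegate the password check to Cassandra's internal authentication, which is the point of this ticket:

```java
import javax.management.remote.JMXAuthenticator;
import javax.security.auth.Subject;
import java.security.Principal;
import java.util.Map;

// Hypothetical sketch: a JMXAuthenticator backed by an in-memory credential
// map. JMX remote clients conventionally pass credentials as a
// String[]{username, password}; a failed check must throw SecurityException.
public class MapBackedJmxAuthenticator implements JMXAuthenticator {
    private final Map<String, String> credentials;

    public MapBackedJmxAuthenticator(Map<String, String> credentials) {
        this.credentials = credentials;
    }

    @Override
    public Subject authenticate(Object creds) {
        if (!(creds instanceof String[]) || ((String[]) creds).length != 2)
            throw new SecurityException("Expected String[]{username, password}");
        final String[] pair = (String[]) creds;
        String expected = credentials.get(pair[0]);
        if (expected == null || !expected.equals(pair[1]))
            throw new SecurityException("Authentication failed for " + pair[0]);
        // On success, return a Subject carrying the authenticated principal.
        Subject subject = new Subject();
        subject.getPrincipals().add(new Principal() {
            @Override public String getName() { return pair[0]; }
        });
        return subject;
    }
}
```

Such an authenticator would be registered when building the connector server, e.g. placing it in the environment map under `JMXConnectorServer.AUTHENTICATOR` before calling `JMXConnectorServerFactory.newJMXConnectorServer(...)`.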
[jira] [Updated] (CASSANDRA-11642) sstabledump and sstableverify need to be added to deb packages
[ https://issues.apache.org/jira/browse/CASSANDRA-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-11642: --- Fix Version/s: 2.2.7 > sstabledump and sstableverify need to be added to deb packages > -- > > Key: CASSANDRA-11642 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11642 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: OS: Debian > Cassandra 3.5 >Reporter: Attila Szucs >Assignee: T Jake Luciani > Fix For: 3.6, 3.0.6, 2.2.7 > > Attachments: CASSANDRA-11642.txt, CASSANDRA-11642_2.2.txt > > > Command-line tool sstabledump is not installed on Debian. > I used the following source: > {code} > deb http://www.apache.org/dist/cassandra/debian 35x main > {code} > with the following installation commands: > {code} > sudo apt-get install cassandra > sudo apt-get install cassandra-tools > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11642) sstabledump is not installed with cassandra-tools
[ https://issues.apache.org/jira/browse/CASSANDRA-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-11642: --- Attachment: CASSANDRA-11642_2.2.txt CASSANDRA-11642_2.2.txt attached for cassandra-2.2 branch CASSANDRA-11642.txt is for cassandra-3.0 and up > sstabledump is not installed with cassandra-tools > - > > Key: CASSANDRA-11642 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11642 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: OS: Debian > Cassandra 3.5 >Reporter: Attila Szucs >Assignee: T Jake Luciani > Fix For: 3.6, 3.0.6 > > Attachments: CASSANDRA-11642.txt, CASSANDRA-11642_2.2.txt > > > Command-line tool sstabledump is not installed on Debian. > I used the following source: > {code} > deb http://www.apache.org/dist/cassandra/debian 35x main > {code} > with the following installation commands: > {code} > sudo apt-get install cassandra > sudo apt-get install cassandra-tools > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11642) sstabledump and sstableverify need to be added to deb packages
[ https://issues.apache.org/jira/browse/CASSANDRA-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-11642: --- Summary: sstabledump and sstableverify need to be added to deb packages (was: sstabledump is not installed with cassandra-tools) > sstabledump and sstableverify need to be added to deb packages > -- > > Key: CASSANDRA-11642 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11642 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: OS: Debian > Cassandra 3.5 >Reporter: Attila Szucs >Assignee: T Jake Luciani > Fix For: 3.6, 3.0.6 > > Attachments: CASSANDRA-11642.txt, CASSANDRA-11642_2.2.txt > > > Command-line tool sstabledump is not installed on Debian. > I used the following source: > {code} > deb http://www.apache.org/dist/cassandra/debian 35x main > {code} > with the following installation commands: > {code} > sudo apt-get install cassandra > sudo apt-get install cassandra-tools > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10091) Integrated JMX authn & authz
[ https://issues.apache.org/jira/browse/CASSANDRA-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256837#comment-15256837 ] Sam Tunnicliffe commented on CASSANDRA-10091: - [~zznate] sorry, the delay in committing is on me. I was planning to do a final bit of testing with non-openjdk jvms before commit, but getting that set up has taken a bit longer than I'd hoped. I'll commit in the next day or two & open a follow up ticket if I haven't managed to do it by then. > Integrated JMX authn & authz > > > Key: CASSANDRA-10091 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10091 > Project: Cassandra > Issue Type: New Feature >Reporter: Jan Karlsson >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.x > > > It would be useful to authenticate with JMX through Cassandra's internal > authentication. This would reduce the overhead of keeping passwords in files > on the machine and would consolidate passwords to one location. It would also > allow the possibility to handle JMX permissions in Cassandra. > It could be done by creating our own JMX server and setting custom classes > for the authenticator and authorizer. We could then add some parameters where > the user could specify what authenticator and authorizer to use in case they > want to make their own. > This could also be done by creating a premain method which creates a jmx > server. This would give us the feature without changing the Cassandra code > itself. However I believe this would be a good feature to have in Cassandra. > I am currently working on a solution which creates a JMX server and uses a > custom authenticator and authorizer. It is currently built as a premain, > however it would be great if we could put this in Cassandra instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9935) Repair fails with RuntimeException
[ https://issues.apache.org/jira/browse/CASSANDRA-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-9935: - Status: Ready to Commit (was: Patch Available) > Repair fails with RuntimeException > -- > > Key: CASSANDRA-9935 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9935 > Project: Cassandra > Issue Type: Bug > Environment: C* 2.1.8, Debian Wheezy >Reporter: mlowicki >Assignee: Paulo Motta > Fix For: 2.1.x > > Attachments: 9935.patch, db1.sync.lati.osa.cassandra.log, > db5.sync.lati.osa.cassandra.log, system.log.10.210.3.117, > system.log.10.210.3.221, system.log.10.210.3.230 > > > We had problems with slow repair in 2.1.7 (CASSANDRA-9702) but after upgrade > to 2.1.8 it started to work faster but now it fails with: > {code} > ... > [2015-07-29 20:44:03,956] Repair session 23a811b0-3632-11e5-a93e-4963524a8bde > for range (-5474076923322749342,-5468600594078911162] finished > [2015-07-29 20:44:03,957] Repair session 336f8740-3632-11e5-a93e-4963524a8bde > for range (-8631877858109464676,-8624040066373718932] finished > [2015-07-29 20:44:03,957] Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde > for range (-5372806541854279315,-5369354119480076785] finished > [2015-07-29 20:44:03,957] Repair session 59f129f0-3632-11e5-a93e-4963524a8bde > for range (8166489034383821955,8168408930184216281] finished > [2015-07-29 20:44:03,957] Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde > for range (6084602890817326921,6088328703025510057] finished > [2015-07-29 20:44:03,957] Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde > for range (-781874602493000830,-781745173070807746] finished > [2015-07-29 20:44:03,957] Repair command #4 finished > error: nodetool failed, check server logs > -- StackTrace -- > java.lang.RuntimeException: nodetool failed, check server logs > at > org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290) > at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202) 
> {code} > After running: > {code} > nodetool repair --partitioner-range --parallel --in-local-dc sync > {code} > Last records in logs regarding repair are: > {code} > INFO [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - > Repair session 09ff9e40-3632-11e5-a93e-4963524a8bde for range > (-7695808664784761779,-7693529816291585568] finished > INFO [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - > Repair session 17d8d860-3632-11e5-a93e-4963524a8bde for range > (806371695398849,8065203836608925992] finished > INFO [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - > Repair session 23a811b0-3632-11e5-a93e-4963524a8bde for range > (-5474076923322749342,-5468600594078911162] finished > INFO [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - > Repair session 336f8740-3632-11e5-a93e-4963524a8bde for range > (-8631877858109464676,-8624040066373718932] finished > INFO [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - > Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde for range > (-5372806541854279315,-5369354119480076785] finished > INFO [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - > Repair session 59f129f0-3632-11e5-a93e-4963524a8bde for range > (8166489034383821955,8168408930184216281] finished > INFO [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - > Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde for range > (6084602890817326921,6088328703025510057] finished > INFO [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - > Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde for range > (-781874602493000830,-781745173070807746] finished > {code} > but a bit above I see (at least two times in attached log): > {code} > ERROR [Thread-173887] 2015-07-29 20:44:03,853 StorageService.java:2959 - > Repair session 1b07ea50-3608-11e5-a93e-4963524a8bde for range > (5765414319217852786,5781018794516851576] failed with error > 
org.apache.cassandra.exceptions.RepairException: [repair > #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, > (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162 > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > org.apache.cassandra.exceptions.RepairException: [repair > #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, > (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162 > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > [na:1.7.0_80] > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > [na:1.7.0_80] > at > org.apache.cassandra.service.StorageService$4.runMayThrow(StorageService.java:2950) >
[jira] [Commented] (CASSANDRA-9935) Repair fails with RuntimeException
[ https://issues.apache.org/jira/browse/CASSANDRA-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256832#comment-15256832 ] Marcus Eriksson commented on CASSANDRA-9935: yeah, I'll commit tomorrow, got distracted by CASSANDRA-11625 > Repair fails with RuntimeException > -- > > Key: CASSANDRA-9935 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9935 > Project: Cassandra > Issue Type: Bug > Environment: C* 2.1.8, Debian Wheezy >Reporter: mlowicki >Assignee: Paulo Motta > Fix For: 2.1.x > > Attachments: 9935.patch, db1.sync.lati.osa.cassandra.log, > db5.sync.lati.osa.cassandra.log, system.log.10.210.3.117, > system.log.10.210.3.221, system.log.10.210.3.230 > > > We had problems with slow repair in 2.1.7 (CASSANDRA-9702) but after upgrade > to 2.1.8 it started to work faster but now it fails with: > {code} > ... > [2015-07-29 20:44:03,956] Repair session 23a811b0-3632-11e5-a93e-4963524a8bde > for range (-5474076923322749342,-5468600594078911162] finished > [2015-07-29 20:44:03,957] Repair session 336f8740-3632-11e5-a93e-4963524a8bde > for range (-8631877858109464676,-8624040066373718932] finished > [2015-07-29 20:44:03,957] Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde > for range (-5372806541854279315,-5369354119480076785] finished > [2015-07-29 20:44:03,957] Repair session 59f129f0-3632-11e5-a93e-4963524a8bde > for range (8166489034383821955,8168408930184216281] finished > [2015-07-29 20:44:03,957] Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde > for range (6084602890817326921,6088328703025510057] finished > [2015-07-29 20:44:03,957] Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde > for range (-781874602493000830,-781745173070807746] finished > [2015-07-29 20:44:03,957] Repair command #4 finished > error: nodetool failed, check server logs > -- StackTrace -- > java.lang.RuntimeException: nodetool failed, check server logs > at > org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290) > at 
org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202) > {code} > After running: > {code} > nodetool repair --partitioner-range --parallel --in-local-dc sync > {code} > Last records in logs regarding repair are: > {code} > INFO [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - > Repair session 09ff9e40-3632-11e5-a93e-4963524a8bde for range > (-7695808664784761779,-7693529816291585568] finished > INFO [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - > Repair session 17d8d860-3632-11e5-a93e-4963524a8bde for range > (806371695398849,8065203836608925992] finished > INFO [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - > Repair session 23a811b0-3632-11e5-a93e-4963524a8bde for range > (-5474076923322749342,-5468600594078911162] finished > INFO [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - > Repair session 336f8740-3632-11e5-a93e-4963524a8bde for range > (-8631877858109464676,-8624040066373718932] finished > INFO [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - > Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde for range > (-5372806541854279315,-5369354119480076785] finished > INFO [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - > Repair session 59f129f0-3632-11e5-a93e-4963524a8bde for range > (8166489034383821955,8168408930184216281] finished > INFO [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - > Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde for range > (6084602890817326921,6088328703025510057] finished > INFO [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - > Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde for range > (-781874602493000830,-781745173070807746] finished > {code} > but a bit above I see (at least two times in attached log): > {code} > ERROR [Thread-173887] 2015-07-29 20:44:03,853 StorageService.java:2959 - > Repair session 1b07ea50-3608-11e5-a93e-4963524a8bde for range > 
(5765414319217852786,5781018794516851576] failed with error > org.apache.cassandra.exceptions.RepairException: [repair > #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, > (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162 > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > org.apache.cassandra.exceptions.RepairException: [repair > #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, > (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162 > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > [na:1.7.0_80] > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > [na:1.7.0_80] > at >
[jira] [Commented] (CASSANDRA-11638) Add cassandra-stress test source
[ https://issues.apache.org/jira/browse/CASSANDRA-11638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256818#comment-15256818 ] Christopher Batey commented on CASSANDRA-11638: --- Review now. Will add more tests as I work on features. > Add cassandra-stress test source > > > Key: CASSANDRA-11638 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11638 > Project: Cassandra > Issue Type: Test > Components: Tools >Reporter: Christopher Batey >Assignee: Christopher Batey >Priority: Minor > Labels: stress > Fix For: 3.x > > Attachments: > 0001-Add-a-test-source-directory-for-Cassandra-stress.patch > > > This adds a test root for cassandra-stress and a couple of noddy tests for a > jira I did last week to prove it works / fails the build if they fail. > I put the source in {{tools/stress/test/unit}} and the classes in > {{build/test/stress-classes}} (rather than putting them in with the main test > classes). > Patch attached or at: https://github.com/chbatey/cassandra-1/tree/stress-tests -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10091) Integrated JMX authn & authz
[ https://issues.apache.org/jira/browse/CASSANDRA-10091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256805#comment-15256805 ] Nate McCall commented on CASSANDRA-10091: - [~tjake] Quick ping on patch accept status? > Integrated JMX authn & authz > > > Key: CASSANDRA-10091 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10091 > Project: Cassandra > Issue Type: New Feature >Reporter: Jan Karlsson >Assignee: Sam Tunnicliffe >Priority: Minor > Fix For: 3.x > > > It would be useful to authenticate with JMX through Cassandra's internal > authentication. This would reduce the overhead of keeping passwords in files > on the machine and would consolidate passwords to one location. It would also > allow the possibility to handle JMX permissions in Cassandra. > It could be done by creating our own JMX server and setting custom classes > for the authenticator and authorizer. We could then add some parameters where > the user could specify what authenticator and authorizer to use in case they > want to make their own. > This could also be done by creating a premain method which creates a jmx > server. This would give us the feature without changing the Cassandra code > itself. However I believe this would be a good feature to have in Cassandra. > I am currently working on a solution which creates a JMX server and uses a > custom authenticator and authorizer. It is currently built as a premain, > however it would be great if we could put this in Cassandra instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11623) Compactions w/ Short Rows Spending Time in getOnDiskFilePointer
[ https://issues.apache.org/jira/browse/CASSANDRA-11623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256781#comment-15256781 ] Tom Petracca commented on CASSANDRA-11623: -- Looks good to me. Any particular reason you don't want to push to 2.2? > Compactions w/ Short Rows Spending Time in getOnDiskFilePointer > --- > > Key: CASSANDRA-11623 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11623 > Project: Cassandra > Issue Type: Improvement >Reporter: Tom Petracca >Assignee: Tom Petracca >Priority: Minor > Fix For: 3.x > > Attachments: compactiontask_profile.png > > > Been doing some performance tuning and profiling of my cassandra cluster and > noticed that compaction speeds for my tables that I know to have very short > rows were particularly slow. Profiling shows a ton of time being > spent in BigTableWriter.getOnDiskFilePointer(), and attaching strace to a > CompactionTask shows that a majority of time is being spent in lseek (called by > getOnDiskFilePointer), and not read or write. > Going deeper it looks like we call getOnDiskFilePointer each row (sometimes > multiple times per row) in order to see if we've reached our expected sstable > size and should start a new writer. This is pretty unnecessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
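The fix direction implied by the ticket description can be sketched abstractly: if the expensive file-position lookup (one lseek per call) is only refreshed every N rows, the per-row syscall cost disappears at the price of a slightly stale size estimate — which is fine for deciding when to roll to a new sstable. The class and method names below are hypothetical, not the actual BigTableWriter code:

```java
import java.util.function.LongSupplier;

// Hypothetical sketch: cache the result of an expensive position lookup
// (standing in for getOnDiskFilePointer's lseek) and only refresh it every
// `checkInterval` rows instead of on every appended row.
public class ThrottledFilePointer {
    private final LongSupplier onDiskFilePointer; // expensive: one lseek per call
    private final int checkInterval;
    private long cached = 0;
    private int rowsSinceRefresh = 0;

    public ThrottledFilePointer(LongSupplier onDiskFilePointer, int checkInterval) {
        this.onDiskFilePointer = onDiskFilePointer;
        this.checkInterval = checkInterval;
    }

    /** Called once per appended row; returns a possibly stale position,
     *  refreshed from the real file only every checkInterval rows. */
    public long positionAfterRow() {
        if (++rowsSinceRefresh >= checkInterval) {
            cached = onDiskFilePointer.getAsLong();
            rowsSinceRefresh = 0;
        }
        return cached;
    }
}
```

With an interval of 100, writing 1000 short rows costs 10 lseek calls instead of 1000, while the "should we start a new writer?" check is off by at most 100 rows' worth of bytes.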
[jira] [Updated] (CASSANDRA-11646) SSTableWriter output discrepancy
[ https://issues.apache.org/jira/browse/CASSANDRA-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-11646: --- Assignee: Alex Petrov > SSTableWriter output discrepancy > > > Key: CASSANDRA-11646 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11646 > Project: Cassandra > Issue Type: Bug >Reporter: T Jake Luciani >Assignee: Alex Petrov > Fix For: 3.6 > > > Since CASSANDRA-10624 there is a non-trivial difference in the size of the > output in CQLSSTableWriter. > I've written the following code: > {code} > String KS = "cql_keyspace"; > String TABLE = "table1"; > File tempdir = Files.createTempDir(); > File dataDir = new File(tempdir.getAbsolutePath() + File.separator + > KS + File.separator + TABLE); > assert dataDir.mkdirs(); > String schema = "CREATE TABLE cql_keyspace.table1 (" > + " k int PRIMARY KEY," > + " v1 text," > + " v2 int" > + ");";// with compression = {};"; > String insert = "INSERT INTO cql_keyspace.table1 (k, v1, v2) VALUES > (?, ?, ?)"; > CQLSSTableWriter writer = CQLSSTableWriter.builder() > .sorted() > .inDirectory(dataDir) > .forTable(schema) > .using(insert).build(); > for (int i = 0; i < 1000; i++) > writer.addRow(i, "test1", 24); > writer.close(); > {code} > Pre CASSANDRA-10624 the data file is ~63MB. Post it's ~69MB -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11646) SSTableWriter output discrepancy
T Jake Luciani created CASSANDRA-11646: -- Summary: SSTableWriter output discrepancy Key: CASSANDRA-11646 URL: https://issues.apache.org/jira/browse/CASSANDRA-11646 Project: Cassandra Issue Type: Bug Reporter: T Jake Luciani Fix For: 3.6 Since CASSANDRA-10624 there is a non-trivial difference in the size of the output in CQLSSTableWriter. I've written the following code: {code} String KS = "cql_keyspace"; String TABLE = "table1"; File tempdir = Files.createTempDir(); File dataDir = new File(tempdir.getAbsolutePath() + File.separator + KS + File.separator + TABLE); assert dataDir.mkdirs(); String schema = "CREATE TABLE cql_keyspace.table1 (" + " k int PRIMARY KEY," + " v1 text," + " v2 int" + ");";// with compression = {};"; String insert = "INSERT INTO cql_keyspace.table1 (k, v1, v2) VALUES (?, ?, ?)"; CQLSSTableWriter writer = CQLSSTableWriter.builder() .sorted() .inDirectory(dataDir) .forTable(schema) .using(insert).build(); for (int i = 0; i < 1000; i++) writer.addRow(i, "test1", 24); writer.close(); {code} Pre CASSANDRA-10624 the data file is ~63MB. Post it's ~69MB -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11645) (single) dtest failure in snapshot_test.TestArchiveCommitlog.test_archive_commitlog_with_active_commitlog
Russ Hatch created CASSANDRA-11645: -- Summary: (single) dtest failure in snapshot_test.TestArchiveCommitlog.test_archive_commitlog_with_active_commitlog Key: CASSANDRA-11645 URL: https://issues.apache.org/jira/browse/CASSANDRA-11645 Project: Cassandra Issue Type: Test Reporter: Russ Hatch Assignee: DS Test Eng This was a singular but pretty recent failure, so thought it might be worth digging into to see if it repros. http://cassci.datastax.com/job/cassandra-2.1_dtest_jdk8/211/testReport/snapshot_test/TestArchiveCommitlog/test_archive_commitlog_with_active_commitlog Failed on CassCI build cassandra-2.1_dtest_jdk8 #211 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11638) Add cassandra-stress test source
[ https://issues.apache.org/jira/browse/CASSANDRA-11638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256736#comment-15256736 ] Joel Knighton commented on CASSANDRA-11638: --- Thanks [~chbatey]. Do you want me to review this now (and set it to 'Patch Available') or are you interested in making more changes? > Add cassandra-stress test source > > > Key: CASSANDRA-11638 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11638 > Project: Cassandra > Issue Type: Test > Components: Tools >Reporter: Christopher Batey >Assignee: Christopher Batey >Priority: Minor > Labels: stress > Fix For: 3.x > > Attachments: > 0001-Add-a-test-source-directory-for-Cassandra-stress.patch > > > This adds a test root for cassandra-stress and a couple of noddy tests for a > jira I did last week to prove it works / fails the build if they fail. > I put the source in {{tools/stress/test/unit}} and the classes in > {{build/test/stress-classes}} (rather than putting them in with the main test > classes). > Patch attached or at: https://github.com/chbatey/cassandra-1/tree/stress-tests -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11637) Immutable-friendly read consistency level
[ https://issues.apache.org/jira/browse/CASSANDRA-11637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256696#comment-15256696 ] Robert Stupp commented on CASSANDRA-11637: -- Yea - I get your point. If you can do it in the driver, that's fine. But I still think that the use-case is a rare one - it assumes that data _is_ present and the request accidentally hit the EC time window. It sounds easy, but taking all combinations into account, I think it can become very complicated. What happens if the data cannot be found in the local DC? Will it try all nodes in the next DC, the DC after that and so on and so on? Shall it limit and fail after X nodes and/or Y DCs? Should it wait for some time and/or retry the local DC before asking a remote DC? Shall it stop if it finds an expired TTL? What happens if you expect 5 CQL rows in a partition but only get 4? > Immutable-friendly read consistency level > -- > > Key: CASSANDRA-11637 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11637 > Project: Cassandra > Issue Type: New Feature >Reporter: lvh >Priority: Minor > > Many Cassandra applications use immutable, append-only models. For those > models, you can accept read consistency {{ONE}}, since the data either exists > (and then it's the data you want) or it doesn't. However, it's possible that > the data hasn't made it to that node yet, so "missing" data might mean > "actually missing" or "not here". > Riak has a convenient read consistency option for this, called notfound_ok > (default true). When false, the first succeeding read will succeed the > operation (a la consistency level {{ONE}}), but a missing read from any node > will keep trying up to the normal consistency level (e.g. {{QUORUM}}). > The workaround for this is for applications to implement an > "UpgradingConsistencyPolicy" (dual to DowngradingConsistencyPolicy) that > tries e.g. {{QUORUM}} after {{ONE}} fails, and then writes with e.g. > {{QUORUM}}. 
> This is related to CASSANDRA-9779; but it seems that ticket only explores the > compaction/collation/materialized view angle, not the fast & safe read > consistency angle. > Thanks to [~jjirsa] for helping me dig through this, find the related > ticket, and confirm Cassandra currently does not support this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
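The application-side workaround named in the ticket — an "UpgradingConsistencyPolicy" that retries a missing read at a stronger level — reduces to a small amount of logic. This is a hypothetical sketch under stand-in types (the ConsistencyLevel enum and the read function are invented for illustration, not the real driver API):

```java
import java.util.Optional;
import java.util.function.Function;

// Hypothetical sketch of an "upgrading" read for immutable, append-only data:
// try ONE first; if the row is present it is the row (immutability makes it
// safe to return), but "not found" may just mean "not replicated to that node
// yet", so retry once at QUORUM before reporting a miss.
public class UpgradingRead {
    enum ConsistencyLevel { ONE, QUORUM }

    static <T> Optional<T> read(Function<ConsistencyLevel, Optional<T>> readAt) {
        Optional<T> fast = readAt.apply(ConsistencyLevel.ONE);
        if (fast.isPresent())
            return fast;                                  // found at ONE
        return readAt.apply(ConsistencyLevel.QUORUM);     // upgrade on a miss
    }
}
```

As the comments on this ticket note, this differs from a native notfound_ok-style option: the QUORUM retry waits on a full quorum from scratch rather than letting the coordinator keep collecting responses from the first attempt.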
[jira] [Commented] (CASSANDRA-11642) sstabledump is not installed with cassandra-tools
[ https://issues.apache.org/jira/browse/CASSANDRA-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256656#comment-15256656 ] Michael Shuler commented on CASSANDRA-11642: Clarification: - sstabledump is cassandra-3.0 branch and up - sstableverify is cassandra-2.2 branch and up > sstabledump is not installed with cassandra-tools > - > > Key: CASSANDRA-11642 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11642 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: OS: Debian > Cassandra 3.5 >Reporter: Attila Szucs >Assignee: T Jake Luciani > Fix For: 3.6, 3.0.6 > > Attachments: CASSANDRA-11642.txt > > > Command-line tool sstabledump is not installed on Debian. > I used the following source: > {code} > deb http://www.apache.org/dist/cassandra/debian 35x main > {code} > with the following installation commands: > {code} > sudo apt-get install cassandra > sudo apt-get install cassandra-tools > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9935) Repair fails with RuntimeException
[ https://issues.apache.org/jira/browse/CASSANDRA-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256641#comment-15256641 ] Aleksey Yeschenko commented on CASSANDRA-9935: -- Are we good to go on this yet? > Repair fails with RuntimeException > -- > > Key: CASSANDRA-9935 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9935 > Project: Cassandra > Issue Type: Bug > Environment: C* 2.1.8, Debian Wheezy >Reporter: mlowicki >Assignee: Paulo Motta > Fix For: 2.1.x > > Attachments: 9935.patch, db1.sync.lati.osa.cassandra.log, > db5.sync.lati.osa.cassandra.log, system.log.10.210.3.117, > system.log.10.210.3.221, system.log.10.210.3.230 > > > We had problems with slow repair in 2.1.7 (CASSANDRA-9702) but after upgrade > to 2.1.8 it started to work faster but now it fails with: > {code} > ... > [2015-07-29 20:44:03,956] Repair session 23a811b0-3632-11e5-a93e-4963524a8bde > for range (-5474076923322749342,-5468600594078911162] finished > [2015-07-29 20:44:03,957] Repair session 336f8740-3632-11e5-a93e-4963524a8bde > for range (-8631877858109464676,-8624040066373718932] finished > [2015-07-29 20:44:03,957] Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde > for range (-5372806541854279315,-5369354119480076785] finished > [2015-07-29 20:44:03,957] Repair session 59f129f0-3632-11e5-a93e-4963524a8bde > for range (8166489034383821955,8168408930184216281] finished > [2015-07-29 20:44:03,957] Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde > for range (6084602890817326921,6088328703025510057] finished > [2015-07-29 20:44:03,957] Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde > for range (-781874602493000830,-781745173070807746] finished > [2015-07-29 20:44:03,957] Repair command #4 finished > error: nodetool failed, check server logs > -- StackTrace -- > java.lang.RuntimeException: nodetool failed, check server logs > at > org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:290) > at 
org.apache.cassandra.tools.NodeTool.main(NodeTool.java:202) > {code} > After running: > {code} > nodetool repair --partitioner-range --parallel --in-local-dc sync > {code} > Last records in logs regarding repair are: > {code} > INFO [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - > Repair session 09ff9e40-3632-11e5-a93e-4963524a8bde for range > (-7695808664784761779,-7693529816291585568] finished > INFO [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - > Repair session 17d8d860-3632-11e5-a93e-4963524a8bde for range > (806371695398849,8065203836608925992] finished > INFO [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - > Repair session 23a811b0-3632-11e5-a93e-4963524a8bde for range > (-5474076923322749342,-5468600594078911162] finished > INFO [Thread-173887] 2015-07-29 20:44:03,956 StorageService.java:2952 - > Repair session 336f8740-3632-11e5-a93e-4963524a8bde for range > (-8631877858109464676,-8624040066373718932] finished > INFO [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - > Repair session 4ccd8430-3632-11e5-a93e-4963524a8bde for range > (-5372806541854279315,-5369354119480076785] finished > INFO [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - > Repair session 59f129f0-3632-11e5-a93e-4963524a8bde for range > (8166489034383821955,8168408930184216281] finished > INFO [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - > Repair session 6ae7a9a0-3632-11e5-a93e-4963524a8bde for range > (6084602890817326921,6088328703025510057] finished > INFO [Thread-173887] 2015-07-29 20:44:03,957 StorageService.java:2952 - > Repair session 8938e4a0-3632-11e5-a93e-4963524a8bde for range > (-781874602493000830,-781745173070807746] finished > {code} > but a bit above I see (at least two times in attached log): > {code} > ERROR [Thread-173887] 2015-07-29 20:44:03,853 StorageService.java:2959 - > Repair session 1b07ea50-3608-11e5-a93e-4963524a8bde for range > 
(5765414319217852786,5781018794516851576] failed with error > org.apache.cassandra.exceptions.RepairException: [repair > #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, > (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162 > java.util.concurrent.ExecutionException: java.lang.RuntimeException: > org.apache.cassandra.exceptions.RepairException: [repair > #1b07ea50-3608-11e5-a93e-4963524a8bde on sync/entity_by_id2, > (5765414319217852786,5781018794516851576]] Validation failed in /10.195.15.162 > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > [na:1.7.0_80] > at java.util.concurrent.FutureTask.get(FutureTask.java:188) > [na:1.7.0_80] > at >
[jira] [Commented] (CASSANDRA-11637) Immutable-friendly read consistency level
[ https://issues.apache.org/jira/browse/CASSANDRA-11637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256605#comment-15256605 ] lvh commented on CASSANDRA-11637: - If you include the drivers in C* and solve it there; convenience so that people don't have to write it themselves ;-) More seriously; Riak produces different behavior here; "dropping" to a higher read consistency level (e.g. QUORUM) is different because QUORUM still tries to connect to QUORUM nodes instead of having the "first-try-wins" behavior. In a healthy cluster, those are almost equivalent, since your read is going to succeed in approximately the highest latency for a node in the quorum set to respond, whereas it could have succeeded in approximately the lowest latency for a node in the quorum set to respond. This is particularly interesting because it fails better when your cluster is on fire. I'm familiar with but not an expert in Cassandra's internals (and, unfortunately that knowledge is a few years old now), but it does seem that that part would be at the coordinator level. It seems like a driver could get the best possible behavior as well by contacting nodes directly to emulate this behavior. > Immutable-friendly read consistency level > -- > > Key: CASSANDRA-11637 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11637 > Project: Cassandra > Issue Type: New Feature >Reporter: lvh >Priority: Minor > > Many Cassandra applications use immutable, append-only models. For those > models, you can accept read consistency {{ONE}}, since the data either exists > (and then it's the data you want) or it doesn't. However, it's possible that > the data hasn't made it to that node yet, so "missing" data might mean > "actually missing" or "not here". > Riak has a convenient read consistency option for this, called notfound_ok > (default true). 
When false, the first succeeding read will succeed the > operation (a la consistency level {{ONE}}), but a missing read from any node > will keep trying up to the normal consistency level (e.g. {{QUORUM}}). > The workaround for this is for applications to implement an > "UpgradingConsistencyPolicy" (dual to DowngradingConsistencyPolicy) that > tries e.g. {{QUORUM}} after {{ONE}} fails, and then writes with e.g. > {{QUORUM}}. > This is related to CASSANDRA-9779; but it seems that ticket only explores the > compaction/collation/materialized view angle, not the fast & safe read > consistency angle. > Thanks to [~jjirsa] for helping me dig through this, find the related > ticket, and confirm Cassandra currently does not support this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
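The "UpgradingConsistencyPolicy" workaround described above can be sketched roughly as follows — a minimal, driver-free simulation of "read at ONE, upgrade to QUORUM when the row is missing". The replica model and function name here are illustrative stand-ins, not the Java driver's actual retry-policy API:

```python
def read_with_upgrade(replicas, key, quorum):
    """Read `key` at ONE first; on a miss, retry across a quorum of replicas.

    replicas: list of dicts simulating per-node state (illustrative only).
    """
    # Fast path: consistency ONE -- the first response wins.
    value = replicas[0].get(key)
    if value is not None:
        return value
    # A miss at ONE may just mean "not replicated here yet": upgrade to QUORUM
    # before concluding the row is actually absent.
    responses = [r.get(key) for r in replicas[:quorum]]
    found = [v for v in responses if v is not None]
    return found[0] if found else None

# One stale replica, two up-to-date ones: ONE can miss, QUORUM still finds it.
replicas = [{}, {"k": "v"}, {"k": "v"}]
print(read_with_upgrade(replicas, "k", quorum=2))  # prints "v"
```

This mirrors the ticket's point: in a healthy cluster the upgrade path only costs latency on actual misses, while writes at QUORUM keep the eventual read guaranteed to land on at least one up-to-date replica.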
[jira] [Updated] (CASSANDRA-11539) dtest failure in topology_test.TestTopology.movement_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Russ Hatch updated CASSANDRA-11539: --- Reviewer: (was: Jim Witschey) > dtest failure in topology_test.TestTopology.movement_test > - > > Key: CASSANDRA-11539 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11539 > Project: Cassandra > Issue Type: Test > Components: Testing >Reporter: Michael Shuler >Assignee: Russ Hatch > Labels: dtest > Fix For: 3.x > > > example failure: > {noformat} > Error Message > values not within 16.00% of the max: (335.88, 404.31) () > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: /mnt/tmp/dtest-XGOyDd > dtest: DEBUG: Custom init_config not found. Setting defaults. > dtest: DEBUG: Done setting configuration options: > { 'num_tokens': None, > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > - >> end captured logging << - > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/topology_test.py", line 93, in > movement_test > assert_almost_equal(sizes[1], sizes[2]) > File "/home/automaton/cassandra-dtest/assertions.py", line 75, in > assert_almost_equal > assert vmin > vmax * (1.0 - error) or vmin == vmax, "values not within > %.2f%% of the max: %s (%s)" % (error * 100, args, error_message) > "values not within 16.00% of the max: (335.88, 404.31) > ()\n >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > /mnt/tmp/dtest-XGOyDd\ndtest: DEBUG: Custom init_config not found. 
Setting > defaults.\ndtest: DEBUG: Done setting configuration options:\n{ > 'num_tokens': None,\n'phi_convict_threshold': 5,\n > 'range_request_timeout_in_ms': 1,\n'read_request_timeout_in_ms': > 1,\n'request_timeout_in_ms': 1,\n > 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': > 1}\n- >> end captured logging << > -" > {noformat} > http://cassci.datastax.com/job/cassandra-3.5_novnode_dtest/22/testReport/topology_test/TestTopology/movement_test > > I dug through this test's history on the trunk, 3.5, 3.0, and 2.2 branches. > It appears this test is stable and passing on 3.0 & 2.2 (which could be just > luck). On trunk & 3.5, however, this test has flapped a small number of times. > The test's threshold is 16% and I found test failures in the 3.5 branch of > 16.2%, 16.9%, and 18.3%. In trunk I found 17.4% and 23.5% diff failures. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
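The failing {{assert_almost_equal}} check can be reproduced standalone: two values "pass" when the smaller one is within {{error}} (here 16%) of the larger. The reported failure pair (335.88, 404.31) is roughly a 16.9% gap, which is exactly the kind of just-over-threshold flap listed above:

```python
def within_threshold(a, b, error=0.16):
    """Same predicate as the dtest's assert_almost_equal: smaller value must
    exceed (1 - error) of the larger, or the values must be equal."""
    vmin, vmax = min(a, b), max(a, b)
    return vmin > vmax * (1.0 - error) or vmin == vmax

# The reported failure: 335.88 / 404.31 ~ 0.831, i.e. ~16.9% below the max.
print(within_threshold(335.88, 404.31))  # prints False
```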
[jira] [Updated] (CASSANDRA-11539) dtest failure in topology_test.TestTopology.movement_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Russ Hatch updated CASSANDRA-11539: --- Reviewer: Jim Witschey Status: Patch Available (was: Open) > dtest failure in topology_test.TestTopology.movement_test > - -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11539) dtest failure in topology_test.TestTopology.movement_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256592#comment-15256592 ] Russ Hatch commented on CASSANDRA-11539: Latest run had zero failures, so I think the patch is a good improvement. https://github.com/riptano/cassandra-dtest/pull/951 > dtest failure in topology_test.TestTopology.movement_test > - -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11611) dtest failure in topology_test.TestTopology.crash_during_decommission_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256583#comment-15256583 ] Paulo Motta commented on CASSANDRA-11611: - It seems some crazy/slow scheduling can cause node2 to be killed *before* {{decommission}} starts, so besides ignoring stream errors we should also make sure {{nodetool decommission}} has started before killing node2. > dtest failure in topology_test.TestTopology.crash_during_decommission_test > -- > > Key: CASSANDRA-11611 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11611 > Project: Cassandra > Issue Type: Test >Reporter: Jim Witschey >Assignee: DS Test Eng > Labels: dtest, windows > > Looks like some kind of streaming error. Example failure: > http://cassci.datastax.com/job/trunk_dtest_win32/382/testReport/topology_test/TestTopology/crash_during_decommission_test > Failed on CassCI build trunk_dtest_win32 #382 > {code} > Error Message > Unexpected error in log, see stdout > >> begin captured logging << > dtest: DEBUG: cluster ccm directory: d:\temp\dtest-ce_wos > dtest: DEBUG: Custom init_config not found. Setting defaults. 
> dtest: DEBUG: Done setting configuration options: > { 'initial_token': None, > 'num_tokens': '32', > 'phi_convict_threshold': 5, > 'range_request_timeout_in_ms': 1, > 'read_request_timeout_in_ms': 1, > 'request_timeout_in_ms': 1, > 'truncate_request_timeout_in_ms': 1, > 'write_request_timeout_in_ms': 1} > dtest: DEBUG: Status as reported by node 127.0.0.2 > dtest: DEBUG: Datacenter: datacenter1 > > Status=Up/Down > |/ State=Normal/Leaving/Joining/Moving > -- AddressLoad Tokens Owns (effective) Host ID > Rack > UN 127.0.0.1 98.73 KiB 32 78.4% > b8c55c71-bf3d-462b-8c17-3c88d7ac2284 rack1 > UN 127.0.0.2 162.38 KiB 32 65.9% > 71aacf1d-8e2f-44cf-b354-f10c71313ec6 rack1 > UN 127.0.0.3 98.71 KiB 32 55.7% > 3a4529a3-dc7f-445c-aec3-94417c920fdf rack1 > dtest: DEBUG: Restarting node2 > dtest: DEBUG: Status as reported by node 127.0.0.2 > dtest: DEBUG: Datacenter: datacenter1 > > Status=Up/Down > |/ State=Normal/Leaving/Joining/Moving > -- AddressLoad Tokens Owns (effective) Host ID > Rack > UL 127.0.0.1 98.73 KiB 32 78.4% > b8c55c71-bf3d-462b-8c17-3c88d7ac2284 rack1 > UN 127.0.0.2 222.26 KiB 32 65.9% > 71aacf1d-8e2f-44cf-b354-f10c71313ec6 rack1 > UN 127.0.0.3 98.71 KiB 32 55.7% > 3a4529a3-dc7f-445c-aec3-94417c920fdf rack1 > dtest: DEBUG: Restarting node2 > dtest: DEBUG: Status as reported by node 127.0.0.2 > dtest: DEBUG: Datacenter: datacenter1 > > Status=Up/Down > |/ State=Normal/Leaving/Joining/Moving > -- AddressLoad Tokens Owns (effective) Host ID > Rack > UL 127.0.0.1 174.2 KiB 32 78.4% > b8c55c71-bf3d-462b-8c17-3c88d7ac2284 rack1 > UN 127.0.0.2 336.69 KiB 32 65.9% > 71aacf1d-8e2f-44cf-b354-f10c71313ec6 rack1 > UN 127.0.0.3 116.7 KiB 32 55.7% > 3a4529a3-dc7f-445c-aec3-94417c920fdf rack1 > dtest: DEBUG: Restarting node2 > dtest: DEBUG: Status as reported by node 127.0.0.2 > dtest: DEBUG: Datacenter: datacenter1 > > Status=Up/Down > |/ State=Normal/Leaving/Joining/Moving > -- AddressLoad Tokens Owns (effective) Host ID > Rack > UL 127.0.0.1 174.2 KiB 32 78.4% > 
b8c55c71-bf3d-462b-8c17-3c88d7ac2284 rack1 > UN 127.0.0.2 360.82 KiB 32 65.9% > 71aacf1d-8e2f-44cf-b354-f10c71313ec6 rack1 > UN 127.0.0.3 116.7 KiB 32 55.7% > 3a4529a3-dc7f-445c-aec3-94417c920fdf rack1 > dtest: DEBUG: Restarting node2 > dtest: DEBUG: Status as reported by node 127.0.0.2 > dtest: DEBUG: Datacenter: datacenter1 > > Status=Up/Down > |/ State=Normal/Leaving/Joining/Moving > -- AddressLoad Tokens Owns (effective) Host ID > Rack > UL 127.0.0.1 174.2 KiB 32 78.4% > b8c55c71-bf3d-462b-8c17-3c88d7ac2284 rack1 > UN 127.0.0.2 240.54 KiB 32 65.9% > 71aacf1d-8e2f-44cf-b354-f10c71313ec6 rack1 > UN 127.0.0.3 116.7 KiB 32 55.7% > 3a4529a3-dc7f-445c-aec3-94417c920fdf rack1 > dtest: DEBUG: Restarting node2 > dtest: DEBUG: Decommission failed with exception: Nodetool
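The ordering fix Paulo describes — don't kill node2 until {{nodetool decommission}} has observably started — amounts to polling for evidence (e.g. a log marker) before proceeding. A minimal sketch, where the marker string and helper name are illustrative rather than the dtest's actual code:

```python
import time

def wait_for_marker(read_log, marker, timeout=30.0, interval=0.5):
    """Poll `read_log()` (a callable returning current log text) until
    `marker` appears or `timeout` seconds elapse. Returns True on success."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if marker in read_log():
            return True
        time.sleep(interval)
    return False

# Simulated log that only "receives" the marker on the second poll.
polls = iter(["", "LEAVING: starting decommission"])
assert wait_for_marker(lambda: next(polls), "decommission", timeout=5, interval=0)
```

Only once the wait succeeds would the test kill node2; combined with ignoring the resulting stream errors, this removes the race where node2 dies before decommission begins.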
[jira] [Comment Edited] (CASSANDRA-11641) java.lang.IllegalArgumentException: Not enough bytes in system.log
[ https://issues.apache.org/jira/browse/CASSANDRA-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256581#comment-15256581 ] Paulo Motta edited comment on CASSANDRA-11641 at 4/25/16 4:53 PM: -- It seems you have a corrupted sstable on {{system.compactions_in_progress}} table, most likely {{system-compactions_in_progress-ka-11383}} or {{system-compactions_in_progress-ka-11384}}. Can you try scrubbing ({{nodetool scrub}}) these sstables or the {{system.compactions_in_progress}} table and see if it helps? If online scrub does not work you may try stopping the node and running [offline scrub|https://engineering.gosquared.com/dealing-corrupt-sstable-cassandra]. was (Author: pauloricardomg): It seems you have a corrupted sstable on {{system.compactions_in_progress}} table, most likely {{system-compactions_in_progress-ka-11383}} or {{system-compactions_in_progress-ka-11384}}. Can you try scrubbing ({{nodetool scrub}}) these sstables or the {{system.compactions_in_progress}} table and see if it helps? If online scrub does not work you may try > java.lang.IllegalArgumentException: Not enough bytes in system.log > -- > > Key: CASSANDRA-11641 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11641 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging > Environment: centos 6.5 cassandra2.1.13 >Reporter: peng xiao > Attachments: system.log > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11641) java.lang.IllegalArgumentException: Not enough bytes in system.log
[ https://issues.apache.org/jira/browse/CASSANDRA-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256581#comment-15256581 ] Paulo Motta commented on CASSANDRA-11641: - It seems you have a corrupted sstable on {{system.compactions_in_progress}} table, most likely {{system-compactions_in_progress-ka-11383}} or {{system-compactions_in_progress-ka-11384}}. Can you try scrubbing ({{nodetool scrub}}) these sstables or the {{system.compactions_in_progress}} table and see if it helps? If online scrub does not work you may try > java.lang.IllegalArgumentException: Not enough bytes in system.log > -- > > Key: CASSANDRA-11641 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11641 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging > Environment: centos 6.5 cassandra2.1.13 >Reporter: peng xiao > Attachments: system.log > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
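The suggested online scrub boils down to one {{nodetool scrub <keyspace> <table>}} invocation per affected table. A small sketch that only assembles the command line — the helper function and the optional host flag are illustrative conveniences, run it against your own node before falling back to the linked offline-scrub procedure:

```python
def scrub_command(keyspace, table, host=None):
    """Build the argv for an online scrub of one table via nodetool."""
    cmd = ["nodetool"]
    if host:
        # -h targets a specific node; omit to scrub the local node.
        cmd += ["-h", host]
    cmd += ["scrub", keyspace, table]
    return cmd

print(scrub_command("system", "compactions_in_progress"))
# ['nodetool', 'scrub', 'system', 'compactions_in_progress']
```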
[jira] [Commented] (CASSANDRA-10134) Always require replace_address to replace existing address
[ https://issues.apache.org/jira/browse/CASSANDRA-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15256555#comment-15256555 ] Sam Tunnicliffe commented on CASSANDRA-10134: - Regarding {{DynamicEndpointSnitch}}, I came to the same conclusion as you, that the coupling was there to control the init order more than anything else and that in all likelihood was no longer necessary. I was keen to avoid scope creep by this point, so erred on the side of caution in preserving existing behaviour. I'm pretty sure it could just be removed though, so I'll open a separate ticket for that. > Always require replace_address to replace existing address > -- > > Key: CASSANDRA-10134 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10134 > Project: Cassandra > Issue Type: Improvement > Components: Distributed Metadata >Reporter: Tyler Hobbs >Assignee: Sam Tunnicliffe > Labels: docs-impacting > Fix For: 3.x > > > Normally, when a node is started from a clean state with the same address as > an existing down node, it will fail to start with an error like this: > {noformat} > ERROR [main] 2015-08-19 15:07:51,577 CassandraDaemon.java:554 - Exception > encountered during startup > java.lang.RuntimeException: A node with address /127.0.0.3 already exists, > cancelling join. Use cassandra.replace_address if you want to replace this > node. 
> at > org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:543) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:783) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:720) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:611) > ~[main/:na] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:378) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:537) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:626) > [main/:na] > {noformat} > However, if {{auto_bootstrap}} is set to false or the node is in its own seed > list, it will not throw this error and will start normally. The new node > then takes over the host ID of the old node (even if the tokens are > different), and the only message you will see is a warning in the other > nodes' logs: > {noformat} > logger.warn("Changing {}'s host ID from {} to {}", endpoint, storedId, > hostId); > {noformat} > This could cause an operator to accidentally wipe out the token information > for a down node without replacing it. To fix this, we should check for an > endpoint collision even if {{auto_bootstrap}} is false or the node is a seed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
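The fix the ticket asks for — run the endpoint-collision check even when {{auto_bootstrap}} is false or the node is a seed — can be sketched like this. The function name echoes {{checkForEndpointCollision}} from the stack trace above, but the signature and logic are an illustrative simplification, not Cassandra's actual Java code:

```python
def check_for_endpoint_collision(endpoint, known_down_endpoints,
                                 replace_address=None,
                                 auto_bootstrap=True, is_seed=False):
    """Refuse to join when a known node already owns our address,
    unconditionally -- the pre-fix behaviour skipped this check when
    auto_bootstrap was false or the node was in its own seed list."""
    if endpoint in known_down_endpoints and replace_address != endpoint:
        raise RuntimeError(
            "A node with address %s already exists, cancelling join. "
            "Use cassandra.replace_address if you want to replace this node."
            % endpoint)

known = {"127.0.0.3"}
try:
    # Previously this silently took over the old node's host ID.
    check_for_endpoint_collision("127.0.0.3", known,
                                 auto_bootstrap=False, is_seed=True)
except RuntimeError as e:
    print("refused:", e)

# Explicit replacement is still allowed.
check_for_endpoint_collision("127.0.0.3", known, replace_address="127.0.0.3")
```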