[jira] [Created] (CASSANDRA-5828) add counters coverage to upgrade tests
Cathy Daw created CASSANDRA-5828:
------------------------------------

             Summary: add counters coverage to upgrade tests
                 Key: CASSANDRA-5828
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5828
             Project: Cassandra
          Issue Type: Test
          Components: Tests
            Reporter: Cathy Daw
            Assignee: Daniel Meyer
            Priority: Critical
             Fix For: 2.0 rc1, 1.2.9

This was encountered as missing coverage when upgrading to 1.2.7.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5823) nodetool history logging
[ https://issues.apache.org/jira/browse/CASSANDRA-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723467#comment-13723467 ]

Jason Brown commented on CASSANDRA-5823:
----------------------------------------

Huh, just discovered the cli assumptions work [~dbrosius] did for CASSANDRA-4052. That seems reasonable to retain, so I think I'll go ahead and keep those files.

> nodetool history logging
> ------------------------
>
>                 Key: CASSANDRA-5823
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5823
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Tools
>            Reporter: Jason Brown
>            Assignee: Jason Brown
>            Priority: Minor
>             Fix For: 1.2.8, 2.0 rc1
>
>         Attachments: 5823-v1.patch
>
> Capture the commands and time executed from nodetool into a log file, similar to the cli.
[jira] [Comment Edited] (CASSANDRA-5823) nodetool history logging
[ https://issues.apache.org/jira/browse/CASSANDRA-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723467#comment-13723467 ]

Jason Brown edited comment on CASSANDRA-5823 at 7/30/13 6:23 AM:
-----------------------------------------------------------------

Huh, just discovered the cli assumptions work [~dbrosius] did for CASSANDRA-4052. That seems reasonable to retain, so I think I'll go ahead and keep those assumption files.

was (Author: jasobrown):
Huh, just discovered the cli assumptions work [~dbrosius] did for CASSANDRA-4052. That seems reasonable to retain, so I think I'll go ahead and keep those files.

> nodetool history logging
> ------------------------
>
>                 Key: CASSANDRA-5823
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5823
[jira] [Comment Edited] (CASSANDRA-5793) OPP seems completely unsupported
[ https://issues.apache.org/jira/browse/CASSANDRA-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723553#comment-13723553 ]

Vara Kumar edited comment on CASSANDRA-5793 at 7/30/13 8:21 AM:
----------------------------------------------------------------

Should we handle it this way (return a hex value if OPP fails to decode bytes as UTF-8, instead of throwing an error) or mark OPP as unsupported in the relevant documentation?

was (Author: varakumar):
Should we apply a patch to handle it this way (return a hex value if OPP fails to decode bytes as UTF-8, instead of throwing an error) or mark OPP as unsupported in the relevant documentation?

> OPP seems completely unsupported
> --------------------------------
>
>                 Key: CASSANDRA-5793
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5793
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.2.5
>         Environment: Cassandra on Ubuntu
>            Reporter: Vara Kumar
>             Fix For: 1.2.9
>
> We were using version 0.7.6 and upgraded to 1.2.5 today. We were using OPP (OrderPreservingPartitioner). OPP throws an error when any node joins the cluster, and the cluster cannot be brought up because of it. After digging a little deeper, we realized that the peers column family is defined with a key of type inet. Many other column families in the system keyspace look to have the same issue.
>
> Exception trace:
> java.lang.RuntimeException: The provided key was not UTF8 encoded.
> 	at org.apache.cassandra.dht.OrderPreservingPartitioner.getToken(OrderPreservingPartitioner.java:172)
> 	at org.apache.cassandra.dht.OrderPreservingPartitioner.decorateKey(OrderPreservingPartitioner.java:44)
> 	at org.apache.cassandra.db.Table.apply(Table.java:379)
> 	at org.apache.cassandra.db.Table.apply(Table.java:353)
> 	at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:258)
> 	at org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:117)
> 	at org.apache.cassandra.cql3.QueryProcessor.processInternal(QueryProcessor.java:172)
> 	at org.apache.cassandra.db.SystemTable.updatePeerInfo(SystemTable.java:258)
> 	at org.apache.cassandra.service.StorageService.onChange(StorageService.java:1231)
> 	at org.apache.cassandra.service.StorageService.onJoin(StorageService.java:1948)
> 	at org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:823)
> 	at org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:901)
> 	at org.apache.cassandra.gms.GossipDigestAck2VerbHandler.doVerb(GossipDigestAck2VerbHandler.java:50)
>
> Possibilities:
> - Changing the partitioner to BOP (or something else) fails while loading schema_keyspaces, so that does not look like an option.
> - getToken of OPP could return a hex value when it fails to decode bytes as UTF-8, instead of throwing an error. With this change the system tables appear to work fine with OPP.
> - Or completely remove OPP from the code base and configuration files, and mark clearly in the upgrade instructions that OPP is no longer supported.
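The hex-fallback option can be sketched in isolation. tokenString below is a hypothetical stand-in for what OPP's getToken might do with undecodable keys, not the actual Cassandra code:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

public class HexFallbackSketch
{
    // Hypothetical sketch: strict UTF-8 decode, falling back to a hex
    // string instead of throwing on invalid bytes (e.g. inet-typed keys).
    public static String tokenString(ByteBuffer key)
    {
        try
        {
            // newDecoder() reports malformed input rather than replacing it
            return StandardCharsets.UTF_8.newDecoder().decode(key.duplicate()).toString();
        }
        catch (CharacterCodingException e)
        {
            StringBuilder sb = new StringBuilder();
            ByteBuffer dup = key.duplicate();
            while (dup.hasRemaining())
                sb.append(String.format("%02x", dup.get()));
            return sb.toString();
        }
    }

    public static void main(String[] args)
    {
        System.out.println(tokenString(ByteBuffer.wrap("abc".getBytes(StandardCharsets.UTF_8))));
        // 0xFF is never valid in UTF-8, so the key is rendered as hex
        System.out.println(tokenString(ByteBuffer.wrap(new byte[]{ (byte) 0xff, (byte) 0xfe })));
    }
}
```

This preserves ordering for valid UTF-8 keys while giving invalid keys a stable, if arbitrary, token string.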
[jira] [Commented] (CASSANDRA-5793) OPP seems completely unsupported
[ https://issues.apache.org/jira/browse/CASSANDRA-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723553#comment-13723553 ]

Vara Kumar commented on CASSANDRA-5793:
---------------------------------------

Should we apply a patch to handle it this way (return a hex value if OPP fails to decode bytes as UTF-8, instead of throwing an error) or mark OPP as unsupported in the relevant documentation?

> OPP seems completely unsupported
> --------------------------------
>
>                 Key: CASSANDRA-5793
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5793
[jira] [Commented] (CASSANDRA-5823) nodetool history logging
[ https://issues.apache.org/jira/browse/CASSANDRA-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723576#comment-13723576 ]

Jason Brown commented on CASSANDRA-5823:
----------------------------------------

Ignore earlier concern (re: noise) over preserving existing history files. Turned out to be less hassle than I thought. Patch coming tomorrow.

> nodetool history logging
> ------------------------
>
>                 Key: CASSANDRA-5823
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5823
[jira] [Updated] (CASSANDRA-5823) nodetool history logging
[ https://issues.apache.org/jira/browse/CASSANDRA-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Brown updated CASSANDRA-5823:
-----------------------------------

    Attachment: 5823-v2.patch

v2 centralizes the placement of c* history files into ~/.cassandra. This means cqlsh, cli, and nodetool all write to the new directory. Two of the files were renamed slightly for clarity. For cli (history and assumptions) and cqlsh (cqlshrc and history), I updated the code to move any older files into the new ~/.cassandra dir, and cleaned up any leftovers (~/.cassandra-cli). I added a centralized method for getting the history dir location: FBUtilities.getHistoryDirectory(). I couldn't find a better class for that method, but I'm happy to move it elsewhere. I didn't address the ~/.cassandra.in.sh yet (if we want), and the try-with-resources recommendation (in NodeCmd) will be applied to trunk.

> nodetool history logging
> ------------------------
>
>                 Key: CASSANDRA-5823
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5823
>         Attachments: 5823-v1.patch, 5823-v2.patch
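The centralization described in the v2 patch can be sketched as follows; getHistoryDirectory and migrateLegacyFile are illustrative names, and the real FBUtilities method may differ:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

public class HistoryDirSketch
{
    // Resolve (and lazily create) the shared ~/.cassandra history directory.
    public static File getHistoryDirectory()
    {
        File dir = new File(System.getProperty("user.home"), ".cassandra");
        if (!dir.exists() && !dir.mkdirs())
            throw new RuntimeException("Cannot create history directory " + dir);
        return dir;
    }

    // Move a legacy history file (e.g. one left under ~/.cassandra-cli)
    // into the new directory, optionally renaming it for clarity.
    public static void migrateLegacyFile(File legacy, String newName) throws IOException
    {
        if (legacy.isFile())
            Files.move(legacy.toPath(),
                       new File(getHistoryDirectory(), newName).toPath(),
                       StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args)
    {
        System.out.println(getHistoryDirectory().getName());
    }
}
```

Centralizing the lookup in one method is what lets cqlsh, cli, and nodetool agree on a single location.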
[jira] [Comment Edited] (CASSANDRA-5823) nodetool history logging
[ https://issues.apache.org/jira/browse/CASSANDRA-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13723873#comment-13723873 ]

Jason Brown edited comment on CASSANDRA-5823 at 7/30/13 2:02 PM:
-----------------------------------------------------------------

v2 centralizes the placement of c* history files into ~/.cassandra. This means cqlsh, cli, and nodetool all write to the new directory. Two of the files were renamed slightly for clarity. For cli (history and assumptions) and cqlsh (cqlshrc and history), I updated the code to move any older files into the new ~/.cassandra dir, and cleaned up any leftovers (~/.cassandra-cli). I added a centralized method for getting the history dir location: FBUtilities.getHistoryDirectory(). I couldn't find a better class for that method, but I'm happy to move it elsewhere. I didn't address the ~/.cassandra.in.sh yet (if we want), and the try-with-resources recommendation (in NodeCmd) will be applied to trunk.

> nodetool history logging
> ------------------------
>
>                 Key: CASSANDRA-5823
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5823
>         Attachments: 5823-v1.patch, 5823-v2.patch
[3/3] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/318b00e2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/318b00e2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/318b00e2

Branch: refs/heads/trunk
Commit: 318b00e25100b9261fa09e80b7a377963ef2608c
Parents: 6b3b62f af1a9fe
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Tue Jul 30 09:24:15 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Tue Jul 30 09:24:15 2013 -0500

----------------------------------------------------------------------
 src/java/org/apache/cassandra/hadoop/ConfigHelper.java | 10 ++++++++++
 1 file changed, 10 insertions(+)
----------------------------------------------------------------------
[1/3] git commit: add ConfigHelper setters for consistency levels patch by Manoj Mainali; reviewed by jbellis for CASSANDRA-5827
Updated Branches:
  refs/heads/cassandra-1.2 e7ea389a3 -> af1a9fef2
  refs/heads/trunk 6b3b62f98 -> 318b00e25

add ConfigHelper setters for consistency levels
patch by Manoj Mainali; reviewed by jbellis for CASSANDRA-5827

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/af1a9fef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/af1a9fef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/af1a9fef

Branch: refs/heads/cassandra-1.2
Commit: af1a9fef29be8220973834f89b21bd539efdf055
Parents: e7ea389
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Tue Jul 30 09:23:56 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Tue Jul 30 09:23:56 2013 -0500

----------------------------------------------------------------------
 src/java/org/apache/cassandra/hadoop/ConfigHelper.java | 10 ++++++++++
 1 file changed, 10 insertions(+)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/af1a9fef/src/java/org/apache/cassandra/hadoop/ConfigHelper.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/hadoop/ConfigHelper.java b/src/java/org/apache/cassandra/hadoop/ConfigHelper.java
index 36228cf..a109b2f 100644
--- a/src/java/org/apache/cassandra/hadoop/ConfigHelper.java
+++ b/src/java/org/apache/cassandra/hadoop/ConfigHelper.java
@@ -385,11 +385,21 @@ public class ConfigHelper
         return conf.get(READ_CONSISTENCY_LEVEL, "ONE");
     }
 
+    public static void setReadConsistencyLevel(Configuration conf, String consistencyLevel)
+    {
+        conf.set(READ_CONSISTENCY_LEVEL, consistencyLevel);
+    }
+
     public static String getWriteConsistencyLevel(Configuration conf)
     {
         return conf.get(WRITE_CONSISTENCY_LEVEL, "ONE");
     }
 
+    public static void setWriteConsistencyLevel(Configuration conf, String consistencyLevel)
+    {
+        conf.set(WRITE_CONSISTENCY_LEVEL, consistencyLevel);
+    }
+
     public static int getInputRpcPort(Configuration conf)
     {
         return Integer.parseInt(conf.get(INPUT_THRIFT_PORT, "9160"));
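The shape of the patch, a setter mirroring each getter-with-default, can be illustrated with a Map-backed stand-in for Hadoop's Configuration (property-key names here are illustrative, not the ones ConfigHelper actually uses):

```java
import java.util.HashMap;
import java.util.Map;

public class MiniConfigHelper
{
    private static final String READ_CONSISTENCY_LEVEL = "cassandra.consistencylevel.read";
    private static final String WRITE_CONSISTENCY_LEVEL = "cassandra.consistencylevel.write";

    // Stand-in for Hadoop's Configuration key/value store.
    private final Map<String, String> conf = new HashMap<>();

    // Getters default to ONE when nothing has been set, as in ConfigHelper.
    public String getReadConsistencyLevel()  { return conf.getOrDefault(READ_CONSISTENCY_LEVEL, "ONE"); }
    public String getWriteConsistencyLevel() { return conf.getOrDefault(WRITE_CONSISTENCY_LEVEL, "ONE"); }

    // The setters the patch adds, mirroring the getters.
    public void setReadConsistencyLevel(String cl)  { conf.put(READ_CONSISTENCY_LEVEL, cl); }
    public void setWriteConsistencyLevel(String cl) { conf.put(WRITE_CONSISTENCY_LEVEL, cl); }

    public static void main(String[] args)
    {
        MiniConfigHelper helper = new MiniConfigHelper();
        System.out.println(helper.getReadConsistencyLevel()); // default when unset
        helper.setReadConsistencyLevel("QUORUM");
        System.out.println(helper.getReadConsistencyLevel()); // explicit override
    }
}
```

Without the setters, jobs had no programmatic way to override the ONE default short of setting the raw property key themselves.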
[2/3] git commit: add ConfigHelper setters for consistency levels patch by Manoj Mainali; reviewed by jbellis for CASSANDRA-5827
add ConfigHelper setters for consistency levels
patch by Manoj Mainali; reviewed by jbellis for CASSANDRA-5827

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/af1a9fef
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/af1a9fef
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/af1a9fef

Branch: refs/heads/trunk
Commit: af1a9fef29be8220973834f89b21bd539efdf055
Parents: e7ea389
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Tue Jul 30 09:23:56 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Tue Jul 30 09:23:56 2013 -0500

----------------------------------------------------------------------
 src/java/org/apache/cassandra/hadoop/ConfigHelper.java | 10 ++++++++++
 1 file changed, 10 insertions(+)
----------------------------------------------------------------------
[jira] [Updated] (CASSANDRA-5827) Expose setters for consistency level in Hadoop config helper
[ https://issues.apache.org/jira/browse/CASSANDRA-5827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-5827:
--------------------------------------

          Component/s: Hadoop
             Priority: Trivial  (was: Minor)
    Affects Version/s:     (was: 1.2.7)

> Expose setters for consistency level in Hadoop config helper
> ------------------------------------------------------------
>
>                 Key: CASSANDRA-5827
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5827
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Hadoop
>            Reporter: Manoj Mainali
>            Assignee: Manoj Mainali
>            Priority: Trivial
>         Attachments: trunk-CASSANDRA-5827.patch
>
> ConfigHelper exposes the getters for read and write consistency, which default to a consistency level of ONE if none is defined. However, the setters are missing.
[jira] [Updated] (CASSANDRA-5820) sstableloader broken in 1.2.7/1.2.8
[ https://issues.apache.org/jira/browse/CASSANDRA-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuki Morishita updated CASSANDRA-5820:
--------------------------------------

    Attachment: 0001-Add-SSTableLoader-unit-test.patch

Looks like this is a regression from CASSANDRA-. (I think the workaround is to use sstableloader from 1.2.6.)

Attached a unit test for SSTableLoader. If run with 'ant test-compression -Dtest.name=SSTableLoaderTest', it fails as described above.

> sstableloader broken in 1.2.7/1.2.8
> -----------------------------------
>
>                 Key: CASSANDRA-5820
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5820
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.2.7, 1.2.8
>            Reporter: Nick Bailey
>         Attachments: 0001-Add-SSTableLoader-unit-test.patch
>
> I don't see this happen on 1.2.6. To reproduce (on a fresh single node cluster):
> {noformat}
> [Nicks-MacBook-Pro:11:33:06 (cassandra-1.2.7)*] cassandra$ bin/cqlsh
> Connected to Test Cluster at localhost:9160.
> [cqlsh 3.1.4 | Cassandra 1.2.7-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 19.36.0]
> cqlsh> CREATE KEYSPACE test_backup_restore WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
> cqlsh> use test_backup_restore;
> cqlsh:test_backup_restore> CREATE TABLE cf0 (
>                        ...     a text PRIMARY KEY,
>                        ...     b text,
>                        ...     c text
>                        ... );
> cqlsh:test_backup_restore> INSERT INTO cf0 (a, b, c) VALUES ( 'a', 'b', 'c');
> cqlsh:test_backup_restore> select * from cf0;
>
>  a | b | c
> ---+---+---
>  a | b | c
>
> cqlsh:test_backup_restore> ^D
> [Nicks-MacBook-Pro:11:34:22 (cassandra-1.2.7)*] cassandra$ bin/nodetool snapshot
> Requested creating snapshot for: all keyspaces
> Snapshot directory: 1375115668449
> [Nicks-MacBook-Pro:11:34:40 (cassandra-1.2.7)*] cassandra$ mkdir -p test_backup_restore/snapshots
> [Nicks-MacBook-Pro:11:34:48 (cassandra-1.2.7)*] cassandra$ cp /var/lib/cassandra/data/test_backup_restore/cf0/snapshots/1375115668449/* test_backup_restore/snapshots/
> [Nicks-MacBook-Pro:11:35:14 (cassandra-1.2.7)*] cassandra$ bin/sstableloader --debug -v -d 127.0.0.1 test_backup_restore/snapshots
> Streaming revelant part of test_backup_restore/snapshots/test_backup_restore-cf0-ic-1-Data.db to [/127.0.0.1]
> org.apache.cassandra.io.util.CompressedSegmentedFile cannot be cast to org.apache.cassandra.io.util.CompressedPoolingSegmentedFile
> java.lang.ClassCastException: org.apache.cassandra.io.util.CompressedSegmentedFile cannot be cast to org.apache.cassandra.io.util.CompressedPoolingSegmentedFile
> 	at org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:574)
> 	at org.apache.cassandra.streaming.StreamOut.createPendingFiles(StreamOut.java:179)
> 	at org.apache.cassandra.streaming.StreamOut.transferSSTables(StreamOut.java:154)
> 	at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:145)
> 	at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:67)
> {noformat}
[jira] [Updated] (CASSANDRA-5820) sstableloader broken in 1.2.7/1.2.8
[ https://issues.apache.org/jira/browse/CASSANDRA-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuki Morishita updated CASSANDRA-5820:
--------------------------------------

    Assignee: Tyler Hobbs

> sstableloader broken in 1.2.7/1.2.8
> -----------------------------------
>
>                 Key: CASSANDRA-5820
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5820
[jira] [Created] (CASSANDRA-5829) test issue
Jonathan Ellis created CASSANDRA-5829:
-------------------------------------

             Summary: test issue
                 Key: CASSANDRA-5829
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5829
             Project: Cassandra
          Issue Type: Bug
            Reporter: Jonathan Ellis
[jira] [Updated] (CASSANDRA-5829) test issue
[ https://issues.apache.org/jira/browse/CASSANDRA-5829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jake Farrell updated CASSANDRA-5829:
------------------------------------

    Reproduced In: 1.2.7
    Since Version: 1.2.7

> test issue
> ----------
>
>                 Key: CASSANDRA-5829
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5829
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Jonathan Ellis
[jira] [Commented] (CASSANDRA-5727) Evaluate default LCS sstable size
[ https://issues.apache.org/jira/browse/CASSANDRA-5727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724006#comment-13724006 ]

T Jake Luciani commented on CASSANDRA-5727:
-------------------------------------------

[~danielmeyer] Did you track compaction time across sizes?

> Evaluate default LCS sstable size
> ---------------------------------
>
>                 Key: CASSANDRA-5727
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5727
>             Project: Cassandra
>          Issue Type: Task
>          Components: Core
>            Reporter: Jonathan Ellis
>            Assignee: Daniel Meyer
>             Fix For: 1.2.9
>
>         Attachments: BytesRead_vs_LCS.png, ReadLatency_vs_LCS.png, Throughtput_vs_LCS.png, UpdateLatency_vs_LCS.png
>
> What we're not sure about is the effect on compaction efficiency -- larger files mean that each level contains more data, so reads will have to touch fewer sstables, but we're also compacting less unchanged data when we merge forward. So the question is: how big can we make the sstables to get the benefits of the first effect before the second effect starts to dominate?
[jira] [Commented] (CASSANDRA-5727) Evaluate default LCS sstable size
[ https://issues.apache.org/jira/browse/CASSANDRA-5727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724016#comment-13724016 ]

Jonathan Ellis commented on CASSANDRA-5727:
-------------------------------------------

Yes. Time was proportional to the bytes compacted [bytesread graph].

> Evaluate default LCS sstable size
> ---------------------------------
>
>                 Key: CASSANDRA-5727
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5727
git commit: Handle no matching endpoint for hint target
Updated Branches:
  refs/heads/trunk 318b00e25 -> f2be80c61

Handle no matching endpoint for hint target

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f2be80c6
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f2be80c6
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f2be80c6

Branch: refs/heads/trunk
Commit: f2be80c6170f469a42b0f01d91b4ad206b5c1cf3
Parents: 318b00e
Author: Tyler Hobbs <ty...@datastax.com>
Authored: Mon Jul 29 13:15:00 2013 -0500
Committer: Yuki Morishita <yu...@apache.org>
Committed: Tue Jul 30 12:46:15 2013 -0500

----------------------------------------------------------------------
 src/java/org/apache/cassandra/db/HintedHandOffManager.java | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f2be80c6/src/java/org/apache/cassandra/db/HintedHandOffManager.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/HintedHandOffManager.java b/src/java/org/apache/cassandra/db/HintedHandOffManager.java
index 6b69354..014a4cc 100644
--- a/src/java/org/apache/cassandra/db/HintedHandOffManager.java
+++ b/src/java/org/apache/cassandra/db/HintedHandOffManager.java
@@ -122,7 +122,13 @@ public class HintedHandOffManager implements HintedHandOffManagerMBean
     public RowMutation hintFor(RowMutation mutation, int ttl, UUID targetId)
     {
         assert ttl > 0;
-        metrics.incrCreatedHints(StorageService.instance.getTokenMetadata().getEndpointForHostId(targetId));
+
+        InetAddress endpoint = StorageService.instance.getTokenMetadata().getEndpointForHostId(targetId);
+        // during tests we may not have a matching endpoint, but this would be unexpected in real clusters
+        if (endpoint != null)
+            metrics.incrCreatedHints(endpoint);
+        else
+            logger.warn("Unable to find matching endpoint for target {} when storing a hint", targetId);
 
         UUID hintId = UUIDGen.getTimeUUID();
         // serialize the hint with id and version as a composite column name
[jira] [Created] (CASSANDRA-5830) Paxos loops endlessly due to faulty condition check
Soumava Ghosh created CASSANDRA-5830:
------------------------------------

             Summary: Paxos loops endlessly due to faulty condition check
                 Key: CASSANDRA-5830
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5830
             Project: Cassandra
          Issue Type: Bug
    Affects Versions: 2.0 beta 2
            Reporter: Soumava Ghosh

The following code segment (StorageProxy.java:328) causes the issue. start is the start time of the Paxos round and is always less than the current system time, so start - System.nanoTime() is negative and therefore always less than the timeout:

{noformat}
private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata, List<InetAddress> liveEndpoints, int requiredParticipants, ConsistencyLevel consistencyForPaxos)
throws WriteTimeoutException
{
    long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());

    PrepareCallback summary = null;
    while (start - System.nanoTime() < timeout)
    {
        long ballotMillis = summary == null
                          ? System.currentTimeMillis()
                          : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
        UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
{noformat}

Here, Paxos gets stuck when PREPARE returns 'true' but with an inProgressCommit. The code in StorageProxy.beginAndRepairPaxos() then tries to issue a PREPARE and COMMIT for the inProgressCommit, and if it repeatedly receives 'false' as a PREPARE_RESPONSE it stays in an endless loop until a PREPARE_RESPONSE is true.
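The inverted comparison can be demonstrated in isolation. faultyCheck mirrors the condition quoted in the report; elapsedCheck shows the obvious elapsed-time orientation (the names are for illustration and are not taken from the committed fix):

```java
import java.util.concurrent.TimeUnit;

public class TimeoutCheckDemo
{
    // Faulty orientation from the report: start - now is negative once any
    // time has passed, so it is always < timeout and can never expire.
    static boolean faultyCheck(long start, long now, long timeout)
    {
        return start - now < timeout;
    }

    // Elapsed-time orientation: expires once now - start exceeds the timeout.
    static boolean elapsedCheck(long start, long now, long timeout)
    {
        return now - start < timeout;
    }

    public static void main(String[] args)
    {
        long timeout = TimeUnit.MILLISECONDS.toNanos(1000);
        long start = System.nanoTime();
        long now = start + TimeUnit.SECONDS.toNanos(5); // pretend 5 seconds have elapsed

        System.out.println(faultyCheck(start, now, timeout));  // still true: the loop keeps spinning
        System.out.println(elapsedCheck(start, now, timeout)); // false: the loop times out
    }
}
```

With the faulty orientation, the contention timeout is effectively infinite, which is why the retry loop only exits when a PREPARE_RESPONSE finally comes back true.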
[jira] [Updated] (CASSANDRA-5820) sstableloader broken in 1.2.7/1.2.8
[ https://issues.apache.org/jira/browse/CASSANDRA-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-5820: --- Attachment: 0002-Create-CompressedFile-common-interface.patch

Patch 0002 creates a common interface for the two Compressed*File classes to allow access to the compression metadata without casting.

sstableloader broken in 1.2.7/1.2.8 --- Key: CASSANDRA-5820 URL: https://issues.apache.org/jira/browse/CASSANDRA-5820 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.7, 1.2.8 Reporter: Nick Bailey Assignee: Tyler Hobbs Attachments: 0001-Add-SSTableLoader-unit-test.patch, 0002-Create-CompressedFile-common-interface.patch

I don't see this happen on 1.2.6. To reproduce (on a fresh single node cluster):

{noformat}
[Nicks-MacBook-Pro:11:33:06 (cassandra-1.2.7)*] cassandra$ bin/cqlsh
Connected to Test Cluster at localhost:9160.
[cqlsh 3.1.4 | Cassandra 1.2.7-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 19.36.0]
cqlsh> CREATE KEYSPACE test_backup_restore WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
cqlsh> use test_backup_restore;
cqlsh:test_backup_restore> CREATE TABLE cf0 (
                       ...   a text PRIMARY KEY,
                       ...   b text,
                       ...   c text
                       ... );
cqlsh:test_backup_restore> INSERT INTO cf0 (a, b, c) VALUES ( 'a', 'b', 'c');
cqlsh:test_backup_restore> select * from cf0;

 a | b | c
---+---+---
 a | b | c

cqlsh:test_backup_restore> ^D
[Nicks-MacBook-Pro:11:34:22 (cassandra-1.2.7)*] cassandra$ bin/nodetool snapshot
Requested creating snapshot for: all keyspaces
Snapshot directory: 1375115668449
[Nicks-MacBook-Pro:11:34:40 (cassandra-1.2.7)*] cassandra$ mkdir -p test_backup_restore/snapshots
[Nicks-MacBook-Pro:11:34:48 (cassandra-1.2.7)*] cassandra$ cp /var/lib/cassandra/data/test_backup_restore/cf0/snapshots/1375115668449/* test_backup_restore/snapshots/
[Nicks-MacBook-Pro:11:35:14 (cassandra-1.2.7)*] cassandra$ bin/sstableloader --debug -v -d 127.0.0.1 test_backup_restore/snapshots
Streaming revelant part of test_backup_restore/snapshots/test_backup_restore-cf0-ic-1-Data.db to [/127.0.0.1]
org.apache.cassandra.io.util.CompressedSegmentedFile cannot be cast to org.apache.cassandra.io.util.CompressedPoolingSegmentedFile
java.lang.ClassCastException: org.apache.cassandra.io.util.CompressedSegmentedFile cannot be cast to org.apache.cassandra.io.util.CompressedPoolingSegmentedFile
	at org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:574)
	at org.apache.cassandra.streaming.StreamOut.createPendingFiles(StreamOut.java:179)
	at org.apache.cassandra.streaming.StreamOut.transferSSTables(StreamOut.java:154)
	at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:145)
	at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:67)
{noformat}

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
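The ClassCastException happens because the streaming code downcasts every compressed data file to one concrete class. The "common interface" idea from patch 0002 can be sketched as follows; all names here are illustrative stand-ins, not the actual Cassandra types or the real patch:

```java
// Hypothetical sketch of the common-interface approach: both compressed
// file implementations expose their metadata through one interface, so
// callers no longer need an unchecked downcast.
interface ICompressedFile {
    String metadata(); // stand-in for the real CompressionMetadata accessor
}

class CompressedSegmentedFile implements ICompressedFile {
    public String metadata() { return "segmented-metadata"; }
}

class CompressedPoolingSegmentedFile implements ICompressedFile {
    public String metadata() { return "pooling-metadata"; }
}

public class CommonInterfaceDemo {
    // Works for either implementation; the 1.2.7 bug was the equivalent of
    // unconditionally casting to CompressedPoolingSegmentedFile here.
    static String getCompressionMetadata(ICompressedFile file) {
        return file.metadata();
    }

    public static void main(String[] args) {
        System.out.println(getCompressionMetadata(new CompressedSegmentedFile()));        // segmented-metadata
        System.out.println(getCompressionMetadata(new CompressedPoolingSegmentedFile())); // pooling-metadata
    }
}
```

Programming against the interface keeps sstableloader (which uses the non-pooling file variant) on the same code path as the server.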
[jira] [Commented] (CASSANDRA-5826) Fix trigger directory detection code
[ https://issues.apache.org/jira/browse/CASSANDRA-5826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13724240#comment-13724240 ] Vijay commented on CASSANDRA-5826: -- We probably have to change build.xml to copy the trigger directory into build, like we do with the conf directory? I will add the above, and maybe also add it to the Debian package (in addition to adding a property to override the trigger directory's absolute path). Fix trigger directory detection code Key: CASSANDRA-5826 URL: https://issues.apache.org/jira/browse/CASSANDRA-5826 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 2.0 beta 2 Environment: OS X Reporter: Aleksey Yeschenko Assignee: Vijay Labels: triggers At least when building from source, Cassandra determines the trigger directory incorrectly: C* calculates it as 'build/triggers' instead of 'triggers'. FBUtilities.cassandraHomeDir() is to blame and should be replaced with something more robust. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2698) Instrument repair to be able to assess its efficiency (precision)
[ https://issues.apache.org/jira/browse/CASSANDRA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13724263#comment-13724263 ] Benedict commented on CASSANDRA-2698: - Thanks Yuki, sounds like my changes are fine assuming they test okay. I have a bit of a furlough between my various trips now (during which I had rather optimistically expected to find time to test this), so I should be able to get a patch over in the next couple of days or so. Instrument repair to be able to assess its efficiency (precision) -- Key: CASSANDRA-2698 URL: https://issues.apache.org/jira/browse/CASSANDRA-2698 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Benedict Priority: Minor Labels: lhf Attachments: nodetool_repair_and_cfhistogram.tar.gz, patch_2698_v1.txt, patch.diff, patch-rebased.diff, patch.taketwo.alpha.diff Some reports indicate that repair sometimes transfers huge amounts of data. One hypothesis is that the merkle tree precision may deteriorate too much at some data size. To check this hypothesis, it would be reasonable to gather statistics during the merkle tree building of how many rows each merkle tree range accounts for (and the size that this represents). It is probably an interesting statistic to have anyway. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[1/3] git commit: fix quoting in CqlPagingRecordReader and CqlRecordWriter patch by Alex Liu; reviewed by jbellis for CASSANDRA-5824
Updated Branches: refs/heads/cassandra-1.2 af1a9fef2 -> aa518998c refs/heads/trunk f2be80c61 -> bdc8e2617

fix quoting in CqlPagingRecordReader and CqlRecordWriter patch by Alex Liu; reviewed by jbellis for CASSANDRA-5824

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aa518998
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aa518998
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aa518998

Branch: refs/heads/cassandra-1.2
Commit: aa518998c9114f6cc8de4bb43d5dae7eaa6b06f8
Parents: af1a9fe
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jul 30 14:29:44 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 30 14:29:44 2013 -0500

 CHANGES.txt                                               |  2 ++
 .../cassandra/hadoop/cql3/CqlPagingRecordReader.java      |  8 +---
 .../org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java | 10 --
 3 files changed, 15 insertions(+), 5 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aa518998/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index 9a348f1..bd85e88 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.2.9
+ * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter
+   (CASSANDRA-5824)
  * update default LCS sstable size to 160MB (CASSANDRA-5727)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aa518998/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java

diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
index a900261..7798ac9 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
@@ -430,8 +430,8 @@ public class CqlPagingRecordReader extends RecordReader<Map<String, ByteBuffer>, Map<String, ByteBuffer>>
             columns = withoutKeyColumns(columns);
             columns = (clusterKey == null || "".equals(clusterKey))
-                    ? quote(partitionKey) + "," + columns
-                    : quote(partitionKey) + "," + quote(clusterKey) + "," + columns;
+                    ? partitionKey + "," + columns
+                    : partitionKey + "," + clusterKey + "," + columns;
         }

         String whereStr = userDefinedWhereClauses == null ? "" : " AND " + userDefinedWhereClauses;
@@ -590,7 +590,8 @@ public class CqlPagingRecordReader extends RecordReader<Map<String, ByteBuffer>, Map<String, ByteBuffer>>
     }

     /** Quoting for working with uppercase */
-    private String quote(String identifier) {
+    private String quote(String identifier)
+    {
         return "\"" + identifier.replaceAll("\"", "\"\"") + "\"";
     }

@@ -764,3 +765,4 @@ public class CqlPagingRecordReader extends RecordReader<Map<String, ByteBuffer>, Map<String, ByteBuffer>>
         }
     }
 }
+@

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aa518998/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java

diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
index 642d8c4..612f86a 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
@@ -375,10 +375,16 @@ final class CqlRecordWriter extends AbstractColumnFamilyRecordWriter<Map<String, ByteBuffer>, List<ByteBuffer>>
         String keyWhereClause = "";

         for (String partitionKey : partitionKeyColumns)
-            keyWhereClause += String.format("%s = ?", keyWhereClause.isEmpty() ? partitionKey : (" AND " + partitionKey));
+            keyWhereClause += String.format("%s = ?", keyWhereClause.isEmpty() ? quote(partitionKey) : (" AND " + quote(partitionKey)));
         for (String clusterColumn : clusterColumns)
-            keyWhereClause += " AND " + clusterColumn + " = ?";
+            keyWhereClause += " AND " + quote(clusterColumn) + " = ?";

         return cqlQuery + " WHERE " + keyWhereClause;
     }
+
+    /** Quoting for working with uppercase */
+    private String quote(String identifier)
+    {
+        return "\"" + identifier.replaceAll("\"", "\"\"") + "\"";
+    }
 }
[2/3] git commit: fix quoting in CqlPagingRecordReader and CqlRecordWriter patch by Alex Liu; reviewed by jbellis for CASSANDRA-5824
fix quoting in CqlPagingRecordReader and CqlRecordWriter patch by Alex Liu; reviewed by jbellis for CASSANDRA-5824

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aa518998
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aa518998
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aa518998

Branch: refs/heads/trunk
Commit: aa518998c9114f6cc8de4bb43d5dae7eaa6b06f8
Parents: af1a9fe
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jul 30 14:29:44 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 30 14:29:44 2013 -0500

 CHANGES.txt                                               |  2 ++
 .../cassandra/hadoop/cql3/CqlPagingRecordReader.java      |  8 +---
 .../org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java | 10 --
 3 files changed, 15 insertions(+), 5 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aa518998/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index 9a348f1..bd85e88 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 1.2.9
+ * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter
+   (CASSANDRA-5824)
  * update default LCS sstable size to 160MB (CASSANDRA-5727)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aa518998/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java

diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
index a900261..7798ac9 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
@@ -430,8 +430,8 @@ public class CqlPagingRecordReader extends RecordReader<Map<String, ByteBuffer>, Map<String, ByteBuffer>>
             columns = withoutKeyColumns(columns);
             columns = (clusterKey == null || "".equals(clusterKey))
-                    ? quote(partitionKey) + "," + columns
-                    : quote(partitionKey) + "," + quote(clusterKey) + "," + columns;
+                    ? partitionKey + "," + columns
+                    : partitionKey + "," + clusterKey + "," + columns;
         }

         String whereStr = userDefinedWhereClauses == null ? "" : " AND " + userDefinedWhereClauses;
@@ -590,7 +590,8 @@ public class CqlPagingRecordReader extends RecordReader<Map<String, ByteBuffer>, Map<String, ByteBuffer>>
     }

     /** Quoting for working with uppercase */
-    private String quote(String identifier) {
+    private String quote(String identifier)
+    {
         return "\"" + identifier.replaceAll("\"", "\"\"") + "\"";
     }

@@ -764,3 +765,4 @@ public class CqlPagingRecordReader extends RecordReader<Map<String, ByteBuffer>, Map<String, ByteBuffer>>
         }
     }
 }
+@

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aa518998/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java

diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
index 642d8c4..612f86a 100644
--- a/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
+++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java
@@ -375,10 +375,16 @@ final class CqlRecordWriter extends AbstractColumnFamilyRecordWriter<Map<String, ByteBuffer>, List<ByteBuffer>>
         String keyWhereClause = "";

         for (String partitionKey : partitionKeyColumns)
-            keyWhereClause += String.format("%s = ?", keyWhereClause.isEmpty() ? partitionKey : (" AND " + partitionKey));
+            keyWhereClause += String.format("%s = ?", keyWhereClause.isEmpty() ? quote(partitionKey) : (" AND " + quote(partitionKey)));
         for (String clusterColumn : clusterColumns)
-            keyWhereClause += " AND " + clusterColumn + " = ?";
+            keyWhereClause += " AND " + quote(clusterColumn) + " = ?";

         return cqlQuery + " WHERE " + keyWhereClause;
     }
+
+    /** Quoting for working with uppercase */
+    private String quote(String identifier)
+    {
+        return "\"" + identifier.replaceAll("\"", "\"\"") + "\"";
+    }
 }
[3/3] git commit: merge from 1.2
merge from 1.2

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bdc8e261
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bdc8e261
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bdc8e261

Branch: refs/heads/trunk
Commit: bdc8e26171b5000b69299122d5c773e46e4e9749
Parents: f2be80c aa51899
Author: Jonathan Ellis jbel...@apache.org
Authored: Tue Jul 30 14:32:13 2013 -0500
Committer: Jonathan Ellis jbel...@apache.org
Committed: Tue Jul 30 14:32:13 2013 -0500

 CHANGES.txt                                               |  2 ++
 .../cassandra/hadoop/cql3/CqlPagingRecordReader.java      |  8 +---
 .../org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java | 10 --
 3 files changed, 15 insertions(+), 5 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bdc8e261/CHANGES.txt

diff --cc CHANGES.txt
index 4717e91,bd85e88..83310ed
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,13 -1,10 +1,15 @@@
-1.2.9
+2.0.0-rc1
+ * fix potential spurious wakeup in AsyncOneResponse (CASSANDRA-5690)
+ * fix schema-related trigger issues (CASSANDRA-5774)
+ * Better validation when accessing CQL3 table from thrift (CASSANDRA-5138)
+ * Fix assertion error during repair (CASSANDRA-5801)
+ * Fix range tombstone bug (CASSANDRA-5805)
+ * DC-local CAS (CASSANDRA-5797)
+ * Add a native_protocol_version column to the system.local table (CASSANDRA-5819)
+Merged from 1.2:
+ * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter
+   (CASSANDRA-5824)
  * update default LCS sstable size to 160MB (CASSANDRA-5727)
 -1.2.8
  * Fix reading DeletionTime from 1.1-format sstables (CASSANDRA-5814)
  * cqlsh: add collections support to COPY (CASSANDRA-5698)
  * retry important messages for any IOException (CASSANDRA-5804)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/bdc8e261/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java
http://git-wip-us.apache.org/repos/asf/cassandra/blob/bdc8e261/src/java/org/apache/cassandra/hadoop/cql3/CqlRecordWriter.java --
[jira] [Updated] (CASSANDRA-5824) Fix quoting in CqlPagingRecordReader and CqlRecordWriter
[ https://issues.apache.org/jira/browse/CASSANDRA-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-5824: -- Reviewer: jbellis Fix Version/s: 1.2.9 Fix quoting in CqlPagingRecordReader and CqlRecordWriter Key: CASSANDRA-5824 URL: https://issues.apache.org/jira/browse/CASSANDRA-5824 Project: Cassandra Issue Type: Bug Components: Hadoop Affects Versions: 1.2.7 Reporter: Alex Liu Assignee: Alex Liu Fix For: 1.2.9 Attachments: 5824-1.2-branch.txt To support case sensitivity in CQL, we need to add double quotes around the names of columns and tables. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5824) Fix quoting in CqlPagingRecordReader and CqlRecordWriter
[ https://issues.apache.org/jira/browse/CASSANDRA-5824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-5824: -- Summary: Fix quoting in CqlPagingRecordReader and CqlRecordWriter (was: Support case sensitive in CqlPagingRecordReader and CqlRecordWriter) To clarify: this fixes redundant quoting in CqlPRR of what keyString has already quoted for us, and adds quoting to CqlRW. Fix quoting in CqlPagingRecordReader and CqlRecordWriter Key: CASSANDRA-5824 URL: https://issues.apache.org/jira/browse/CASSANDRA-5824 Project: Cassandra Issue Type: Bug Components: Hadoop Affects Versions: 1.2.7 Reporter: Alex Liu Assignee: Alex Liu Attachments: 5824-1.2-branch.txt To support case sensitivity in CQL, we need to add double quotes around the names of columns and tables. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
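The quoting rule at issue here is the standard CQL3 one: wrap an identifier in double quotes to preserve case, and escape any embedded double quote by doubling it. A minimal standalone sketch (class name is mine; the helper mirrors the quote() method in the patch):

```java
public class CqlQuoting {
    // CQL3 identifier quoting: surround with double quotes and double any
    // embedded double quotes, so case-sensitive names survive parsing.
    static String quote(String identifier) {
        return "\"" + identifier.replaceAll("\"", "\"\"") + "\"";
    }

    public static void main(String[] args) {
        System.out.println(quote("MixedCaseColumn")); // "MixedCaseColumn"
        System.out.println(quote("odd\"name"));       // "odd""name"
    }
}
```

The bug being fixed was applying this helper twice on one path (producing invalid doubly-quoted names in CqlPagingRecordReader) and not at all on the other (CqlRecordWriter), so mixed-case column names broke in one direction or the other.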
[3/3] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/369415c2 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/369415c2 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/369415c2 Branch: refs/heads/trunk Commit: 369415c262e95cb0ebbb9ea6325f3ce76747fb08 Parents: bdc8e26 e873213 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 14:33:42 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 14:33:42 2013 -0500 -- .../org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java | 1 - 1 file changed, 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/369415c2/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java --
[2/3] git commit: r/m stray @
r/m stray @ Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e8732139 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e8732139 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e8732139 Branch: refs/heads/trunk Commit: e8732139d8d32308d327cee3641ed390aca5905c Parents: aa51899 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 14:33:28 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 14:33:36 2013 -0500 -- .../org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java | 1 - 1 file changed, 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e8732139/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java -- diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java index 7798ac9..fc07131 100644 --- a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java +++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java @@ -765,4 +765,3 @@ public class CqlPagingRecordReader extends RecordReaderMapString, ByteBuffer, } } } -@
[1/3] git commit: r/m stray @
Updated Branches: refs/heads/cassandra-1.2 aa518998c - e8732139d refs/heads/trunk bdc8e2617 - 369415c26 r/m stray @ Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e8732139 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e8732139 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e8732139 Branch: refs/heads/cassandra-1.2 Commit: e8732139d8d32308d327cee3641ed390aca5905c Parents: aa51899 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 14:33:28 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 14:33:36 2013 -0500 -- .../org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java | 1 - 1 file changed, 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e8732139/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java -- diff --git a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java index 7798ac9..fc07131 100644 --- a/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java +++ b/src/java/org/apache/cassandra/hadoop/cql3/CqlPagingRecordReader.java @@ -765,4 +765,3 @@ public class CqlPagingRecordReader extends RecordReaderMapString, ByteBuffer, } } } -@
[jira] [Commented] (CASSANDRA-5752) Thrift tables are not supported from CqlPagingInputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-5752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13724311#comment-13724311 ] Jonathan Ellis commented on CASSANDRA-5752: --- But you're checking for key_aliases being null earlier. Which is it going to be? Both checks should not be necessary. Also, I think the {{rows.size()==0}} check is bogus; there should always be an entry in schema_columnfamilies. If there isn't, falling back to describe_columnfamilies isn't going to help. Thrift tables are not supported from CqlPagingInputFormat - Key: CASSANDRA-5752 URL: https://issues.apache.org/jira/browse/CASSANDRA-5752 Project: Cassandra Issue Type: Bug Components: Hadoop Affects Versions: 1.2.6 Reporter: Jonathan Ellis Assignee: Alex Liu Fix For: 1.2.9 Attachments: 5752-1-1.2-branch.txt, 5752-1.2-branch.txt CqlPagingInputFormat inspects the system schema to generate the WHERE clauses needed to page wide rows, but for a classic Thrift table there are no entries for the default column names of key, column1, column2, ..., value so CPIF breaks. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5670) running compact on an index did not compact two index files into one
[ https://issues.apache.org/jira/browse/CASSANDRA-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13724309#comment-13724309 ] Jason Brown commented on CASSANDRA-5670: (Getting back to this one now). I worked from the list of tasks that Jonathan provided in the comment linked to above - and, yeah, looks like compact was left out or forgotten. Will work on adding compacting 2is today. running compact on an index did not compact two index files into one Key: CASSANDRA-5670 URL: https://issues.apache.org/jira/browse/CASSANDRA-5670 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.2 Reporter: Cathy Daw Assignee: Jason Brown Priority: Minor Fix For: 1.2.9 With a data directory containing secondary index files ending in -1 and -2, I expected that when I ran compact against the index that they would compact down to a set of -3 files. This column family uses SizeTieredCompactionStrategy. Using our standard CQL example, the compact command used was: $ ./nodetool compact test1 test1-playlists.playlists_artist_idx Please note: reproducing this test on 1.1.12 (using a single primary key), you will see that running compact on the keyspace also does not compact the index file. There is no option to compact the index, so I could not compare that. 
{noformat} CREATE KEYSPACE test1 WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}; use test1; CREATE TABLE playlists ( id uuid, song_order int, song_id uuid, title text, album text, artist text, PRIMARY KEY (id, song_order ) ); INSERT INTO playlists (id, song_order, song_id, title, artist, album) VALUES (62c36092-82a1-3a00-93d1-46196ee77204, 1, a3e64f8f-bd44-4f28-b8d9-6938726e34d4, 'La Grange', 'ZZ Top', 'Tres Hombres'); select * from playlists; = ./nodetool flush test1 $ ls /var/lib/cassandra/data/test1/playlists test1-playlists-ic-1-CompressionInfo.db test1-playlists-ic-1-Data.db test1-playlists-ic-1-Filter.db test1-playlists-ic-1-Index.db test1-playlists-ic-1-Statistics.db test1-playlists-ic-1-Summary.db test1-playlists-ic-1-TOC.txt = CREATE INDEX ON playlists(artist ); select * from playlists; select * from playlists where artist = 'ZZ Top'; = $ ./nodetool flush test1 $ ls /var/lib/cassandra/data/test1/playlists test1-playlists-ic-1-CompressionInfo.db test1-playlists-ic-1-Data.db test1-playlists-ic-1-Filter.db test1-playlists-ic-1-Index.db test1-playlists-ic-1-Statistics.db test1-playlists-ic-1-Summary.db test1-playlists-ic-1-TOC.txt test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db test1-playlists.playlists_artist_idx-ic-1-Data.db test1-playlists.playlists_artist_idx-ic-1-Filter.db test1-playlists.playlists_artist_idx-ic-1-Index.db test1-playlists.playlists_artist_idx-ic-1-Statistics.db test1-playlists.playlists_artist_idx-ic-1-Summary.db test1-playlists.playlists_artist_idx-ic-1-TOC.txt = delete artist from playlists where id = 62c36092-82a1-3a00-93d1-46196ee77204 and song_order = 1; select * from playlists; select * from playlists where artist = 'ZZ Top'; = $ ./nodetool flush test1 $ ls /var/lib/cassandra/data/test1/playlists test1-playlists-ic-1-CompressionInfo.db test1-playlists-ic-1-Data.db test1-playlists-ic-1-Filter.db test1-playlists-ic-1-Index.db test1-playlists-ic-1-Statistics.db test1-playlists-ic-1-Summary.db 
test1-playlists-ic-1-TOC.txt test1-playlists-ic-2-CompressionInfo.db test1-playlists-ic-2-Data.db test1-playlists-ic-2-Filter.db test1-playlists-ic-2-Index.db test1-playlists-ic-2-Statistics.db test1-playlists-ic-2-Summary.db test1-playlists-ic-2-TOC.txt
[jira] [Updated] (CASSANDRA-5820) sstableloader broken in 1.2.7/1.2.8
[ https://issues.apache.org/jira/browse/CASSANDRA-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-5820: -- Reviewer: jbellis Component/s: Tools Affects Version/s: (was: 1.2.8) Fix Version/s: 1.2.9

sstableloader broken in 1.2.7/1.2.8 --- Key: CASSANDRA-5820 URL: https://issues.apache.org/jira/browse/CASSANDRA-5820 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.7 Reporter: Nick Bailey Assignee: Tyler Hobbs Fix For: 1.2.9 Attachments: 0001-Add-SSTableLoader-unit-test.patch, 0002-Create-CompressedFile-common-interface.patch

I don't see this happen on 1.2.6. To reproduce (on a fresh single node cluster):

{noformat}
[Nicks-MacBook-Pro:11:33:06 (cassandra-1.2.7)*] cassandra$ bin/cqlsh
Connected to Test Cluster at localhost:9160.
[cqlsh 3.1.4 | Cassandra 1.2.7-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 19.36.0]
cqlsh> CREATE KEYSPACE test_backup_restore WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
cqlsh> use test_backup_restore;
cqlsh:test_backup_restore> CREATE TABLE cf0 (
                       ...   a text PRIMARY KEY,
                       ...   b text,
                       ...   c text
                       ... );
cqlsh:test_backup_restore> INSERT INTO cf0 (a, b, c) VALUES ( 'a', 'b', 'c');
cqlsh:test_backup_restore> select * from cf0;

 a | b | c
---+---+---
 a | b | c

cqlsh:test_backup_restore> ^D
[Nicks-MacBook-Pro:11:34:22 (cassandra-1.2.7)*] cassandra$ bin/nodetool snapshot
Requested creating snapshot for: all keyspaces
Snapshot directory: 1375115668449
[Nicks-MacBook-Pro:11:34:40 (cassandra-1.2.7)*] cassandra$ mkdir -p test_backup_restore/snapshots
[Nicks-MacBook-Pro:11:34:48 (cassandra-1.2.7)*] cassandra$ cp /var/lib/cassandra/data/test_backup_restore/cf0/snapshots/1375115668449/* test_backup_restore/snapshots/
[Nicks-MacBook-Pro:11:35:14 (cassandra-1.2.7)*] cassandra$ bin/sstableloader --debug -v -d 127.0.0.1 test_backup_restore/snapshots
Streaming revelant part of test_backup_restore/snapshots/test_backup_restore-cf0-ic-1-Data.db to [/127.0.0.1]
org.apache.cassandra.io.util.CompressedSegmentedFile cannot be cast to org.apache.cassandra.io.util.CompressedPoolingSegmentedFile
java.lang.ClassCastException: org.apache.cassandra.io.util.CompressedSegmentedFile cannot be cast to org.apache.cassandra.io.util.CompressedPoolingSegmentedFile
	at org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:574)
	at org.apache.cassandra.streaming.StreamOut.createPendingFiles(StreamOut.java:179)
	at org.apache.cassandra.streaming.StreamOut.transferSSTables(StreamOut.java:154)
	at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:145)
	at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:67)
{noformat}

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5670) running compact on an index did not compact two index files into one
[ https://issues.apache.org/jira/browse/CASSANDRA-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Brown updated CASSANDRA-5670: --- Attachment: 5670-v1.diff v1 simply enables compaction on 2Is (changes a false argument to true) running compact on an index did not compact two index files into one Key: CASSANDRA-5670 URL: https://issues.apache.org/jira/browse/CASSANDRA-5670 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.2 Reporter: Cathy Daw Assignee: Jason Brown Priority: Minor Fix For: 1.2.9 Attachments: 5670-v1.diff With a data directory containing secondary index files ending in -1 and -2, I expected that when I ran compact against the index that they would compact down to a set of -3 files. This column family uses SizeTieredCompactionStrategy. Using our standard CQL example, the compact command used was: $ ./nodetool compact test1 test1-playlists.playlists_artist_idx Please note: reproducing this test on 1.1.12 (using a single primary key), you will see that running compact on the keyspace also does not compact the index file. There is no option to compact the index, so I could not compare that. 
{noformat}
CREATE KEYSPACE test1 WITH replication = {'class':'SimpleStrategy', 'replication_factor':1};
use test1;
CREATE TABLE playlists (
    id uuid,
    song_order int,
    song_id uuid,
    title text,
    album text,
    artist text,
    PRIMARY KEY (id, song_order)
);
INSERT INTO playlists (id, song_order, song_id, title, artist, album)
VALUES (62c36092-82a1-3a00-93d1-46196ee77204, 1, a3e64f8f-bd44-4f28-b8d9-6938726e34d4, 'La Grange', 'ZZ Top', 'Tres Hombres');
select * from playlists;

$ ./nodetool flush test1
$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db
test1-playlists-ic-1-Data.db
test1-playlists-ic-1-Filter.db
test1-playlists-ic-1-Index.db
test1-playlists-ic-1-Statistics.db
test1-playlists-ic-1-Summary.db
test1-playlists-ic-1-TOC.txt

CREATE INDEX ON playlists(artist);
select * from playlists;
select * from playlists where artist = 'ZZ Top';

$ ./nodetool flush test1
$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db
test1-playlists-ic-1-Data.db
test1-playlists-ic-1-Filter.db
test1-playlists-ic-1-Index.db
test1-playlists-ic-1-Statistics.db
test1-playlists-ic-1-Summary.db
test1-playlists-ic-1-TOC.txt
test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db
test1-playlists.playlists_artist_idx-ic-1-Data.db
test1-playlists.playlists_artist_idx-ic-1-Filter.db
test1-playlists.playlists_artist_idx-ic-1-Index.db
test1-playlists.playlists_artist_idx-ic-1-Statistics.db
test1-playlists.playlists_artist_idx-ic-1-Summary.db
test1-playlists.playlists_artist_idx-ic-1-TOC.txt

delete artist from playlists where id = 62c36092-82a1-3a00-93d1-46196ee77204 and song_order = 1;
select * from playlists;
select * from playlists where artist = 'ZZ Top';

$ ./nodetool flush test1
$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db
test1-playlists-ic-1-Data.db
test1-playlists-ic-1-Filter.db
test1-playlists-ic-1-Index.db
test1-playlists-ic-1-Statistics.db
test1-playlists-ic-1-Summary.db
test1-playlists-ic-1-TOC.txt
test1-playlists-ic-2-CompressionInfo.db
test1-playlists-ic-2-Data.db
test1-playlists-ic-2-Filter.db
test1-playlists-ic-2-Index.db
test1-playlists-ic-2-Statistics.db
test1-playlists-ic-2-Summary.db
test1-playlists-ic-2-TOC.txt
test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db
test1-playlists.playlists_artist_idx-ic-1-Data.db
[jira] [Created] (CASSANDRA-5831) Running sstableupgrade on C* 1.0 data dir, before starting C* 1.2 for the first time breaks stuff
Jeremiah Jordan created CASSANDRA-5831: -- Summary: Running sstableupgrade on C* 1.0 data dir, before starting C* 1.2 for the first time breaks stuff Key: CASSANDRA-5831 URL: https://issues.apache.org/jira/browse/CASSANDRA-5831 Project: Cassandra Issue Type: Bug Reporter: Jeremiah Jordan If you try to upgrade from C* 1.0.X to 1.2.X and run offline sstableupgrade to try and migrate the sstables before starting 1.2.X for the first time, it messes up the system folder, because it doesn't migrate it right, and then C* 1.2 can't start. sstableupgrade should either refuse to run against a C* 1.0 data folder, or migrate stuff the right way. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[1/3] git commit: fix bulk-loading compressed sstables patch by Tyler Hobbs and yukim; reviewed by jbellis for CASSANDRA-5820
Updated Branches: refs/heads/cassandra-1.2 e8732139d - ab8a28e36 refs/heads/trunk 369415c26 - 1d6bed3ba fix bulk-loading compressed sstables patch by Tyler Hobbs and yukim; reviewed by jbellis for CASSANDRA-5820 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab8a28e3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab8a28e3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab8a28e3 Branch: refs/heads/cassandra-1.2 Commit: ab8a28e365da8ef515a40e838e773d67ad92a282 Parents: e873213 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 14:51:07 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 14:54:08 2013 -0500 -- CHANGES.txt | 1 + .../cassandra/io/sstable/SSTableReader.java | 2 +- .../io/sstable/SSTableSimpleUnsortedWriter.java | 11 ++- .../io/util/CompressedPoolingSegmentedFile.java | 7 +- .../io/util/CompressedSegmentedFile.java| 9 +- .../cassandra/io/sstable/SSTableLoaderTest.java | 98 6 files changed, 120 insertions(+), 8 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab8a28e3/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index bd85e88..8578855 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 1.2.9 + * fix bulk-loading compressed sstables (CASSANDRA-5820) * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter (CASSANDRA-5824) * update default LCS sstable size to 160MB (CASSANDRA-5727) http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab8a28e3/src/java/org/apache/cassandra/io/sstable/SSTableReader.java -- diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java index 7f52bcf..e9a03c8 100644 --- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java +++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java @@ -571,7 +571,7 @@ public class SSTableReader extends SSTable if 
(!compression) throw new IllegalStateException(this + is not compressed); -return ((CompressedPoolingSegmentedFile)dfile).metadata; +return ((ICompressedFile) dfile).getMetadata(); } /** http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab8a28e3/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java -- diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java b/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java index 9207276..48770d3 100644 --- a/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java +++ b/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java @@ -77,9 +77,7 @@ public class SSTableSimpleUnsortedWriter extends AbstractSSTableSimpleWriter int bufferSizeInMB, CompressionParameters compressParameters) { -super(directory, new CFMetaData(keyspace, columnFamily, subComparator == null ? ColumnFamilyType.Standard : ColumnFamilyType.Super, comparator, subComparator).compressionParameters(compressParameters), partitioner); -this.bufferSize = bufferSizeInMB * 1024L * 1024L; -this.diskWriter.start(); +this(directory, new CFMetaData(keyspace, columnFamily, subComparator == null ? 
ColumnFamilyType.Standard : ColumnFamilyType.Super, comparator, subComparator).compressionParameters(compressParameters), partitioner, bufferSizeInMB); } public SSTableSimpleUnsortedWriter(File directory, @@ -93,6 +91,13 @@ public class SSTableSimpleUnsortedWriter extends AbstractSSTableSimpleWriter this(directory, partitioner, keyspace, columnFamily, comparator, subComparator, bufferSizeInMB, new CompressionParameters(null)); } +public SSTableSimpleUnsortedWriter(File directory, CFMetaData metadata, IPartitioner partitioner, long bufferSizeInMB) +{ +super(directory, metadata, partitioner); +this.bufferSize = bufferSizeInMB * 1024L * 1024L; +this.diskWriter.start(); +} + protected void writeRow(DecoratedKey key, ColumnFamily columnFamily) throws IOException { currentSize += key.key.remaining() + ColumnFamily.serializer.serializedSize(columnFamily, MessagingService.current_version) * 1.2; http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab8a28e3/src/java/org/apache/cassandra/io/util/CompressedPoolingSegmentedFile.java
[2/3] git commit: fix bulk-loading compressed sstables patch by Tyler Hobbs and yukim; reviewed by jbellis for CASSANDRA-5820
fix bulk-loading compressed sstables patch by Tyler Hobbs and yukim; reviewed by jbellis for CASSANDRA-5820 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ab8a28e3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ab8a28e3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ab8a28e3 Branch: refs/heads/trunk Commit: ab8a28e365da8ef515a40e838e773d67ad92a282 Parents: e873213 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 14:51:07 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 14:54:08 2013 -0500 -- CHANGES.txt | 1 + .../cassandra/io/sstable/SSTableReader.java | 2 +- .../io/sstable/SSTableSimpleUnsortedWriter.java | 11 ++- .../io/util/CompressedPoolingSegmentedFile.java | 7 +- .../io/util/CompressedSegmentedFile.java| 9 +- .../cassandra/io/sstable/SSTableLoaderTest.java | 98 6 files changed, 120 insertions(+), 8 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab8a28e3/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index bd85e88..8578855 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 1.2.9 + * fix bulk-loading compressed sstables (CASSANDRA-5820) * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter (CASSANDRA-5824) * update default LCS sstable size to 160MB (CASSANDRA-5727) http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab8a28e3/src/java/org/apache/cassandra/io/sstable/SSTableReader.java -- diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java index 7f52bcf..e9a03c8 100644 --- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java +++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java @@ -571,7 +571,7 @@ public class SSTableReader extends SSTable if (!compression) throw new IllegalStateException(this + is not compressed); -return 
((CompressedPoolingSegmentedFile)dfile).metadata; +return ((ICompressedFile) dfile).getMetadata(); } /** http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab8a28e3/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java -- diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java b/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java index 9207276..48770d3 100644 --- a/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java +++ b/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java @@ -77,9 +77,7 @@ public class SSTableSimpleUnsortedWriter extends AbstractSSTableSimpleWriter int bufferSizeInMB, CompressionParameters compressParameters) { -super(directory, new CFMetaData(keyspace, columnFamily, subComparator == null ? ColumnFamilyType.Standard : ColumnFamilyType.Super, comparator, subComparator).compressionParameters(compressParameters), partitioner); -this.bufferSize = bufferSizeInMB * 1024L * 1024L; -this.diskWriter.start(); +this(directory, new CFMetaData(keyspace, columnFamily, subComparator == null ? 
ColumnFamilyType.Standard : ColumnFamilyType.Super, comparator, subComparator).compressionParameters(compressParameters), partitioner, bufferSizeInMB); } public SSTableSimpleUnsortedWriter(File directory, @@ -93,6 +91,13 @@ public class SSTableSimpleUnsortedWriter extends AbstractSSTableSimpleWriter this(directory, partitioner, keyspace, columnFamily, comparator, subComparator, bufferSizeInMB, new CompressionParameters(null)); } +public SSTableSimpleUnsortedWriter(File directory, CFMetaData metadata, IPartitioner partitioner, long bufferSizeInMB) +{ +super(directory, metadata, partitioner); +this.bufferSize = bufferSizeInMB * 1024L * 1024L; +this.diskWriter.start(); +} + protected void writeRow(DecoratedKey key, ColumnFamily columnFamily) throws IOException { currentSize += key.key.remaining() + ColumnFamily.serializer.serializedSize(columnFamily, MessagingService.current_version) * 1.2; http://git-wip-us.apache.org/repos/asf/cassandra/blob/ab8a28e3/src/java/org/apache/cassandra/io/util/CompressedPoolingSegmentedFile.java -- diff --git a/src/java/org/apache/cassandra/io/util/CompressedPoolingSegmentedFile.java
[3/3] git commit: merge from 1.2
merge from 1.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1d6bed3b Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1d6bed3b Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1d6bed3b Branch: refs/heads/trunk Commit: 1d6bed3ba8b19fe1c618155a5d7806b9bb4c6c4e Parents: 369415c ab8a28e Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 15:08:18 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 15:08:18 2013 -0500 -- CHANGES.txt | 1 + .../cassandra/io/sstable/SSTableReader.java | 2 +- .../io/sstable/SSTableSimpleUnsortedWriter.java | 11 ++- .../io/util/CompressedPoolingSegmentedFile.java | 9 +- .../io/util/CompressedSegmentedFile.java| 7 +- .../cassandra/service/StorageService.java | 2 - .../cassandra/io/sstable/SSTableLoaderTest.java | 89 7 files changed, 112 insertions(+), 9 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d6bed3b/CHANGES.txt -- diff --cc CHANGES.txt index 83310ed,8578855..a9cb47c --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,12 -1,5 +1,13 @@@ -1.2.9 +2.0.0-rc1 + * fix potential spurious wakeup in AsyncOneResponse (CASSANDRA-5690) + * fix schema-related trigger issues (CASSANDRA-5774) + * Better validation when accessing CQL3 table from thrift (CASSANDRA-5138) + * Fix assertion error during repair (CASSANDRA-5801) + * Fix range tombstone bug (CASSANDRA-5805) + * DC-local CAS (CASSANDRA-5797) + * Add a native_protocol_version column to the system.local table (CASSANRDA-5819) +Merged from 1.2: + * fix bulk-loading compressed sstables (CASSANDRA-5820) * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter (CASSANDRA-5824) * update default LCS sstable size to 160MB (CASSANDRA-5727) http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d6bed3b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java -- 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d6bed3b/src/java/org/apache/cassandra/io/sstable/SSTableSimpleUnsortedWriter.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d6bed3b/src/java/org/apache/cassandra/io/util/CompressedPoolingSegmentedFile.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d6bed3b/src/java/org/apache/cassandra/service/StorageService.java -- diff --cc src/java/org/apache/cassandra/service/StorageService.java index bec8c8d,26c4d1c..d0581fb --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@@ -3526,7 -3822,7 +3526,6 @@@ public class StorageService extends Not SSTableLoader.Client client = new SSTableLoader.Client() { --@Override public void init(String keyspace) { try @@@ -3545,10 -3841,10 +3544,9 @@@ } } --@Override -public boolean validateColumnFamily(String keyspace, String cfName) +public CFMetaData getCFMetaData(String keyspace, String cfName) { -return Schema.instance.getCFMetaData(keyspace, cfName) != null; +return Schema.instance.getCFMetaData(keyspace, cfName); } }; http://git-wip-us.apache.org/repos/asf/cassandra/blob/1d6bed3b/test/unit/org/apache/cassandra/io/sstable/SSTableLoaderTest.java -- diff --cc test/unit/org/apache/cassandra/io/sstable/SSTableLoaderTest.java index 000,8fa886e..236ee2d mode 00,100644..100644 --- a/test/unit/org/apache/cassandra/io/sstable/SSTableLoaderTest.java +++ b/test/unit/org/apache/cassandra/io/sstable/SSTableLoaderTest.java @@@ -1,0 -1,98 +1,89 @@@ + /* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * License); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is
[jira] [Commented] (CASSANDRA-5820) sstableloader broken in 1.2.7/1.2.8
[ https://issues.apache.org/jira/browse/CASSANDRA-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724342#comment-13724342 ] Jonathan Ellis commented on CASSANDRA-5820: --- committed sstableloader broken in 1.2.7/1.2.8 --- Key: CASSANDRA-5820 URL: https://issues.apache.org/jira/browse/CASSANDRA-5820 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.7 Reporter: Nick Bailey Assignee: Tyler Hobbs Fix For: 1.2.9 Attachments: 0001-Add-SSTableLoader-unit-test.patch, 0002-Create-CompressedFile-common-interface.patch I don't see this happen on 1.2.6. To reproduce (on a fresh single node cluster):
{noformat}
[Nicks-MacBook-Pro:11:33:06 (cassandra-1.2.7)*] cassandra$ bin/cqlsh
Connected to Test Cluster at localhost:9160.
[cqlsh 3.1.4 | Cassandra 1.2.7-SNAPSHOT | CQL spec 3.0.0 | Thrift protocol 19.36.0]
cqlsh> CREATE KEYSPACE test_backup_restore WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
cqlsh> use test_backup_restore;
cqlsh:test_backup_restore> CREATE TABLE cf0 (
                       ...     a text PRIMARY KEY,
                       ...     b text,
                       ...     c text
                       ... );
cqlsh:test_backup_restore> INSERT INTO cf0 (a, b, c) VALUES ( 'a', 'b', 'c');
cqlsh:test_backup_restore> select * from cf0;

 a | b | c
---+---+---
 a | b | c

cqlsh:test_backup_restore> ^D
[Nicks-MacBook-Pro:11:34:22 (cassandra-1.2.7)*] cassandra$ bin/nodetool snapshot
Requested creating snapshot for: all keyspaces
Snapshot directory: 1375115668449
[Nicks-MacBook-Pro:11:34:40 (cassandra-1.2.7)*] cassandra$ mkdir -p test_backup_restore/snapshots
[Nicks-MacBook-Pro:11:34:48 (cassandra-1.2.7)*] cassandra$ cp /var/lib/cassandra/data/test_backup_restore/cf0/snapshots/1375115668449/* test_backup_restore/snapshots/
[Nicks-MacBook-Pro:11:35:14 (cassandra-1.2.7)*] cassandra$ bin/sstableloader --debug -v -d 127.0.0.1 test_backup_restore/snapshots
Streaming revelant part of test_backup_restore/snapshots/test_backup_restore-cf0-ic-1-Data.db to [/127.0.0.1]
org.apache.cassandra.io.util.CompressedSegmentedFile cannot be cast to org.apache.cassandra.io.util.CompressedPoolingSegmentedFile
java.lang.ClassCastException: org.apache.cassandra.io.util.CompressedSegmentedFile cannot be cast to org.apache.cassandra.io.util.CompressedPoolingSegmentedFile
    at org.apache.cassandra.io.sstable.SSTableReader.getCompressionMetadata(SSTableReader.java:574)
    at org.apache.cassandra.streaming.StreamOut.createPendingFiles(StreamOut.java:179)
    at org.apache.cassandra.streaming.StreamOut.transferSSTables(StreamOut.java:154)
    at org.apache.cassandra.io.sstable.SSTableLoader.stream(SSTableLoader.java:145)
    at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:67)
{noformat}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
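The fix referenced in the attachments (a common CompressedFile interface) removes the cast to one concrete subclass. A minimal, self-contained sketch of the pattern — toy classes standing in for the real Cassandra types, with a String in place of CompressionMetadata, and a hypothetical `compressionMetadata` helper in place of `SSTableReader.getCompressionMetadata()`:

```java
// Sketch of the CASSANDRA-5820 fix pattern: both compressed segmented-file
// variants expose their metadata through a shared interface, so callers no
// longer cast to CompressedPoolingSegmentedFile and hit ClassCastException
// when sstableloader hands them a plain CompressedSegmentedFile.
interface ICompressedFile {
    String getMetadata(); // stands in for CompressionMetadata
}

class CompressedPoolingSegmentedFile implements ICompressedFile {
    public String getMetadata() { return "pooling-metadata"; }
}

class CompressedSegmentedFile implements ICompressedFile {
    public String getMetadata() { return "plain-metadata"; }
}

public class CompressedFileSketch {
    // Before: ((CompressedPoolingSegmentedFile) dfile).metadata -- fails for
    // CompressedSegmentedFile. After: dispatch through the interface.
    static String compressionMetadata(Object dfile) {
        return ((ICompressedFile) dfile).getMetadata();
    }

    public static void main(String[] args) {
        // Works for either concrete type now.
        System.out.println(compressionMetadata(new CompressedSegmentedFile()));
        System.out.println(compressionMetadata(new CompressedPoolingSegmentedFile()));
    }
}
```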
[jira] [Commented] (CASSANDRA-5715) CAS on 'primary key only' table
[ https://issues.apache.org/jira/browse/CASSANDRA-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724344#comment-13724344 ] Jonathan Ellis commented on CASSANDRA-5715: --- Delete is not yet supported. CAS on 'primary key only' table --- Key: CASSANDRA-5715 URL: https://issues.apache.org/jira/browse/CASSANDRA-5715 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Sylvain Lebresne Priority: Minor Fix For: 2.0 beta 2 Attachments: 0001-Conditions-on-INSERT.txt, 0002-Support-updating-the-PK-only.txt, 5715-v2.txt Given a table with only a primary key, like {noformat} CREATE TABLE test (k int PRIMARY KEY) {noformat} there is currently no way to CAS a row in that table into existence because:
# INSERT doesn't currently support IF
# UPDATE has no way to update such a table
So we should probably allow IF conditions on INSERT statements. In addition (or alternatively), we could work on allowing UPDATE to update such a table. One motivation for that could be to make UPDATE always be more general than INSERT. That is, currently there are a bunch of operations that INSERT cannot do (counter increments, collection appends), but that primary key table case is, afaik, the only case where you *need* to use INSERT. However, because CQL forces segregation of PK values to the WHERE clause and not the SET one, the only syntax that I can see working would be: {noformat} UPDATE WHERE k=0; {noformat} which maybe is too ugly to allow? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
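The conditions-on-INSERT direction from the attachments can be illustrated against the ticket's example table; a hedged sketch of how such a conditional insert would read (the `IF NOT EXISTS` form shown here is the CQL3 lightweight-transaction syntax that landed in 2.0; the patch under review may have differed in detail):

{noformat}
CREATE TABLE test (k int PRIMARY KEY);

-- CAS the row into existence: the insert applies only if no row
-- with k = 0 exists yet, and the result reports [applied] true/false.
INSERT INTO test (k) VALUES (0) IF NOT EXISTS;
{noformat}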
[jira] [Commented] (CASSANDRA-5761) Issue with secondary index sstable.
[ https://issues.apache.org/jira/browse/CASSANDRA-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13724343#comment-13724343 ] Jonathan Ellis commented on CASSANDRA-5761: --- bq. We usually used 28 chars key - first part of the key is event_id and the second is generated string numbers - but the length was 28. The problem started when for some event_id length has changed so the key length became 38. When we found who changed the event_id length and fixed that the problem has gone. But what if in the future we will change the length again? Sounds to me like your new keys were overlapping with your old ones unexpectedly. If you use a composite key, you won't have to worry about this happening. Issue with secondary index sstable. --- Key: CASSANDRA-5761 URL: https://issues.apache.org/jira/browse/CASSANDRA-5761 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.5 Reporter: Andriy Yevsyukov Priority: Critical With Cassandra 1.2.5 having issue very similar to [CASSANDRA-5225|https://issues.apache.org/jira/browse/CASSANDRA-5225] but for secondary index sstable. 
Every query that uses this index fails in Hector with ConnectionTimeout but cassandra log says that reason is:
{noformat}
ERROR [ReadStage:55803] 2013-07-15 12:11:35,392 CassandraDaemon.java (line 175) Exception in thread Thread[ReadStage:55803,5,main]
java.lang.RuntimeException: org.apache.cassandra.io.sstable.CorruptSSTableException: org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid column name length 0 (/data/cassandra/data/betting/events/betting-events.events_sport_type_idx-ic-1-Data.db, 19658 bytes remaining)
    at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1582)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.cassandra.io.sstable.CorruptSSTableException: org.apache.cassandra.db.ColumnSerializer$CorruptColumnException: invalid column name length 0 (/data/cassandra/data/betting/events/betting-events.events_sport_type_idx-ic-1-Data.db, 19658 bytes remaining)
    at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:108)
    at org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:39)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:90)
    at org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:171)
    at org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:154)
    at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:143)
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.<init>(MergeIterator.java:86)
    at org.apache.cassandra.utils.MergeIterator.get(MergeIterator.java:45)
    at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:134)
    at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:84)
    at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:293)
    at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:65)
    at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1357)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1214)
    at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1126)
    at org.apache.cassandra.db.index.keys.KeysSearcher$1.computeNext(KeysSearcher.java:140)
    at org.apache.cassandra.db.index.keys.KeysSearcher$1.computeNext(KeysSearcher.java:109)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
    at org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1466)
    at org.apache.cassandra.db.index.keys.KeysSearcher.search(KeysSearcher.java:82)
    at org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:548)
    at org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1454)
    at
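Jonathan's composite-key suggestion above could look like the following — a hedged sketch only, with table and column names hypothetical (inferred from the `events_sport_type_idx` index name in the log); the generated numeric suffix becomes its own clustering column so event_id values of different lengths can never collide:

{noformat}
CREATE TABLE events (
    event_id text,
    seq text,            -- the generated string-number part, as its own column
    sport_type text,
    PRIMARY KEY (event_id, seq)
);
CREATE INDEX ON events (sport_type);

-- event_id and seq are now distinct key components, so changing the
-- length of event_id cannot make two logical keys overlap.
{noformat}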
[jira] [Commented] (CASSANDRA-5670) running compact on an index did not compact two index files into one
[ https://issues.apache.org/jira/browse/CASSANDRA-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13724346#comment-13724346 ] Jonathan Ellis commented on CASSANDRA-5670: --- +1 running compact on an index did not compact two index files into one Key: CASSANDRA-5670 URL: https://issues.apache.org/jira/browse/CASSANDRA-5670 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.2 Reporter: Cathy Daw Assignee: Jason Brown Priority: Minor Labels: nodetool, secondary_index Fix For: 2.0 rc1, 1.2.9 Attachments: 5670-v1.diff With a data directory containing secondary index files ending in -1 and -2, I expected that when I ran compact against the index that they would compact down to a set of -3 files. This column family uses SizeTieredCompactionStrategy. Using our standard CQL example, the compact command used was: $ ./nodetool compact test1 test1-playlists.playlists_artist_idx Please note: reproducing this test on 1.1.12 (using a single primary key), you will see that running compact on the keyspace also does not compact the index file. There is no option to compact the index, so I could not compare that. 
{noformat} CREATE KEYSPACE test1 WITH replication = {'class':'SimpleStrategy', 'replication_factor':1}; use test1; CREATE TABLE playlists ( id uuid, song_order int, song_id uuid, title text, album text, artist text, PRIMARY KEY (id, song_order ) ); INSERT INTO playlists (id, song_order, song_id, title, artist, album) VALUES (62c36092-82a1-3a00-93d1-46196ee77204, 1, a3e64f8f-bd44-4f28-b8d9-6938726e34d4, 'La Grange', 'ZZ Top', 'Tres Hombres'); select * from playlists; = ./nodetool flush test1 $ ls /var/lib/cassandra/data/test1/playlists test1-playlists-ic-1-CompressionInfo.db test1-playlists-ic-1-Data.db test1-playlists-ic-1-Filter.db test1-playlists-ic-1-Index.db test1-playlists-ic-1-Statistics.db test1-playlists-ic-1-Summary.db test1-playlists-ic-1-TOC.txt = CREATE INDEX ON playlists(artist ); select * from playlists; select * from playlists where artist = 'ZZ Top'; = $ ./nodetool flush test1 $ ls /var/lib/cassandra/data/test1/playlists test1-playlists-ic-1-CompressionInfo.db test1-playlists-ic-1-Data.db test1-playlists-ic-1-Filter.db test1-playlists-ic-1-Index.db test1-playlists-ic-1-Statistics.db test1-playlists-ic-1-Summary.db test1-playlists-ic-1-TOC.txt test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db test1-playlists.playlists_artist_idx-ic-1-Data.db test1-playlists.playlists_artist_idx-ic-1-Filter.db test1-playlists.playlists_artist_idx-ic-1-Index.db test1-playlists.playlists_artist_idx-ic-1-Statistics.db test1-playlists.playlists_artist_idx-ic-1-Summary.db test1-playlists.playlists_artist_idx-ic-1-TOC.txt = delete artist from playlists where id = 62c36092-82a1-3a00-93d1-46196ee77204 and song_order = 1; select * from playlists; select * from playlists where artist = 'ZZ Top'; = $ ./nodetool flush test1 $ ls /var/lib/cassandra/data/test1/playlists test1-playlists-ic-1-CompressionInfo.db test1-playlists-ic-1-Data.db test1-playlists-ic-1-Filter.db test1-playlists-ic-1-Index.db test1-playlists-ic-1-Statistics.db test1-playlists-ic-1-Summary.db 
test1-playlists-ic-1-TOC.txt test1-playlists-ic-2-CompressionInfo.db test1-playlists-ic-2-Data.db test1-playlists-ic-2-Filter.db test1-playlists-ic-2-Index.db test1-playlists-ic-2-Statistics.db test1-playlists-ic-2-Summary.db test1-playlists-ic-2-TOC.txt test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db
[2/3] git commit: add ICompressedFile
add ICompressedFile Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/97bc9c7e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/97bc9c7e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/97bc9c7e Branch: refs/heads/trunk Commit: 97bc9c7e22c168b198e7d05f841f550105553f89 Parents: ab8a28e Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 15:16:15 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 15:16:15 2013 -0500 -- .../cassandra/io/util/ICompressedFile.java | 25 1 file changed, 25 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/97bc9c7e/src/java/org/apache/cassandra/io/util/ICompressedFile.java -- diff --git a/src/java/org/apache/cassandra/io/util/ICompressedFile.java b/src/java/org/apache/cassandra/io/util/ICompressedFile.java new file mode 100644 index 000..3ca7718 --- /dev/null +++ b/src/java/org/apache/cassandra/io/util/ICompressedFile.java @@ -0,0 +1,25 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * License); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an AS IS BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package org.apache.cassandra.io.util;
+
+import org.apache.cassandra.io.compress.CompressionMetadata;
+
+public interface ICompressedFile
+{
+    public CompressionMetadata getMetadata();
+}
[3/3] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ef29c82e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ef29c82e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ef29c82e Branch: refs/heads/trunk Commit: ef29c82e8026b0eb70121559ac138a5bc86fd53b Parents: 1d6bed3 97bc9c7 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 15:16:23 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 15:16:23 2013 -0500 -- .../cassandra/io/util/ICompressedFile.java | 25 1 file changed, 25 insertions(+) --
[1/3] git commit: add ICompressedFile
Updated Branches: refs/heads/cassandra-1.2 ab8a28e36 - 97bc9c7e2 refs/heads/trunk 1d6bed3ba - ef29c82e8 add ICompressedFile Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/97bc9c7e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/97bc9c7e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/97bc9c7e Branch: refs/heads/cassandra-1.2 Commit: 97bc9c7e22c168b198e7d05f841f550105553f89 Parents: ab8a28e Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 15:16:15 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 15:16:15 2013 -0500 -- .../cassandra/io/util/ICompressedFile.java | 25 1 file changed, 25 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/97bc9c7e/src/java/org/apache/cassandra/io/util/ICompressedFile.java -- diff --git a/src/java/org/apache/cassandra/io/util/ICompressedFile.java b/src/java/org/apache/cassandra/io/util/ICompressedFile.java new file mode 100644 index 000..3ca7718 --- /dev/null +++ b/src/java/org/apache/cassandra/io/util/ICompressedFile.java @@ -0,0 +1,25 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * License); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an AS IS BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package org.apache.cassandra.io.util;
+
+import org.apache.cassandra.io.compress.CompressionMetadata;
+
+public interface ICompressedFile
+{
+    public CompressionMetadata getMetadata();
+}
[jira] [Updated] (CASSANDRA-5831) Running sstableupgrade on C* 1.0 data dir, before starting C* 1.2 for the first time breaks stuff
[ https://issues.apache.org/jira/browse/CASSANDRA-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-5831: -- Component/s: Tools Priority: Minor (was: Major) Fix Version/s: 1.2.9 Assignee: Tyler Hobbs I think all we need to do here is skip upgradesstables when the ks/cf hierarchy doesn't already exist for the system tables. In particular, upgradesstables against a 1.1 install should be fine. Running sstableupgrade on C* 1.0 data dir, before starting C* 1.2 for the first time breaks stuff - Key: CASSANDRA-5831 URL: https://issues.apache.org/jira/browse/CASSANDRA-5831 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Jeremiah Jordan Assignee: Tyler Hobbs Priority: Minor Fix For: 1.2.9 If you upgrade from C* 1.0.X to 1.2.X and run the offline sstableupgrade to migrate the sstables before starting 1.2.X for the first time, it corrupts the system folder (the migration is not done correctly), and C* 1.2 then cannot start. sstableupgrade should either refuse to run against a C* 1.0 data folder, or migrate the system tables correctly.
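Jonathan's suggested guard can be sketched in a few lines. This is a hypothetical illustration, not the actual patch: the class and method names are assumptions, and the real check would live in the sstableupgrade tooling. The idea is simply "only run the offline upgrade if the 1.1+ per-column-family directory layout already exists under the system keyspace":

```java
import java.io.File;
import java.nio.file.Files;

// Hypothetical guard for offline sstableupgrade (names are illustrative).
// Cassandra 1.1+ lays data out as data/<keyspace>/<columnfamily>/; a 1.0
// data directory has no per-cf subdirectories under "system", so the
// upgrade should be skipped (or refused) in that case.
public class UpgradeGuard
{
    /** Returns true if data/system/<cf> directories already exist. */
    public static boolean systemHierarchyExists(File dataRoot)
    {
        File system = new File(dataRoot, "system");
        if (!system.isDirectory())
            return false;
        File[] cfDirs = system.listFiles(File::isDirectory);
        return cfDirs != null && cfDirs.length > 0;
    }
}
```

With this in place, upgradesstables against a 1.1 install (which has the hierarchy) proceeds, while a 1.0 layout is left for Cassandra 1.2's own startup migration.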
[jira] [Updated] (CASSANDRA-5670) running compact on an index did not compact two index files into one
[ https://issues.apache.org/jira/browse/CASSANDRA-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Brown updated CASSANDRA-5670: --- Reviewer: jbellis (was: brandon.williams) running compact on an index did not compact two index files into one Key: CASSANDRA-5670 URL: https://issues.apache.org/jira/browse/CASSANDRA-5670 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.2 Reporter: Cathy Daw Assignee: Jason Brown Priority: Minor Labels: nodetool, secondary_index Fix For: 2.0 rc1, 1.2.9 Attachments: 5670-v1.diff With a data directory containing secondary index files ending in -1 and -2, I expected that when I ran compact against the index, they would compact down to a set of -3 files. This column family uses SizeTieredCompactionStrategy. Using our standard CQL example, the compact command used was: $ ./nodetool compact test1 test1-playlists.playlists_artist_idx Please note: reproducing this test on 1.1.12 (using a single primary key), you will see that running compact on the keyspace also does not compact the index file. There is no option to compact the index, so I could not compare that. 
{noformat}
CREATE KEYSPACE test1 WITH replication = {'class':'SimpleStrategy', 'replication_factor':1};
use test1;
CREATE TABLE playlists (
  id uuid,
  song_order int,
  song_id uuid,
  title text,
  album text,
  artist text,
  PRIMARY KEY (id, song_order)
);
INSERT INTO playlists (id, song_order, song_id, title, artist, album)
  VALUES (62c36092-82a1-3a00-93d1-46196ee77204, 1, a3e64f8f-bd44-4f28-b8d9-6938726e34d4, 'La Grange', 'ZZ Top', 'Tres Hombres');
select * from playlists;

$ ./nodetool flush test1
$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db test1-playlists-ic-1-Data.db test1-playlists-ic-1-Filter.db test1-playlists-ic-1-Index.db test1-playlists-ic-1-Statistics.db test1-playlists-ic-1-Summary.db test1-playlists-ic-1-TOC.txt

CREATE INDEX ON playlists(artist);
select * from playlists;
select * from playlists where artist = 'ZZ Top';

$ ./nodetool flush test1
$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db test1-playlists-ic-1-Data.db test1-playlists-ic-1-Filter.db test1-playlists-ic-1-Index.db test1-playlists-ic-1-Statistics.db test1-playlists-ic-1-Summary.db test1-playlists-ic-1-TOC.txt test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db test1-playlists.playlists_artist_idx-ic-1-Data.db test1-playlists.playlists_artist_idx-ic-1-Filter.db test1-playlists.playlists_artist_idx-ic-1-Index.db test1-playlists.playlists_artist_idx-ic-1-Statistics.db test1-playlists.playlists_artist_idx-ic-1-Summary.db test1-playlists.playlists_artist_idx-ic-1-TOC.txt

delete artist from playlists where id = 62c36092-82a1-3a00-93d1-46196ee77204 and song_order = 1;
select * from playlists;
select * from playlists where artist = 'ZZ Top';

$ ./nodetool flush test1
$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db test1-playlists-ic-1-Data.db test1-playlists-ic-1-Filter.db test1-playlists-ic-1-Index.db test1-playlists-ic-1-Statistics.db test1-playlists-ic-1-Summary.db test1-playlists-ic-1-TOC.txt test1-playlists-ic-2-CompressionInfo.db test1-playlists-ic-2-Data.db test1-playlists-ic-2-Filter.db test1-playlists-ic-2-Index.db test1-playlists-ic-2-Statistics.db test1-playlists-ic-2-Summary.db test1-playlists-ic-2-TOC.txt test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db test1-playlists.playlists_artist_idx-ic-1-Data.db
git commit: Allow compacting 2Is via nodetool patch by jasobrown; reviewed by jbellis for CASSANDRA-5670
Updated Branches: refs/heads/cassandra-1.2 97bc9c7e2 - 94d7cb411 Allow compacting 2Is via nodetool patch by jasobrown; reviewed by jbellis for CASSANDRA-5670 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/94d7cb41 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/94d7cb41 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/94d7cb41 Branch: refs/heads/cassandra-1.2 Commit: 94d7cb411b21c1a8a4d7c3d375dfd09f4dc3f885 Parents: 97bc9c7 Author: Jason Brown jasedbr...@gmail.com Authored: Tue Jul 30 12:52:03 2013 -0700 Committer: Jason Brown jasedbr...@gmail.com Committed: Tue Jul 30 13:35:15 2013 -0700 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/service/StorageService.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/94d7cb41/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 8578855..1497299 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -3,6 +3,7 @@ * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter (CASSANDRA-5824) * update default LCS sstable size to 160MB (CASSANDRA-5727) + * Allow compacting 2Is via nodetool (CASSANDRA-5670) 1.2.8 http://git-wip-us.apache.org/repos/asf/cassandra/blob/94d7cb41/src/java/org/apache/cassandra/service/StorageService.java -- diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java index 26c4d1c..54f8abd 100644 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@ -2165,7 +2165,7 @@ public class StorageService extends NotificationBroadcasterSupport implements IE public void forceTableCompaction(String tableName, String... 
columnFamilies) throws IOException, ExecutionException, InterruptedException { -for (ColumnFamilyStore cfStore : getValidColumnFamilies(false, false, tableName, columnFamilies)) +for (ColumnFamilyStore cfStore : getValidColumnFamilies(true, false, tableName, columnFamilies)) { cfStore.forceMajorCompaction(); }
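For context on the one-character fix above: the first boolean passed to getValidColumnFamilies controls whether secondary-index stores are accepted as compaction targets, which is why flipping false to true lets nodetool compact a 2I. A standalone sketch of that filtering idea follows; the names and the dot-based index test are illustrative only, since the real method lives in StorageService and works on ColumnFamilyStore objects, not strings:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the allowIndexes flag in getValidColumnFamilies.
// Secondary-index stores are named "<cf>.<index>" (e.g.
// "playlists.playlists_artist_idx"); pre-patch, they were filtered out.
public class CfFilter
{
    public static List<String> validColumnFamilies(boolean allowIndexes, String... cfNames)
    {
        List<String> valid = new ArrayList<>();
        for (String name : cfNames)
        {
            boolean isIndex = name.contains(".");
            if (isIndex && !allowIndexes)
                continue;  // pre-patch behavior: index stores silently skipped
            valid.add(name);
        }
        return valid;
    }
}
```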
[jira] [Assigned] (CASSANDRA-5084) Cassandra should expose connected client state via JMX
[ https://issues.apache.org/jira/browse/CASSANDRA-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-5084: - Assignee: Suresh Cassandra should expose connected client state via JMX -- Key: CASSANDRA-5084 URL: https://issues.apache.org/jira/browse/CASSANDRA-5084 Project: Cassandra Issue Type: Improvement Reporter: Robert Coli Assignee: Suresh Priority: Minor Labels: lhf Attachments: 5084-trunk.patch There is currently no good way to determine or estimate how many clients are connected to a cassandra node without using netstat or (if using sync thrift server) counting threads. There is also no way to understand what state any given connection is in. People regularly come into #cassandra/cassandra-user@ and ask how to get the equivalent of a MySQL SHOW FULL PROCESSLIST. While I understand that feature parity with SHOW FULL PROCESSLIST/information_schema.processlist is unlikely, even a few basic metrics like number of connected clients or number of active clients would greatly help with this operational information need.
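A minimal sketch of the kind of JMX surface the ticket asks for, using only the JDK's standard MBean machinery. Everything here is an assumption for illustration: the ObjectName, the attribute name, and the counter class are hypothetical, and the attached 5084-trunk.patch may expose different names entirely:

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicInteger;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Hedged sketch: a standard MBean exposing a connected-client count.
// The ObjectName below is hypothetical, not one Cassandra actually uses.
public class ClientState
{
    public interface ConnectedClientsMBean
    {
        int getConnectedClients();
    }

    public static class ConnectedClients implements ConnectedClientsMBean
    {
        private final AtomicInteger connected = new AtomicInteger();
        public void onConnect()    { connected.incrementAndGet(); }
        public void onDisconnect() { connected.decrementAndGet(); }
        public int getConnectedClients() { return connected.get(); }
    }

    // Register the counter so jconsole/nodetool-style tooling can read it.
    public static ConnectedClients register() throws Exception
    {
        ConnectedClients bean = new ConnectedClients();
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        mbs.registerMBean(bean, new ObjectName("org.apache.cassandra.net:type=ConnectedClients"));
        return bean;
    }
}
```

Even this single gauge would answer the most common "how many clients are on this node?" question without netstat.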
[jira] [Updated] (CASSANDRA-5793) OPP seems completely unsupported
[ https://issues.apache.org/jira/browse/CASSANDRA-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-5793: -- Priority: Minor (was: Major) OPP seems completely unsupported Key: CASSANDRA-5793 URL: https://issues.apache.org/jira/browse/CASSANDRA-5793 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.5 Environment: Cassandra on Ubuntu Reporter: Vara Kumar Priority: Minor Fix For: 1.2.9 We were using version 0.7.6 and upgraded to 1.2.5 today. We were using OPP (OrderPreservingPartitioner). OPP throws an error when any node joins the cluster, so the cluster cannot be brought up. After digging a little deeper, we realized that the peers column family is defined with a key of type inet; many other column families in the system keyspace appear to have the same issue. Exception trace:
java.lang.RuntimeException: The provided key was not UTF8 encoded.
at org.apache.cassandra.dht.OrderPreservingPartitioner.getToken(OrderPreservingPartitioner.java:172)
at org.apache.cassandra.dht.OrderPreservingPartitioner.decorateKey(OrderPreservingPartitioner.java:44)
at org.apache.cassandra.db.Table.apply(Table.java:379)
at org.apache.cassandra.db.Table.apply(Table.java:353)
at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:258)
at org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:117)
at org.apache.cassandra.cql3.QueryProcessor.processInternal(QueryProcessor.java:172)
at org.apache.cassandra.db.SystemTable.updatePeerInfo(SystemTable.java:258)
at org.apache.cassandra.service.StorageService.onChange(StorageService.java:1231)
at org.apache.cassandra.service.StorageService.onJoin(StorageService.java:1948)
at org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:823)
at org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:901)
at org.apache.cassandra.gms.GossipDigestAck2VerbHandler.doVerb(GossipDigestAck2VerbHandler.java:50)
Possibilities:
- Changing the partitioner to BOP (or anything else) fails while loading schema_keyspaces, so that does not look like an option.
- getToken in OPP could return a hex value when it fails to decode the bytes as UTF-8, instead of throwing; with this change the system tables appear to work fine under OPP.
- Or remove OPP completely from the code base and configuration files, and state clearly in the upgrade instructions that OPP is no longer supported.
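The second possibility above can be sketched as follows. This is a standalone illustration, not the committed patch (which uses Cassandra's ByteBufferUtil.bytesToHex inside OrderPreservingPartitioner.getToken); it just shows the decode-or-hex-fallback shape:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

// Sketch of OPP's getToken fallback: try to decode the key as UTF-8;
// if the bytes are not valid UTF-8, return a hex rendering instead of
// throwing, so non-String system-table keys still get a usable token.
public class OppToken
{
    public static String tokenFor(ByteBuffer key)
    {
        try
        {
            // newDecoder() reports malformed input instead of replacing it
            return StandardCharsets.UTF_8.newDecoder().decode(key.duplicate()).toString();
        }
        catch (CharacterCodingException e)
        {
            StringBuilder hex = new StringBuilder();
            ByteBuffer dup = key.duplicate();
            while (dup.hasRemaining())
                hex.append(String.format("%02x", dup.get() & 0xff));
            return hex.toString();
        }
    }
}
```

Note the caveat this implies: hex-encoded tokens sort by byte value, not by the original string order, which is acceptable for internal system tables but worth knowing for user data.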
[2/2] git commit: merge from 1.2
merge from 1.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb981e62 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb981e62 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb981e62 Branch: refs/heads/trunk Commit: cb981e6242c7397619797dda5dea4db6396f47a9 Parents: ef29c82 2ccbe3c Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 15:41:01 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 15:41:01 2013 -0500 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb981e62/CHANGES.txt -- diff --cc CHANGES.txt index a9cb47c,c20c0fe..30b70a7 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -11,6 -3,10 +11,7 @@@ Merged from 1.2 * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter (CASSANDRA-5824) * update default LCS sstable size to 160MB (CASSANDRA-5727) + * Hex-encode non-String keys in OPP (CASSANDRA-5793) - - -1.2.8 * Fix reading DeletionTime from 1.1-format sstables (CASSANDRA-5814) * cqlsh: add collections support to COPY (CASSANDRA-5698) * retry important messages for any IOException (CASSANDRA-5804) http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb981e62/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java --
[1/2] git commit: Hex-encode non-String keys in OPP patch by Vara Kumar and jbellis for CASSANDRA-5793
Updated Branches: refs/heads/trunk ef29c82e8 -> cb981e624 Hex-encode non-String keys in OPP patch by Vara Kumar and jbellis for CASSANDRA-5793 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2ccbe3c6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2ccbe3c6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2ccbe3c6 Branch: refs/heads/trunk Commit: 2ccbe3c6f547511e79454b79cbef682ef8a6973a Parents: 97bc9c7 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 15:38:57 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 15:38:57 2013 -0500 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2ccbe3c6/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 8578855..c20c0fe 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -3,6 +3,7 @@ * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter (CASSANDRA-5824) * update default LCS sstable size to 160MB (CASSANDRA-5727) + * Hex-encode non-String keys in OPP (CASSANDRA-5793) 1.2.8 http://git-wip-us.apache.org/repos/asf/cassandra/blob/2ccbe3c6/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java -- diff --git a/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java b/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java index 9445ab0..3384713 100644 --- a/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java +++ b/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java
@@ -169,7 +169,7 @@ public class OrderPreservingPartitioner extends AbstractPartitioner<StringToken>
         }
         catch (CharacterCodingException e)
         {
-            throw new RuntimeException("The provided key was not UTF8 encoded.", e);
+            skey = ByteBufferUtil.bytesToHex(key);
         }
         return new StringToken(skey);
     }
[jira] [Resolved] (CASSANDRA-5793) OPP seems completely unsupported
[ https://issues.apache.org/jira/browse/CASSANDRA-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-5793. --- Resolution: Fixed Reviewer: jbellis Made the change to OPP in 2ccbe3c6f547511e79454b79cbef682ef8a6973a. cassandra.yaml has noted that OPP is deprecated in favor of BOP for... years. OPP seems completely unsupported Key: CASSANDRA-5793 URL: https://issues.apache.org/jira/browse/CASSANDRA-5793 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.5 Environment: Cassandra on Ubuntu Reporter: Vara Kumar Priority: Minor Fix For: 1.2.9 We were using version 0.7.6 and upgraded to 1.2.5 today. We were using OPP (OrderPreservingPartitioner). OPP throws an error when any node joins the cluster, so the cluster cannot be brought up. After digging a little deeper, we realized that the peers column family is defined with a key of type inet; many other column families in the system keyspace appear to have the same issue. Exception trace: java.lang.RuntimeException: The provided key was not UTF8 encoded.
at org.apache.cassandra.dht.OrderPreservingPartitioner.getToken(OrderPreservingPartitioner.java:172)
at org.apache.cassandra.dht.OrderPreservingPartitioner.decorateKey(OrderPreservingPartitioner.java:44)
at org.apache.cassandra.db.Table.apply(Table.java:379)
at org.apache.cassandra.db.Table.apply(Table.java:353)
at org.apache.cassandra.db.RowMutation.apply(RowMutation.java:258)
at org.apache.cassandra.cql3.statements.ModificationStatement.executeInternal(ModificationStatement.java:117)
at org.apache.cassandra.cql3.QueryProcessor.processInternal(QueryProcessor.java:172)
at org.apache.cassandra.db.SystemTable.updatePeerInfo(SystemTable.java:258)
at org.apache.cassandra.service.StorageService.onChange(StorageService.java:1231)
at org.apache.cassandra.service.StorageService.onJoin(StorageService.java:1948)
at org.apache.cassandra.gms.Gossiper.handleMajorStateChange(Gossiper.java:823)
at org.apache.cassandra.gms.Gossiper.applyStateLocally(Gossiper.java:901)
at org.apache.cassandra.gms.GossipDigestAck2VerbHandler.doVerb(GossipDigestAck2VerbHandler.java:50)
Possibilities:
- Changing the partitioner to BOP (or anything else) fails while loading schema_keyspaces, so that does not look like an option.
- getToken in OPP could return a hex value when it fails to decode the bytes as UTF-8, instead of throwing; with this change the system tables appear to work fine under OPP.
- Or remove OPP completely from the code base and configuration files, and state clearly in the upgrade instructions that OPP is no longer supported.
[2/4] git commit: Hex-encode non-String keys in OPP patch by Vara Kumar and jbellis for CASSANDRA-5793
Hex-encode non-String keys in OPP patch by Vara Kumar and jbellis for CASSANDRA-5793 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d735cfdc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d735cfdc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d735cfdc Branch: refs/heads/trunk Commit: d735cfdcc9aab8c196035672d69dca0183ee45d3 Parents: 94d7cb4 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 15:38:57 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 15:41:49 2013 -0500 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/d735cfdc/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 1497299..a809bc6 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -4,6 +4,7 @@ (CASSANDRA-5824) * update default LCS sstable size to 160MB (CASSANDRA-5727) * Allow compacting 2Is via nodetool (CASSANDRA-5670) + * Hex-encode non-String keys in OPP (CASSANDRA-5793) 1.2.8 http://git-wip-us.apache.org/repos/asf/cassandra/blob/d735cfdc/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java -- diff --git a/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java b/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java index 9445ab0..3384713 100644 --- a/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java +++ b/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java
@@ -169,7 +169,7 @@ public class OrderPreservingPartitioner extends AbstractPartitioner<StringToken>
         }
         catch (CharacterCodingException e)
         {
-            throw new RuntimeException("The provided key was not UTF8 encoded.", e);
+            skey = ByteBufferUtil.bytesToHex(key);
         }
         return new StringToken(skey);
     }
[3/4] git commit: Hex-encode non-String keys in OPP patch by Vara Kumar and jbellis for CASSANDRA-5793
Hex-encode non-String keys in OPP patch by Vara Kumar and jbellis for CASSANDRA-5793 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d735cfdc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d735cfdc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d735cfdc Branch: refs/heads/cassandra-1.2 Commit: d735cfdcc9aab8c196035672d69dca0183ee45d3 Parents: 94d7cb4 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 15:38:57 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 15:41:49 2013 -0500 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/d735cfdc/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 1497299..a809bc6 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -4,6 +4,7 @@ (CASSANDRA-5824) * update default LCS sstable size to 160MB (CASSANDRA-5727) * Allow compacting 2Is via nodetool (CASSANDRA-5670) + * Hex-encode non-String keys in OPP (CASSANDRA-5793) 1.2.8 http://git-wip-us.apache.org/repos/asf/cassandra/blob/d735cfdc/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java -- diff --git a/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java b/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java index 9445ab0..3384713 100644 --- a/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java +++ b/src/java/org/apache/cassandra/dht/OrderPreservingPartitioner.java
@@ -169,7 +169,7 @@ public class OrderPreservingPartitioner extends AbstractPartitioner<StringToken>
         }
         catch (CharacterCodingException e)
         {
-            throw new RuntimeException("The provided key was not UTF8 encoded.", e);
+            skey = ByteBufferUtil.bytesToHex(key);
         }
         return new StringToken(skey);
     }
[1/4] git commit: Allow compacting 2Is via nodetool patch by jasobrown; reviewed by jbellis for CASSANDRA-5670
Updated Branches: refs/heads/cassandra-1.2 94d7cb411 - d735cfdcc refs/heads/trunk cb981e624 - 181f3736c Allow compacting 2Is via nodetool patch by jasobrown; reviewed by jbellis for CASSANDRA-5670 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/94d7cb41 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/94d7cb41 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/94d7cb41 Branch: refs/heads/trunk Commit: 94d7cb411b21c1a8a4d7c3d375dfd09f4dc3f885 Parents: 97bc9c7 Author: Jason Brown jasedbr...@gmail.com Authored: Tue Jul 30 12:52:03 2013 -0700 Committer: Jason Brown jasedbr...@gmail.com Committed: Tue Jul 30 13:35:15 2013 -0700 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/service/StorageService.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/94d7cb41/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 8578855..1497299 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -3,6 +3,7 @@ * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter (CASSANDRA-5824) * update default LCS sstable size to 160MB (CASSANDRA-5727) + * Allow compacting 2Is via nodetool (CASSANDRA-5670) 1.2.8 http://git-wip-us.apache.org/repos/asf/cassandra/blob/94d7cb41/src/java/org/apache/cassandra/service/StorageService.java -- diff --git a/src/java/org/apache/cassandra/service/StorageService.java b/src/java/org/apache/cassandra/service/StorageService.java index 26c4d1c..54f8abd 100644 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@ -2165,7 +2165,7 @@ public class StorageService extends NotificationBroadcasterSupport implements IE public void forceTableCompaction(String tableName, String... 
columnFamilies) throws IOException, ExecutionException, InterruptedException { -for (ColumnFamilyStore cfStore : getValidColumnFamilies(false, false, tableName, columnFamilies)) +for (ColumnFamilyStore cfStore : getValidColumnFamilies(true, false, tableName, columnFamilies)) { cfStore.forceMajorCompaction(); }
[4/4] git commit: merge from 1.2
merge from 1.2 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/181f3736 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/181f3736 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/181f3736 Branch: refs/heads/trunk Commit: 181f3736c4d04448be9336107454b6d569daf845 Parents: cb981e6 d735cfd Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 15:42:55 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 15:42:55 2013 -0500 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/service/StorageService.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/181f3736/CHANGES.txt -- diff --cc CHANGES.txt index 30b70a7,a809bc6..ce9490c --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -11,7 -3,11 +11,8 @@@ Merged from 1.2 * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter (CASSANDRA-5824) * update default LCS sstable size to 160MB (CASSANDRA-5727) + * Allow compacting 2Is via nodetool (CASSANDRA-5670) * Hex-encode non-String keys in OPP (CASSANDRA-5793) - - -1.2.8 * Fix reading DeletionTime from 1.1-format sstables (CASSANDRA-5814) * cqlsh: add collections support to COPY (CASSANDRA-5698) * retry important messages for any IOException (CASSANDRA-5804) http://git-wip-us.apache.org/repos/asf/cassandra/blob/181f3736/src/java/org/apache/cassandra/service/StorageService.java -- diff --cc src/java/org/apache/cassandra/service/StorageService.java index d0581fb,54f8abd..58a0634 --- a/src/java/org/apache/cassandra/service/StorageService.java +++ b/src/java/org/apache/cassandra/service/StorageService.java @@@ -2091,9 -2163,9 +2091,9 @@@ public class StorageService extends Not cfStore.sstablesRewrite(excludeCurrentVersion); } -public void forceTableCompaction(String tableName, String... 
columnFamilies) throws IOException, ExecutionException, InterruptedException +public void forceKeyspaceCompaction(String keyspaceName, String... columnFamilies) throws IOException, ExecutionException, InterruptedException { - for (ColumnFamilyStore cfStore : getValidColumnFamilies(false, false, keyspaceName, columnFamilies)) -for (ColumnFamilyStore cfStore : getValidColumnFamilies(true, false, tableName, columnFamilies)) ++for (ColumnFamilyStore cfStore : getValidColumnFamilies(true, false, keyspaceName, columnFamilies)) { cfStore.forceMajorCompaction(); }
[jira] [Commented] (CASSANDRA-5715) CAS on 'primary key only' table
[ https://issues.apache.org/jira/browse/CASSANDRA-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13724409#comment-13724409 ] Blair Zajac commented on CASSANDRA-5715: This ticket is marked as resolved. Open a new one to track DELETE? CAS on 'primary key only' table --- Key: CASSANDRA-5715 URL: https://issues.apache.org/jira/browse/CASSANDRA-5715 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Sylvain Lebresne Priority: Minor Fix For: 2.0 beta 2 Attachments: 0001-Conditions-on-INSERT.txt, 0002-Support-updating-the-PK-only.txt, 5715-v2.txt Given a table with only a primary key, like {noformat} CREATE TABLE test (k int PRIMARY KEY) {noformat} there is currently no way to CAS a row in that table into existence because:
# INSERT doesn't currently support IF
# UPDATE has no way to update such a table
So we should probably allow IF conditions on INSERT statements. In addition (or alternatively), we could work on allowing UPDATE to update such a table. One motivation for that could be to make UPDATE always be more general than INSERT. That is, currently there are a bunch of operations that INSERT cannot do (counter increments, collection appends), but the primary-key-only table case is, afaik, the only case where you *need* to use INSERT. However, because CQL forces segregation of the PK value to the WHERE clause and not the SET one, the only syntax that I can see working would be: {noformat} UPDATE WHERE k=0; {noformat} which maybe is too ugly to allow?
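For readers skimming the thread, the semantics being requested are an atomic "insert if the row does not already exist". A toy sketch of that operation in plain Java (not Cassandra internals; a concurrent map keyed by the partition key stands in for the primary-key-only table):

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: "CAS a row into existence" modeled as putIfAbsent.
// Boolean.TRUE stands in for the empty row of a primary-key-only table.
public class CasInsert
{
    private final ConcurrentHashMap<Integer, Boolean> rows = new ConcurrentHashMap<>();

    /** Returns true if the row was created, false if it already existed. */
    public boolean insertIfNotExists(int k)
    {
        return rows.putIfAbsent(k, Boolean.TRUE) == null;
    }
}
```

An IF condition on INSERT would give CQL exactly this shape: the statement succeeds only when no prior row exists, atomically.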
git commit: changes.txt
Updated Branches: refs/heads/trunk 181f3736c - 968f6471a changes.txt Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/968f6471 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/968f6471 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/968f6471 Branch: refs/heads/trunk Commit: 968f6471abb69ec3e2d46f66a1a21eb41602872b Parents: 181f373 Author: Jason Brown jasedbr...@gmail.com Authored: Tue Jul 30 13:51:14 2013 -0700 Committer: Jason Brown jasedbr...@gmail.com Committed: Tue Jul 30 13:51:14 2013 -0700 -- CHANGES.txt | 4 1 file changed, 4 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/968f6471/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index ce9490c..15223e2 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -13,6 +13,10 @@ Merged from 1.2: * update default LCS sstable size to 160MB (CASSANDRA-5727) * Allow compacting 2Is via nodetool (CASSANDRA-5670) * Hex-encode non-String keys in OPP (CASSANDRA-5793) + + +1.2.8 + Allow compacting 2Is via nodetool * Fix reading DeletionTime from 1.1-format sstables (CASSANDRA-5814) * cqlsh: add collections support to COPY (CASSANDRA-5698) * retry important messages for any IOException (CASSANDRA-5804)
[jira] [Commented] (CASSANDRA-5670) running compact on an index did not compact two index files into one
[ https://issues.apache.org/jira/browse/CASSANDRA-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13724410#comment-13724410 ] Jason Brown commented on CASSANDRA-5670: committed to 1.2 and trunk running compact on an index did not compact two index files into one Key: CASSANDRA-5670 URL: https://issues.apache.org/jira/browse/CASSANDRA-5670 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 1.2.2 Reporter: Cathy Daw Assignee: Jason Brown Priority: Minor Labels: nodetool, secondary_index Fix For: 2.0 rc1, 1.2.9 Attachments: 5670-v1.diff With a data directory containing secondary index files ending in -1 and -2, I expected that when I ran compact against the index, they would compact down to a set of -3 files. This column family uses SizeTieredCompactionStrategy. Using our standard CQL example, the compact command used was: $ ./nodetool compact test1 test1-playlists.playlists_artist_idx Please note: reproducing this test on 1.1.12 (using a single primary key), you will see that running compact on the keyspace also does not compact the index file. There is no option to compact the index, so I could not compare that. 
{noformat}
CREATE KEYSPACE test1 WITH replication = {'class':'SimpleStrategy', 'replication_factor':1};
use test1;
CREATE TABLE playlists (
  id uuid,
  song_order int,
  song_id uuid,
  title text,
  album text,
  artist text,
  PRIMARY KEY (id, song_order)
);
INSERT INTO playlists (id, song_order, song_id, title, artist, album)
  VALUES (62c36092-82a1-3a00-93d1-46196ee77204, 1, a3e64f8f-bd44-4f28-b8d9-6938726e34d4, 'La Grange', 'ZZ Top', 'Tres Hombres');
select * from playlists;

$ ./nodetool flush test1
$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db test1-playlists-ic-1-Data.db test1-playlists-ic-1-Filter.db test1-playlists-ic-1-Index.db test1-playlists-ic-1-Statistics.db test1-playlists-ic-1-Summary.db test1-playlists-ic-1-TOC.txt

CREATE INDEX ON playlists(artist);
select * from playlists;
select * from playlists where artist = 'ZZ Top';

$ ./nodetool flush test1
$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db test1-playlists-ic-1-Data.db test1-playlists-ic-1-Filter.db test1-playlists-ic-1-Index.db test1-playlists-ic-1-Statistics.db test1-playlists-ic-1-Summary.db test1-playlists-ic-1-TOC.txt test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db test1-playlists.playlists_artist_idx-ic-1-Data.db test1-playlists.playlists_artist_idx-ic-1-Filter.db test1-playlists.playlists_artist_idx-ic-1-Index.db test1-playlists.playlists_artist_idx-ic-1-Statistics.db test1-playlists.playlists_artist_idx-ic-1-Summary.db test1-playlists.playlists_artist_idx-ic-1-TOC.txt

delete artist from playlists where id = 62c36092-82a1-3a00-93d1-46196ee77204 and song_order = 1;
select * from playlists;
select * from playlists where artist = 'ZZ Top';

$ ./nodetool flush test1
$ ls /var/lib/cassandra/data/test1/playlists
test1-playlists-ic-1-CompressionInfo.db test1-playlists-ic-1-Data.db test1-playlists-ic-1-Filter.db test1-playlists-ic-1-Index.db test1-playlists-ic-1-Statistics.db test1-playlists-ic-1-Summary.db test1-playlists-ic-1-TOC.txt test1-playlists-ic-2-CompressionInfo.db test1-playlists-ic-2-Data.db test1-playlists-ic-2-Filter.db test1-playlists-ic-2-Index.db test1-playlists-ic-2-Statistics.db test1-playlists-ic-2-Summary.db test1-playlists-ic-2-TOC.txt test1-playlists.playlists_artist_idx-ic-1-CompressionInfo.db
[jira] [Commented] (CASSANDRA-5715) CAS on 'primary key only' table
[ https://issues.apache.org/jira/browse/CASSANDRA-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724411#comment-13724411 ] Jonathan Ellis commented on CASSANDRA-5715: --- Go for it.

CAS on 'primary key only' table
---
Key: CASSANDRA-5715
URL: https://issues.apache.org/jira/browse/CASSANDRA-5715
Project: Cassandra
Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
Fix For: 2.0 beta 2
Attachments: 0001-Conditions-on-INSERT.txt, 0002-Support-updating-the-PK-only.txt, 5715-v2.txt

Given a table with only a primary key, like
{noformat}
CREATE TABLE test (k int PRIMARY KEY)
{noformat}
there is currently no way to CAS a row in that table into existence, because:
# INSERT doesn't currently support IF
# UPDATE has no way to update such a table
So we should probably allow IF conditions on INSERT statements. In addition (or alternatively), we could work on allowing UPDATE to update such a table. One motivation for that would be to make UPDATE strictly more general than INSERT. That is, currently there are a bunch of operations that INSERT cannot do (counter increments, collection appends), but the primary-key-only table case is, afaik, the only case where you *need* to use INSERT. However, because CQL forces segregation of the PK values into the WHERE clause and not the SET one, the only syntax that I can see working would be:
{noformat}
UPDATE WHERE k=0;
{noformat}
which maybe is too ugly to allow?

-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
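The feature being requested above is an atomic insert-if-absent. As a loose analogy only (this is not Cassandra code, and `insertIfNotExists` is a hypothetical helper), Java's `ConcurrentHashMap.putIfAbsent` captures the same contract a conditional INSERT would have: the write applies only when no entry exists yet, and the caller learns which outcome occurred.

```java
import java.util.concurrent.ConcurrentHashMap;

public class CasInsertSketch {
    // Hypothetical helper: returns true if the "row" (map entry) was created,
    // false if a row with that key already existed. putIfAbsent performs the
    // check-and-write atomically, mirroring the CAS semantics discussed above.
    static boolean insertIfNotExists(ConcurrentHashMap<Integer, Boolean> table, int key) {
        return table.putIfAbsent(key, Boolean.TRUE) == null;
    }

    public static void main(String[] args) {
        ConcurrentHashMap<Integer, Boolean> table = new ConcurrentHashMap<>();
        System.out.println(insertIfNotExists(table, 0)); // true: no prior row
        System.out.println(insertIfNotExists(table, 0)); // false: row already exists
    }
}
```

A server-side conditional INSERT would similarly report the applied/not-applied outcome back to the client.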
[jira] [Reopened] (CASSANDRA-5816) [PATCH] Debian packaging: also recommend chrony and ptpd in addition to ntp
[ https://issues.apache.org/jira/browse/CASSANDRA-5816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reopened CASSANDRA-5816: ---

[PATCH] Debian packaging: also recommend chrony and ptpd in addition to ntp
---
Key: CASSANDRA-5816
URL: https://issues.apache.org/jira/browse/CASSANDRA-5816
Project: Cassandra
Issue Type: Improvement
Components: Packaging
Affects Versions: 1.2.7
Reporter: Blair Zajac
Assignee: Blair Zajac
Priority: Minor

I'm switching my Ubuntu servers running Cassandra from ntp to chrony for the reasons cited here when Fedora made the switch to have chrony be the default NTP client: http://fedoraproject.org/wiki/Features/ChronyDefaultNTP
Currently, the Debian packaging recommends only ntp, so if chrony is installed it'll want to remove it and install ntp. I also added ptpd, the Precision Time Protocol daemon, which is another time-syncing server, for completeness.
Please apply this to the 1.2 branch so the next 1.2.x release can deploy with chrony. Below is the patch since it's a one-liner. Thanks, Blair

--- a/debian/control
+++ b/debian/control
@@ -12,7 +12,7 @@ Standards-Version: 3.8.3
 Package: cassandra
 Architecture: all
 Depends: openjdk-6-jre-headless (>= 6b11) | java6-runtime, jsvc (>= 1.0), libcommons-daemon-java (>= 1.0), adduser, libjna-java, python (>= 2.5), python-support (>= 0.90.0), ${misc:Depends}
-Recommends: ntp
+Recommends: chrony | ntp | ptpd
 Conflicts: apache-cassandra1
 Replaces: apache-cassandra1
 Description: distributed storage system for structured data
git commit: add time-daemon as an alternative to ntp in Debian Recommends patch by Blair Zajac and Paul Cannon for CASSANDRA-5816
Updated Branches:
  refs/heads/cassandra-1.2 d735cfdcc -> 1a4942583

add time-daemon as an alternative to ntp in Debian Recommends

patch by Blair Zajac and Paul Cannon for CASSANDRA-5816

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1a494258
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1a494258
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1a494258
Branch: refs/heads/cassandra-1.2
Commit: 1a494258353b638afa8dd27bb6a2e2b0f8c63bb1
Parents: d735cfd
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Tue Jul 30 15:57:45 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Tue Jul 30 15:57:45 2013 -0500

 debian/control | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a494258/debian/control

diff --git a/debian/control b/debian/control
index aa3db32..b3cc391 100644
--- a/debian/control
+++ b/debian/control
@@ -12,7 +12,7 @@ Standards-Version: 3.8.3
 Package: cassandra
 Architecture: all
 Depends: openjdk-6-jre-headless (>= 6b11) | java6-runtime, jsvc (>= 1.0), libcommons-daemon-java (>= 1.0), adduser, libjna-java, python (>= 2.5), python-support (>= 0.90.0), ${misc:Depends}
-Recommends: ntp
+Recommends: ntp | time-daemon
 Conflicts: apache-cassandra1
 Replaces: apache-cassandra1
 Description: distributed storage system for structured data
[1/2] git commit: add time-daemon as an alternative to ntp in Debian Recommends patch by Blair Zajac and Paul Cannon for CASSANDRA-5816
Updated Branches:
  refs/heads/trunk 968f6471a -> 8c795372c

add time-daemon as an alternative to ntp in Debian Recommends

patch by Blair Zajac and Paul Cannon for CASSANDRA-5816

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/1a494258
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/1a494258
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/1a494258
Branch: refs/heads/trunk
Commit: 1a494258353b638afa8dd27bb6a2e2b0f8c63bb1
Parents: d735cfd
Author: Jonathan Ellis <jbel...@apache.org>
Authored: Tue Jul 30 15:57:45 2013 -0500
Committer: Jonathan Ellis <jbel...@apache.org>
Committed: Tue Jul 30 15:57:45 2013 -0500

 debian/control | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/1a494258/debian/control

diff --git a/debian/control b/debian/control
index aa3db32..b3cc391 100644
--- a/debian/control
+++ b/debian/control
@@ -12,7 +12,7 @@ Standards-Version: 3.8.3
 Package: cassandra
 Architecture: all
 Depends: openjdk-6-jre-headless (>= 6b11) | java6-runtime, jsvc (>= 1.0), libcommons-daemon-java (>= 1.0), adduser, libjna-java, python (>= 2.5), python-support (>= 0.90.0), ${misc:Depends}
-Recommends: ntp
+Recommends: ntp | time-daemon
 Conflicts: apache-cassandra1
 Replaces: apache-cassandra1
 Description: distributed storage system for structured data
[2/2] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8c795372 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8c795372 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8c795372 Branch: refs/heads/trunk Commit: 8c795372c929b97b7c5a5fda5e4de8bcc0810415 Parents: 968f647 1a49425 Author: Jonathan Ellis jbel...@apache.org Authored: Tue Jul 30 16:04:00 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Tue Jul 30 16:04:00 2013 -0500 -- --
[jira] [Resolved] (CASSANDRA-5816) [PATCH] Debian packaging: also recommend chrony and ptpd in addition to ntp
[ https://issues.apache.org/jira/browse/CASSANDRA-5816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-5816. ---
Resolution: Fixed
Fix Version/s: 1.2.9, 2.0 rc1

Brandon committed to trunk yesterday; I added to 1.2 as well.

[PATCH] Debian packaging: also recommend chrony and ptpd in addition to ntp
---
Key: CASSANDRA-5816
URL: https://issues.apache.org/jira/browse/CASSANDRA-5816
Project: Cassandra
Issue Type: Improvement
Components: Packaging
Affects Versions: 1.2.7
Reporter: Blair Zajac
Assignee: Blair Zajac
Priority: Minor
Fix For: 2.0 rc1, 1.2.9

I'm switching my Ubuntu servers running Cassandra from ntp to chrony for the reasons cited here when Fedora made the switch to have chrony be the default NTP client: http://fedoraproject.org/wiki/Features/ChronyDefaultNTP
Currently, the Debian packaging recommends only ntp, so if chrony is installed it'll want to remove it and install ntp. I also added ptpd, the Precision Time Protocol daemon, which is another time-syncing server, for completeness.
Please apply this to the 1.2 branch so the next 1.2.x release can deploy with chrony. Below is the patch since it's a one-liner. Thanks, Blair

--- a/debian/control
+++ b/debian/control
@@ -12,7 +12,7 @@ Standards-Version: 3.8.3
 Package: cassandra
 Architecture: all
 Depends: openjdk-6-jre-headless (>= 6b11) | java6-runtime, jsvc (>= 1.0), libcommons-daemon-java (>= 1.0), adduser, libjna-java, python (>= 2.5), python-support (>= 0.90.0), ${misc:Depends}
-Recommends: ntp
+Recommends: chrony | ntp | ptpd
 Conflicts: apache-cassandra1
 Replaces: apache-cassandra1
 Description: distributed storage system for structured data
[jira] [Commented] (CASSANDRA-5816) [PATCH] Debian packaging: also recommend chrony and ptpd in addition to ntp
[ https://issues.apache.org/jira/browse/CASSANDRA-5816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724436#comment-13724436 ] Blair Zajac commented on CASSANDRA-5816: Thank you guys, I greatly appreciate it!

[PATCH] Debian packaging: also recommend chrony and ptpd in addition to ntp
---
Key: CASSANDRA-5816
URL: https://issues.apache.org/jira/browse/CASSANDRA-5816
Project: Cassandra
Issue Type: Improvement
Components: Packaging
Affects Versions: 1.2.7
Reporter: Blair Zajac
Assignee: Blair Zajac
Priority: Minor
Fix For: 2.0 rc1, 1.2.9

I'm switching my Ubuntu servers running Cassandra from ntp to chrony for the reasons cited here when Fedora made the switch to have chrony be the default NTP client: http://fedoraproject.org/wiki/Features/ChronyDefaultNTP
Currently, the Debian packaging recommends only ntp, so if chrony is installed it'll want to remove it and install ntp. I also added ptpd, the Precision Time Protocol daemon, which is another time-syncing server, for completeness.
Please apply this to the 1.2 branch so the next 1.2.x release can deploy with chrony. Below is the patch since it's a one-liner. Thanks, Blair

--- a/debian/control
+++ b/debian/control
@@ -12,7 +12,7 @@ Standards-Version: 3.8.3
 Package: cassandra
 Architecture: all
 Depends: openjdk-6-jre-headless (>= 6b11) | java6-runtime, jsvc (>= 1.0), libcommons-daemon-java (>= 1.0), adduser, libjna-java, python (>= 2.5), python-support (>= 0.90.0), ${misc:Depends}
-Recommends: ntp
+Recommends: chrony | ntp | ptpd
 Conflicts: apache-cassandra1
 Replaces: apache-cassandra1
 Description: distributed storage system for structured data
[jira] [Commented] (CASSANDRA-5715) CAS on 'primary key only' table
[ https://issues.apache.org/jira/browse/CASSANDRA-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724435#comment-13724435 ] Blair Zajac commented on CASSANDRA-5715: Thanks. https://issues.apache.org/jira/browse/CASSANDRA-5832

CAS on 'primary key only' table
---
Key: CASSANDRA-5715
URL: https://issues.apache.org/jira/browse/CASSANDRA-5715
Project: Cassandra
Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
Fix For: 2.0 beta 2
Attachments: 0001-Conditions-on-INSERT.txt, 0002-Support-updating-the-PK-only.txt, 5715-v2.txt

Given a table with only a primary key, like
{noformat}
CREATE TABLE test (k int PRIMARY KEY)
{noformat}
there is currently no way to CAS a row in that table into existence, because:
# INSERT doesn't currently support IF
# UPDATE has no way to update such a table
So we should probably allow IF conditions on INSERT statements. In addition (or alternatively), we could work on allowing UPDATE to update such a table. One motivation for that would be to make UPDATE strictly more general than INSERT. That is, currently there are a bunch of operations that INSERT cannot do (counter increments, collection appends), but the primary-key-only table case is, afaik, the only case where you *need* to use INSERT. However, because CQL forces segregation of the PK values into the WHERE clause and not the SET one, the only syntax that I can see working would be:
{noformat}
UPDATE WHERE k=0;
{noformat}
which maybe is too ugly to allow?
[jira] [Created] (CASSANDRA-5832) DELETE CAS on 'primary key only' table
Blair Zajac created CASSANDRA-5832: --
Summary: DELETE CAS on 'primary key only' table
Key: CASSANDRA-5832
URL: https://issues.apache.org/jira/browse/CASSANDRA-5832
Project: Cassandra
Issue Type: Improvement
Components: Core
Affects Versions: 2.0 beta 2
Reporter: Blair Zajac
Priority: Minor

Following up on the CAS on 'primary key only' table issue [1], which added support for atomically creating a row in a primary-key-only table, this ticket requests support for a CAS DELETE of a row from such a table. Currently these two different statements fail.

Trying DELETE FROM test1 WHERE k = 456 IF EXISTS using cassandra-dbapi2:
cql.apivalues.ProgrammingError: Bad Request: line 0:-1 no viable alternative at input 'EOF'

Trying DELETE FROM test1 WHERE k = 456 IF k = 456:
cql.apivalues.ProgrammingError: Bad Request: PRIMARY KEY part k found in SET part

[1] https://issues.apache.org/jira/browse/CASSANDRA-5715
[jira] [Updated] (CASSANDRA-5830) Paxos loops endlessly due to faulty condition check
[ https://issues.apache.org/jira/browse/CASSANDRA-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Soumava Ghosh updated CASSANDRA-5830: -

Description:
Following is the code segment (StorageProxy.java:328) which causes the issue: start is the start time of the paxos round and is always less than the current system time, so the difference start - System.nanoTime() is negative and therefore always less than the timeout.
{code:title=StorageProxy.java|borderStyle=solid}
private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata, List<InetAddress> liveEndpoints, int requiredParticipants, ConsistencyLevel consistencyForPaxos)
throws WriteTimeoutException
{
    long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
    PrepareCallback summary = null;
    while (start - System.nanoTime() < timeout)
    {
        long ballotMillis = summary == null
                          ? System.currentTimeMillis()
                          : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
        UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
{code}
Here, the paxos gets stuck when PREPARE returns 'true' but with an inProgressCommit. The code in StorageProxy.java:beginAndRepairPaxos() then tries to issue a PREPARE and COMMIT for the inProgressCommit, and if it repeatedly receives 'false' as a PREPARE_RESPONSE it gets stuck in an endless loop until PREPARE_RESPONSE is true.

was:
Following is the code segment (StorageProxy.java:328) which causes the issue: start is the start time of the paxos round and is always less than the current system time, so the difference start - System.nanoTime() is negative and therefore always less than the timeout.

private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata, List<InetAddress> liveEndpoints, int requiredParticipants, ConsistencyLevel consistencyForPaxos)
throws WriteTimeoutException
{
    long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
    PrepareCallback summary = null;
    while (start - System.nanoTime() < timeout)
    {
        long ballotMillis = summary == null
                          ? System.currentTimeMillis()
                          : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
        UUID ballot = UUIDGen.getTimeUUID(ballotMillis);

Here, the paxos gets stuck when PREPARE returns 'true' but with an inProgressCommit. The code in StorageProxy.java:beginAndRepairPaxos() then tries to issue a PREPARE and COMMIT for the inProgressCommit, and if it repeatedly receives 'false' as a PREPARE_RESPONSE it gets stuck in an endless loop until PREPARE_RESPONSE is true.

Paxos loops endlessly due to faulty condition check
---
Key: CASSANDRA-5830
URL: https://issues.apache.org/jira/browse/CASSANDRA-5830
Project: Cassandra
Issue Type: Bug
Affects Versions: 2.0 beta 2
Reporter: Soumava Ghosh

Following is the code segment (StorageProxy.java:328) which causes the issue: start is the start time of the paxos round and is always less than the current system time, so the difference start - System.nanoTime() is negative and therefore always less than the timeout.
{code:title=StorageProxy.java|borderStyle=solid}
private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata, List<InetAddress> liveEndpoints, int requiredParticipants, ConsistencyLevel consistencyForPaxos)
throws WriteTimeoutException
{
    long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
    PrepareCallback summary = null;
    while (start - System.nanoTime() < timeout)
    {
        long ballotMillis = summary == null
                          ? System.currentTimeMillis()
                          : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
        UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
{code}
Here, the paxos gets stuck when PREPARE returns 'true' but with an inProgressCommit. The code in StorageProxy.java:beginAndRepairPaxos() then tries to issue a PREPARE and COMMIT for the inProgressCommit, and if it repeatedly receives 'false' as a PREPARE_RESPONSE it gets stuck in an endless loop until PREPARE_RESPONSE is true.
[jira] [Updated] (CASSANDRA-5830) Paxos loops endlessly due to faulty condition check
[ https://issues.apache.org/jira/browse/CASSANDRA-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Soumava Ghosh updated CASSANDRA-5830: -

Description:
Following is the code segment (StorageProxy.java:328) which causes the issue: start is the start time of the paxos round and is always less than the current system time, so the difference start - System.nanoTime() is negative and therefore always less than the timeout.
{code:title=StorageProxy.java|borderStyle=solid}
private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata, List<InetAddress> liveEndpoints, int requiredParticipants, ConsistencyLevel consistencyForPaxos)
throws WriteTimeoutException
{
    long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
    PrepareCallback summary = null;
    while (start - System.nanoTime() < timeout)
    {
        long ballotMillis = summary == null
                          ? System.currentTimeMillis()
                          : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
        UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
{code}
Here, the paxos gets stuck when PREPARE returns 'true' but with an inProgressCommit. The code in StorageProxy.java:beginAndRepairPaxos() then tries to issue a PROPOSE and COMMIT for the inProgressCommit, and if it repeatedly receives 'false' as a PREPARE_RESPONSE it gets stuck in an endless loop until PREPARE_RESPONSE is true.

was:
Following is the code segment (StorageProxy.java:328) which causes the issue: start is the start time of the paxos round and is always less than the current system time, so the difference start - System.nanoTime() is negative and therefore always less than the timeout.
{code:title=StorageProxy.java|borderStyle=solid}
private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata, List<InetAddress> liveEndpoints, int requiredParticipants, ConsistencyLevel consistencyForPaxos)
throws WriteTimeoutException
{
    long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
    PrepareCallback summary = null;
    while (start - System.nanoTime() < timeout)
    {
        long ballotMillis = summary == null
                          ? System.currentTimeMillis()
                          : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
        UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
{code}
Here, the paxos gets stuck when PREPARE returns 'true' but with an inProgressCommit. The code in StorageProxy.java:beginAndRepairPaxos() then tries to issue a PREPARE and COMMIT for the inProgressCommit, and if it repeatedly receives 'false' as a PREPARE_RESPONSE it gets stuck in an endless loop until PREPARE_RESPONSE is true.

Paxos loops endlessly due to faulty condition check
---
Key: CASSANDRA-5830
URL: https://issues.apache.org/jira/browse/CASSANDRA-5830
Project: Cassandra
Issue Type: Bug
Affects Versions: 2.0 beta 2
Reporter: Soumava Ghosh

Following is the code segment (StorageProxy.java:328) which causes the issue: start is the start time of the paxos round and is always less than the current system time, so the difference start - System.nanoTime() is negative and therefore always less than the timeout.
{code:title=StorageProxy.java|borderStyle=solid}
private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata, List<InetAddress> liveEndpoints, int requiredParticipants, ConsistencyLevel consistencyForPaxos)
throws WriteTimeoutException
{
    long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
    PrepareCallback summary = null;
    while (start - System.nanoTime() < timeout)
    {
        long ballotMillis = summary == null
                          ? System.currentTimeMillis()
                          : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
        UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
{code}
Here, the paxos gets stuck when PREPARE returns 'true' but with an inProgressCommit. The code in StorageProxy.java:beginAndRepairPaxos() then tries to issue a PROPOSE and COMMIT for the inProgressCommit, and if it repeatedly receives 'false' as a PREPARE_RESPONSE it gets stuck in an endless loop until PREPARE_RESPONSE is true.
[jira] [Issue Comment Deleted] (CASSANDRA-5664) Improve serialization in the native protocol
[ https://issues.apache.org/jira/browse/CASSANDRA-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-5664: --
Comment: was deleted (was: Can you review [~danielnorberg]?)

Improve serialization in the native protocol
Key: CASSANDRA-5664
URL: https://issues.apache.org/jira/browse/CASSANDRA-5664
Project: Cassandra
Issue Type: Improvement
Reporter: Sylvain Lebresne
Assignee: Sylvain Lebresne
Priority: Minor
Fix For: 2.0
Attachments: 0001-Rewrite-encoding-methods.txt, 0002-Avoid-copy-when-compressing-native-protocol-frames.txt

Message serialization in the native protocol currently makes use of Netty's ChannelBuffers.wrappedBuffer(). The rationale was to avoid copying the value bytes when such values are biggish. This has a cost, however, especially with lots of small values, and as suggested in CASSANDRA-5422 this might well be a more common scenario for Cassandra, so let's consider directly serializing into a newly allocated buffer.
[jira] [Commented] (CASSANDRA-5792) Buffer Underflow during streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724460#comment-13724460 ] Jonathan Ellis commented on CASSANDRA-5792: --- +1

Buffer Underflow during streaming
-
Key: CASSANDRA-5792
URL: https://issues.apache.org/jira/browse/CASSANDRA-5792
Project: Cassandra
Issue Type: Bug
Reporter: Brandon Williams
Assignee: Yuki Morishita
Fix For: 2.0
Attachments: 5792.txt

{noformat}
ERROR [STREAM-IN-/127.0.0.3] 2013-07-22 16:19:50,597 StreamSession.java (line 414) Streaming error occurred
java.nio.BufferUnderflowException
        at java.nio.Buffer.nextGetIndex(Buffer.java:492)
        at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:135)
        at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:52)
        at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:288)
        at java.lang.Thread.run(Thread.java:722)
{noformat}
[jira] [Updated] (CASSANDRA-5830) Paxos loops endlessly due to faulty condition check
[ https://issues.apache.org/jira/browse/CASSANDRA-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Soumava Ghosh updated CASSANDRA-5830: -

Description:
Following is the code segment (StorageProxy.java:361) which causes the issue: start is the start time of the paxos round and is always less than the current system time, so the difference start - System.nanoTime() is negative and therefore always less than the timeout.
{code:title=StorageProxy.java|borderStyle=solid}
private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata, List<InetAddress> liveEndpoints, int requiredParticipants, ConsistencyLevel consistencyForPaxos)
throws WriteTimeoutException
{
    long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
    PrepareCallback summary = null;
    while (start - System.nanoTime() < timeout)
    {
        long ballotMillis = summary == null
                          ? System.currentTimeMillis()
                          : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
        UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
{code}
Here, the paxos gets stuck when PREPARE returns 'true' but with an inProgressCommit. The code in StorageProxy.java:beginAndRepairPaxos() then tries to issue a PROPOSE and COMMIT for the inProgressCommit, and if it repeatedly receives 'false' as a PREPARE_RESPONSE it gets stuck in an endless loop until PREPARE_RESPONSE is true.

was:
Following is the code segment (StorageProxy.java:328) which causes the issue: start is the start time of the paxos round and is always less than the current system time, so the difference start - System.nanoTime() is negative and therefore always less than the timeout.
{code:title=StorageProxy.java|borderStyle=solid}
private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata, List<InetAddress> liveEndpoints, int requiredParticipants, ConsistencyLevel consistencyForPaxos)
throws WriteTimeoutException
{
    long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
    PrepareCallback summary = null;
    while (start - System.nanoTime() < timeout)
    {
        long ballotMillis = summary == null
                          ? System.currentTimeMillis()
                          : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
        UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
{code}
Here, the paxos gets stuck when PREPARE returns 'true' but with an inProgressCommit. The code in StorageProxy.java:beginAndRepairPaxos() then tries to issue a PROPOSE and COMMIT for the inProgressCommit, and if it repeatedly receives 'false' as a PREPARE_RESPONSE it gets stuck in an endless loop until PREPARE_RESPONSE is true.

Paxos loops endlessly due to faulty condition check
---
Key: CASSANDRA-5830
URL: https://issues.apache.org/jira/browse/CASSANDRA-5830
Project: Cassandra
Issue Type: Bug
Affects Versions: 2.0 beta 2
Reporter: Soumava Ghosh

Following is the code segment (StorageProxy.java:361) which causes the issue: start is the start time of the paxos round and is always less than the current system time, so the difference start - System.nanoTime() is negative and therefore always less than the timeout.
{code:title=StorageProxy.java|borderStyle=solid}
private static UUID beginAndRepairPaxos(long start, ByteBuffer key, CFMetaData metadata, List<InetAddress> liveEndpoints, int requiredParticipants, ConsistencyLevel consistencyForPaxos)
throws WriteTimeoutException
{
    long timeout = TimeUnit.MILLISECONDS.toNanos(DatabaseDescriptor.getCasContentionTimeout());
    PrepareCallback summary = null;
    while (start - System.nanoTime() < timeout)
    {
        long ballotMillis = summary == null
                          ? System.currentTimeMillis()
                          : Math.max(System.currentTimeMillis(), 1 + UUIDGen.unixTimestamp(summary.inProgressCommit.ballot));
        UUID ballot = UUIDGen.getTimeUUID(ballotMillis);
{code}
Here, the paxos gets stuck when PREPARE returns 'true' but with an inProgressCommit. The code in StorageProxy.java:beginAndRepairPaxos() then tries to issue a PROPOSE and COMMIT for the inProgressCommit, and if it repeatedly receives 'false' as a PREPARE_RESPONSE it gets stuck in an endless loop until PREPARE_RESPONSE is true.
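The faulty condition in this ticket is easy to see in isolation. In a minimal sketch (hypothetical names, not the actual StorageProxy code): because start was captured from System.nanoTime() before the loop, start - System.nanoTime() is never positive, so it stays below any positive timeout and the loop can never expire. Comparing elapsed time against the timeout restores the intended bound.

```java
import java.util.concurrent.TimeUnit;

public class PaxosTimeoutSketch {
    // Buggy form: start - now is <= 0, which is always below a positive
    // timeout, so this check alone can never make the loop stop.
    static boolean buggyShouldContinue(long startNanos, long nowNanos, long timeoutNanos) {
        return startNanos - nowNanos < timeoutNanos;
    }

    // Corrected form: elapsed nanoseconds compared against the timeout.
    static boolean fixedShouldContinue(long startNanos, long nowNanos, long timeoutNanos) {
        return nowNanos - startNanos < timeoutNanos;
    }

    public static void main(String[] args) {
        long start = 0L;
        long timeout = TimeUnit.MILLISECONDS.toNanos(1000);
        long wellPastTimeout = start + 10 * timeout; // 10x beyond the deadline
        System.out.println(buggyShouldContinue(start, wellPastTimeout, timeout)); // true: never times out
        System.out.println(fixedShouldContinue(start, wellPastTimeout, timeout)); // false: loop exits
    }
}
```

Writing the guard as a subtraction (`now - start < timeout`) is also the idiomatic way to compare `System.nanoTime()` readings, since it remains correct even if the counter wraps around.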
git commit: Fix buffer underflow on socket close
Updated Branches:
  refs/heads/trunk 8c795372c -> 22bf2c40e

Fix buffer underflow on socket close

patch by yukim; reviewed by jbellis for CASSANDRA-5792

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/22bf2c40
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/22bf2c40
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/22bf2c40
Branch: refs/heads/trunk
Commit: 22bf2c40e46ddf11e1ae0faf0db9e58578c662c4
Parents: 8c79537
Author: Yuki Morishita <yu...@apache.org>
Authored: Tue Jul 30 17:58:59 2013 -0500
Committer: Yuki Morishita <yu...@apache.org>
Committed: Tue Jul 30 17:58:59 2013 -0500

 CHANGES.txt | 7 ++-
 .../cassandra/streaming/messages/StreamMessage.java | 16
 2 files changed, 14 insertions(+), 9 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/22bf2c40/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index 15223e2..0b74b05 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -6,6 +6,8 @@
  * Fix range tombstone bug (CASSANDRA-5805)
  * DC-local CAS (CASSANDRA-5797)
  * Add a native_protocol_version column to the system.local table (CASSANRDA-5819)
+ * Use index_interval from cassandra.yaml when upgraded (CASSANDRA-5822)
+ * Fix buffer underflow on socket close (CASSANDRA-5792)
 Merged from 1.2:
  * fix bulk-loading compressed sstables (CASSANDRA-5820)
  * (Hadoop) fix quoting in CqlPagingRecordReader and CqlRecordWriter
@@ -13,16 +15,11 @@ Merged from 1.2:
  * update default LCS sstable size to 160MB (CASSANDRA-5727)
  * Allow compacting 2Is via nodetool (CASSANDRA-5670)
  * Hex-encode non-String keys in OPP (CASSANDRA-5793)
-
-
-1.2.8
- Allow compacting 2Is via nodetool
  * Fix reading DeletionTime from 1.1-format sstables (CASSANDRA-5814)
  * cqlsh: add collections support to COPY (CASSANDRA-5698)
  * retry important messages for any IOException (CASSANDRA-5804)
  * Allow empty IN relations in SELECT/UPDATE/DELETE statements (CASSANDRA-5626)
  * cqlsh: fix crashing on Windows due to libedit detection (CASSANDRA-5812)
- * Use index_interval from cassandra.yaml when upgraded (CASSANDRA-5822)

 2.0.0-beta2

http://git-wip-us.apache.org/repos/asf/cassandra/blob/22bf2c40/src/java/org/apache/cassandra/streaming/messages/StreamMessage.java

diff --git a/src/java/org/apache/cassandra/streaming/messages/StreamMessage.java b/src/java/org/apache/cassandra/streaming/messages/StreamMessage.java
index 8ba731a..2e7341b 100644
--- a/src/java/org/apache/cassandra/streaming/messages/StreamMessage.java
+++ b/src/java/org/apache/cassandra/streaming/messages/StreamMessage.java
@@ -47,10 +47,18 @@ public abstract class StreamMessage
     public static StreamMessage deserialize(ReadableByteChannel in, int version, StreamSession session) throws IOException
     {
         ByteBuffer buff = ByteBuffer.allocate(1);
-        in.read(buff);
-        buff.flip();
-        Type type = Type.get(buff.get());
-        return type.serializer.deserialize(in, version, session);
+        if (in.read(buff) > 0)
+        {
+            buff.flip();
+            Type type = Type.get(buff.get());
+            return type.serializer.deserialize(in, version, session);
+        }
+        else
+        {
+            // when socket gets closed, there is a chance that buff is empty
+            // in that case, just return null
+            return null;
+        }
     }

     /** StreamMessage serializer */
[jira] [Commented] (CASSANDRA-5792) Buffer Underflow during streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724568#comment-13724568 ]

Yuki Morishita commented on CASSANDRA-5792:
-------------------------------------------

Committed.

Buffer Underflow during streaming
---------------------------------
                Key: CASSANDRA-5792
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5792
            Project: Cassandra
         Issue Type: Bug
           Reporter: Brandon Williams
           Assignee: Yuki Morishita
            Fix For: 2.0 rc1
        Attachments: 5792.txt

{noformat}
ERROR [STREAM-IN-/127.0.0.3] 2013-07-22 16:19:50,597 StreamSession.java (line 414) Streaming error occurred
java.nio.BufferUnderflowException
        at java.nio.Buffer.nextGetIndex(Buffer.java:492)
        at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:135)
        at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:52)
        at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:288)
        at java.lang.Thread.run(Thread.java:722)
{noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
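The BufferUnderflowException above fires when deserialize() calls buff.get() on a buffer that never received a byte because the peer closed the socket (read() returned -1). The sketch below is a minimal, self-contained illustration of the guarded read the committed patch applies; readTypeByte is a hypothetical stand-in for the one-byte type-header read in StreamMessage.deserialize, not an actual Cassandra method:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

public class StreamHeaderRead
{
    // Read the one-byte message-type header. When the channel is already at
    // end-of-stream, read() returns -1 and the buffer stays empty; calling
    // buff.get() then would throw java.nio.BufferUnderflowException, so we
    // return null instead, mirroring the committed fix.
    static Byte readTypeByte(ReadableByteChannel in) throws IOException
    {
        ByteBuffer buff = ByteBuffer.allocate(1);
        if (in.read(buff) > 0)
        {
            buff.flip();
            return buff.get();
        }
        return null; // socket closed: nothing to deserialize
    }

    public static void main(String[] args) throws IOException
    {
        ReadableByteChannel open = Channels.newChannel(new ByteArrayInputStream(new byte[]{ 7 }));
        ReadableByteChannel drained = Channels.newChannel(new ByteArrayInputStream(new byte[0]));
        System.out.println(readTypeByte(open));    // 7
        System.out.println(readTypeByte(drained)); // null
    }
}
```

The caller (ConnectionHandler's incoming-message loop in the real code) then treats a null message as end-of-session rather than an error.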
[jira] [Updated] (CASSANDRA-5792) Buffer Underflow during streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuki Morishita updated CASSANDRA-5792:
--------------------------------------
    Reviewer: jbellis  (was: slebresne)

Buffer Underflow during streaming
---------------------------------
                Key: CASSANDRA-5792
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5792
            Project: Cassandra
         Issue Type: Bug
           Reporter: Brandon Williams
           Assignee: Yuki Morishita
            Fix For: 2.0 rc1
        Attachments: 5792.txt
[jira] [Updated] (CASSANDRA-5792) Buffer Underflow during streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuki Morishita updated CASSANDRA-5792:
--------------------------------------
    Fix Version/s:     (was: 2.0)
                   2.0 rc1

Buffer Underflow during streaming
---------------------------------
                Key: CASSANDRA-5792
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5792
            Project: Cassandra
         Issue Type: Bug
           Reporter: Brandon Williams
           Assignee: Yuki Morishita
            Fix For: 2.0 rc1
        Attachments: 5792.txt
[jira] [Created] (CASSANDRA-5833) Duplicate classes in Cassandra-all package.
sam schumer created CASSANDRA-5833:
-----------------------------------

            Summary: Duplicate classes in cassandra-all package
                Key: CASSANDRA-5833
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5833
            Project: Cassandra
         Issue Type: Bug
         Components: API, Packaging
   Affects Versions: 1.2.7, 1.1.6
           Reporter: sam schumer

As of cassandra-all version 1.1.6, the classes org.apache.cassandra.thrift.ITransportFactory and org.apache.cassandra.thrift.TFramedTransportFactory are present in both the cassandra-thrift and the cassandra-all Maven JARs, and cassandra-thrift is declared as a dependency in the cassandra-all POM. This makes projects that depend on cassandra-all unbuildable when the duplicate-finder Maven extension is enabled. The files were originally copied over for [CASSANDRA-4668|https://issues.apache.org/jira/browse/CASSANDRA-4668]; every version since has failed to build under this extension.
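Until the packaging is fixed, one possible consumer-side workaround is to keep only one copy of the duplicated classes on the classpath. This POM fragment is a hypothetical sketch (the version number is illustrative) using standard Maven dependency exclusions; since cassandra-all itself contains copies of the thrift factory classes, excluding cassandra-thrift should not lose any class:

```xml
<!-- Hypothetical consumer pom.xml fragment: depend on cassandra-all but
     exclude the cassandra-thrift artifact so the duplicated classes
     (ITransportFactory, TFramedTransportFactory) appear only once. -->
<dependency>
  <groupId>org.apache.cassandra</groupId>
  <artifactId>cassandra-all</artifactId>
  <version>1.2.7</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.cassandra</groupId>
      <artifactId>cassandra-thrift</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Whether this is acceptable depends on whether anything else in the build requires the cassandra-thrift artifact directly.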
[jira] [Commented] (CASSANDRA-5823) nodetool history logging
[ https://issues.apache.org/jira/browse/CASSANDRA-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724685#comment-13724685 ]

Dave Brosius commented on CASSANDRA-5823:
-----------------------------------------

nice. just a q... theoretically you could have both the old and new locations populated (perhaps because you ran an old version after running a new version -- perhaps only a developer issue?). That means you would copy the old files over top of the new ones the next time you run the new version. If you are going to preserve the old files (and one could argue whether that is really necessary), you should probably check that new files AREN'T already there.

small nit: not sure FBUtilities.getHistoryDirectory() is the right name, but ok

nodetool history logging
------------------------
                Key: CASSANDRA-5823
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5823
            Project: Cassandra
         Issue Type: New Feature
         Components: Tools
           Reporter: Jason Brown
           Assignee: Jason Brown
           Priority: Minor
            Fix For: 1.2.8, 2.0 rc1
        Attachments: 5823-v1.patch, 5823-v2.patch

Capture the commands and time executed from nodetool into a log file, similar to the cli.
[jira] [Commented] (CASSANDRA-5823) nodetool history logging
[ https://issues.apache.org/jira/browse/CASSANDRA-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13724891#comment-13724891 ]

Jason Brown commented on CASSANDRA-5823:
----------------------------------------

bq. perhaps because you run an old version after running a new version. [perhaps only a developer issue?]

Heh, funny you mention that, because I ran into exactly that today :). However, I suspect that once this is committed to 1.2 and trunk, everyone (including all us c* devs) won't necessarily be going back to older versions and hitting the old-files problem. It's easy enough to check whether the new files exist before copying; I can add that additional check.

bq. not sure FBUtilities.getHistoryDirectory() is the right name

Agreed. Maybe FBU.getToolsOutputDirectory()?

nodetool history logging
------------------------
                Key: CASSANDRA-5823
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5823
            Project: Cassandra
         Issue Type: New Feature
         Components: Tools
           Reporter: Jason Brown
           Assignee: Jason Brown
           Priority: Minor
            Fix For: 1.2.8, 2.0 rc1
        Attachments: 5823-v1.patch, 5823-v2.patch

Capture the commands and time executed from nodetool into a log file, similar to the cli.
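The "copy only if the new files aren't already there" check agreed above can be sketched as follows. This is a hypothetical illustration under assumed names (migrateIfAbsent is not the method in the patch), using java.nio.file:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class HistoryMigration
{
    // Migrate a legacy history file into the new tools-output directory only
    // when the new location has not been written yet, so re-running an old
    // version can never clobber newer history. Returns true if a copy happened.
    static boolean migrateIfAbsent(Path oldFile, Path newFile) throws IOException
    {
        if (Files.exists(oldFile) && !Files.exists(newFile))
        {
            Files.createDirectories(newFile.getParent());
            Files.copy(oldFile, newFile);
            return true;
        }
        return false;
    }

    public static void main(String[] args) throws IOException
    {
        Path tmp = Files.createTempDirectory("hist");
        Path oldFile = tmp.resolve("old/nodetool.history");
        Path newFile = tmp.resolve("new/nodetool.history");
        Files.createDirectories(oldFile.getParent());
        Files.write(oldFile, "status\n".getBytes());

        System.out.println(migrateIfAbsent(oldFile, newFile)); // true: first run copies
        System.out.println(migrateIfAbsent(oldFile, newFile)); // false: new file already exists
    }
}
```

The second call is a no-op, which is exactly the behavior Dave asks for: old files are preserved for reference, but never copied over top of newer ones.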