[jira] [Resolved] (CASSANDRA-5698) Confirm with cqlsh of Cassandra-1.2.5, the behavior of the export/import
[ https://issues.apache.org/jira/browse/CASSANDRA-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksey Yeschenko resolved CASSANDRA-5698.
------------------------------------------
    Resolution: Later

Nothing is compromised, but collections import isn't supported by cqlsh yet if either keys or values require quoting (ascii, text (varchar), timestamp, and inet types).

> Confirm with cqlsh of Cassandra-1.2.5, the behavior of the export/import
> ------------------------------------------------------------------------
>
>                 Key: CASSANDRA-5698
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5698
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>    Affects Versions: 1.1.2, 1.1.3
>         Environment: Using cqlsh's COPY, when the data includes "{" or "[" (i.e. a collection type), I think data integrity is compromised in the export/import process.
>            Reporter: Hiroshi Kise
>            Assignee: Aleksey Yeschenko
>            Priority: Minor
>
> The concrete steps are as follows.
>
> (1) Map type export/import
>
> Export:
> [root@castor bin]# ./cqlsh
> Connected to Test Cluster at localhost:9160.
> [cqlsh 3.0.2 | Cassandra 1.2.5 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
> Use HELP for help.
> cqlsh> create keyspace maptestks with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : '1' };
> cqlsh> use maptestks;
> cqlsh:maptestks> create table maptestcf (rowkey varchar PRIMARY KEY, targetmap map<varchar,varchar>);
> cqlsh:maptestks> insert into maptestcf (rowkey, targetmap) values ('rowkey',{'mapkey':'mapvalue'});
> cqlsh:maptestks> select * from maptestcf;
>  rowkey | targetmap
> --------+--------------------
>  rowkey | {mapkey: mapvalue}
> cqlsh:maptestks> copy maptestcf to 'maptestcf-20130619.txt';
> 1 rows exported in 0.008 seconds.
> cqlsh:maptestks> exit;
> [root@castor bin]# cat maptestcf-20130619.txt
> rowkey,{mapkey: mapvalue}
>
> (a) Import:
> [root@castor bin]# ./cqlsh
> Connected to Test Cluster at localhost:9160.
> [cqlsh 3.0.2 | Cassandra 1.2.5 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
> Use HELP for help.
> cqlsh> create keyspace mapimptestks with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : '1' };
> cqlsh> use mapimptestks;
> cqlsh:mapimptestks> create table mapimptestcf (rowkey varchar PRIMARY KEY, targetmap map<varchar,varchar>);
> cqlsh:mapimptestks> copy mapimptestcf from 'maptestcf-20130619.txt';
> Bad Request: line 1:83 no viable alternative at input '}'
> Aborting import at record #0 (line 1). Previously-inserted values still present.
> 0 rows imported in 0.025 seconds.
>
> (2) List type export/import
>
> Export:
> [root@castor bin]# ./cqlsh
> Connected to Test Cluster at localhost:9160.
> [cqlsh 3.0.2 | Cassandra 1.2.5 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
> Use HELP for help.
> cqlsh> create keyspace listtestks with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : '1' };
> cqlsh> use listtestks;
> cqlsh:listtestks> create table listtestcf (rowkey varchar PRIMARY KEY, value list<varchar>);
> cqlsh:listtestks> insert into listtestcf (rowkey,value) values ('rowkey',['value1','value2']);
> cqlsh:listtestks> select * from listtestcf;
>  rowkey | value
> --------+------------------
>  rowkey | [value1, value2]
> cqlsh:listtestks> copy listtestcf to 'listtestcf-20130619.txt';
> 1 rows exported in 0.014 seconds.
> cqlsh:listtestks> exit;
> [root@castor bin]# cat listtestcf-20130619.txt
> rowkey,[value1, value2]
>
> (b) Import:
> [root@castor bin]# ./cqlsh
> Connected to Test Cluster at localhost:9160.
> [cqlsh 3.0.2 | Cassandra 1.2.5 | CQL spec 3.0.0 | Thrift protocol 19.36.0]
> Use HELP for help.
> cqlsh> create keyspace listimptestks with replication = { 'class' : 'SimpleStrategy', 'replication_factor' : '1' };
> cqlsh> use listimptestks;
> cqlsh:listimptestks> create table listimptestcf (rowkey varchar PRIMARY KEY, value list<varchar>);
> cqlsh:listimptestks> copy listimptestcf from 'listtestcf-20130619.txt';
> Bad Request: line 1:79 no viable alternative at input ']'
> Aborting import at record #0 (line 1). Previously-inserted values still present.
> 0 rows imported in 0.030 seconds.
>
> Reference (correct, or an error of a different kind): I rewrote the export file manually.
> [root@castor bin]# cat nlisttestcf-20130619.txt
> rowkey,['value1',' value2']
> cqlsh:listimptestks> copy listimptestcf from 'nlisttestcf-20130619.txt';
> 1 rows imported in 0.035 seconds.
> cqlsh:listimptestks> select * from implisttestcf;
>  rowkey | value
> --------+------------------
>  rowkey | [value1, value2]
> cqlsh:implisttestks> exit;
> [root@castor bin]# cat nmaptestcf-20130619.txt
> rowkey,"{'mapkey': 'mapvalue'}"
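The failure above comes down to CSV quoting: the exported collection literal contains commas and braces, so a line like `rowkey,{mapkey: mapvalue}` gets re-split on the inner comma during import, while the reporter's manually quoted `rowkey,"{'mapkey': 'mapvalue'}"` round-trips. The following Python sketch (illustrative only, not cqlsh's actual COPY code) shows why quoting the whole field fixes the round trip:

```python
import csv
import io

# A row whose map column is rendered as a CQL collection literal.
# The two-entry map makes the embedded-comma problem visible.
row = ["rowkey", "{'mapkey': 'mapvalue', 'mapkey2': 'mapvalue2'}"]

# Naive export: join fields with commas, as the 1.2.5 exporter effectively did.
naive = ",".join(row)
# Re-reading the naive line tears the map literal apart at its inner comma.
broken = next(csv.reader(io.StringIO(naive)))
assert len(broken) == 3  # two fields became three

# Quoted export: csv.writer quotes any field that contains the delimiter,
# so the whole collection literal survives as a single field.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerow(row)
fixed = next(csv.reader(io.StringIO(buf.getvalue())))
assert fixed == row  # round-trips as exactly two fields
```

This matches the manual fix in the "Reference" transcript: wrapping the map literal in double quotes lets the importer hand the literal to the CQL parser intact.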
[jira] [Commented] (CASSANDRA-5698) Confirm with cqlsh of Cassandra-1.2.5, the behavior of the export/import
[ https://issues.apache.org/jira/browse/CASSANDRA-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13692801#comment-13692801 ]

Jonathan Ellis commented on CASSANDRA-5698:
-------------------------------------------
Should we tag it 2.0.1 instead of closing? Seems reasonably important for COPY to be feature-complete to me.
[jira] [Reopened] (CASSANDRA-5698) Confirm with cqlsh of Cassandra-1.2.5, the behavior of the export/import
[ https://issues.apache.org/jira/browse/CASSANDRA-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksey Yeschenko reopened CASSANDRA-5698:
------------------------------------------
[jira] [Updated] (CASSANDRA-5698) cqlsh should support collections in COPY FROM
[ https://issues.apache.org/jira/browse/CASSANDRA-5698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksey Yeschenko updated CASSANDRA-5698:
-----------------------------------------
    Fix Version/s: 2.0.1
          Summary: cqlsh should support collections in COPY FROM  (was: Confirm with cqlsh of Cassandra-1.2.5, the behavior of the export/import)

As you wish.
[jira] [Commented] (CASSANDRA-5697) Semi-colons not Allowed in CQL3 Batch Statements
[ https://issues.apache.org/jira/browse/CASSANDRA-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13692808#comment-13692808 ]

Aleksey Yeschenko commented on CASSANDRA-5697:
----------------------------------------------
The semicolon is optional. You are getting this error because cqlsh splits its statements internally. Using python-cql directly (or any other client), the semicolons will be accepted. It's questionable whether or not messing with cqlsh internals just to allow the semicolons in BATCH is worth it, but I'll have a look later today.

> Semi-colons not Allowed in CQL3 Batch Statements
> ------------------------------------------------
>
>                 Key: CASSANDRA-5697
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5697
>             Project: Cassandra
>          Issue Type: Bug
>          Components: API
>    Affects Versions: 1.2.0
>         Environment: Mac OSX, cqlsh 3.0.2
>            Reporter: Russell Alexander Spitzer
>            Assignee: Aleksey Yeschenko
>            Priority: Minor
>              Labels: cql
>
> The documentation for BATCH statements declares that semicolons are required between update operations. Currently, including them results in an error 'expecting K_APPLY'. To match the design specifications, semi-colons should be allowed or optional.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
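The comment explains the mechanism: cqlsh breaks its input buffer into statements at semicolons before the server ever sees it, so a semicolon inside BEGIN BATCH ... APPLY BATCH prematurely ends the statement. A hypothetical sketch of the kind of fix implied (my own simplification, not cqlsh's real splitter, which also has to handle string literals and comments):

```python
import re

def split_statements(text):
    """Split an input buffer into statements on ';', but treat everything
    between BEGIN BATCH and APPLY BATCH as one statement, so inner
    semicolons no longer terminate the batch early."""
    statements, current, in_batch = [], [], False
    for token in re.split(r"(;)", text):
        upper = token.upper()
        if "BEGIN BATCH" in upper:
            in_batch = True
        if "APPLY BATCH" in upper:
            in_batch = False
        if token == ";" and not in_batch:
            # A semicolon outside a batch ends the current statement.
            statements.append("".join(current).strip() + ";")
            current = []
        else:
            current.append(token)
    tail = "".join(current).strip()
    if tail:
        statements.append(tail)
    return statements
```

With this rule, `"BEGIN BATCH INSERT a; INSERT b; APPLY BATCH; SELECT x;"` splits into two statements: the whole batch and the SELECT, instead of aborting at the first inner semicolon.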
[jira] [Updated] (CASSANDRA-5697) cqlsh doesn't allow semicolons in BATCH statements
[ https://issues.apache.org/jira/browse/CASSANDRA-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksey Yeschenko updated CASSANDRA-5697:
-----------------------------------------
    Component/s:     (was: API)
                 Tools
         Labels: cqlsh  (was: cql)
[jira] [Updated] (CASSANDRA-5697) cqlsh doesn't allow semicolons in BATCH statements
[ https://issues.apache.org/jira/browse/CASSANDRA-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksey Yeschenko updated CASSANDRA-5697:
-----------------------------------------
    Summary: cqlsh doesn't allow semicolons in BATCH statements  (was: Semi-colons not Allowed in CQL3 Batch Statements)
[2/2] git commit: Changelog update
Changelog update

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a6ca5d49
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a6ca5d49
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a6ca5d49
Branch: refs/heads/cassandra-1.2
Commit: a6ca5d496facf79c187310e81b3eeba3e6bc4b43
Parents: 2d90eb6
Author: Sylvain Lebresne <sylv...@datastax.com>
Authored: Tue Jun 25 09:58:10 2013 +0200
Committer: Sylvain Lebresne <sylv...@datastax.com>
Committed: Tue Jun 25 09:58:10 2013 +0200

 CHANGES.txt | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6ca5d49/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index be0c1d0..2931916 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,8 +1,3 @@
-1.2.7
- * Fix ReadResponseSerializer.serializedSize() for digest reads (CASSANDRA-5476)
- * allow sstable2json on 2i CFs (CASSANDRA-5694)
-
-
 1.2.6
  * Fix tracing when operation completes before all responses arrive (CASSANDRA-5668)
  * Fix cross-DC mutation forwarding (CASSANDRA-5632)
@@ -39,6 +34,8 @@
  * Gossiper incorrectly drops AppState for an upgrading node (CASSANDRA-5660)
  * Connection thrashing during multi-region ec2 during upgrade, due to messaging version (CASSANDRA-5669)
  * Avoid over reconnecting in EC2MRS (CASSANDRA-5678)
+ * Fix ReadResponseSerializer.serializedSize() for digest reads (CASSANDRA-5476)
+ * allow sstable2json on 2i CFs (CASSANDRA-5694)
 Merged from 1.1:
  * Remove buggy thrift max message length option (CASSANDRA-5529)
  * Fix NPE in Pig's widerow mode (CASSANDRA-5488)
[1/2] git commit: Revert #5665 (b7e13b89c265c28acfb624a984b97a06a837c3ea) due to tests failures
Updated Branches:
  refs/heads/cassandra-1.2 81619fe9c -> a6ca5d496

Revert #5665 (b7e13b89c265c28acfb624a984b97a06a837c3ea) due to tests failures

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2d90eb65
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2d90eb65
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2d90eb65
Branch: refs/heads/cassandra-1.2
Commit: 2d90eb65bffd5787ff77403a4f3bc05605cfcd5a
Parents: 81619fe
Author: Sylvain Lebresne <sylv...@datastax.com>
Authored: Tue Jun 25 09:56:14 2013 +0200
Committer: Sylvain Lebresne <sylv...@datastax.com>
Committed: Tue Jun 25 09:56:14 2013 +0200

 src/java/org/apache/cassandra/gms/Gossiper.java | 46 +++++++++-----------
 1 file changed, 21 insertions(+), 25 deletions(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d90eb65/src/java/org/apache/cassandra/gms/Gossiper.java

diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java b/src/java/org/apache/cassandra/gms/Gossiper.java
index b629824..efa9865 100644
--- a/src/java/org/apache/cassandra/gms/Gossiper.java
+++ b/src/java/org/apache/cassandra/gms/Gossiper.java
@@ -871,7 +871,6 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean
                 if (logger.isTraceEnabled())
                     logger.trace("Updating heartbeat state generation to " + remoteGeneration + " from " + localGeneration + " for " + ep);
                 // major state change will handle the update by inserting the remote state directly
-                copyNewerApplicationStates(remoteState, localEpStatePtr);
                 handleMajorStateChange(ep, remoteState);
             }
             else if ( remoteGeneration == localGeneration ) // generation has not changed, apply new states
@@ -881,18 +880,11 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean
                 int remoteMaxVersion = getMaxEndpointStateVersion(remoteState);
                 if ( remoteMaxVersion > localMaxVersion )
                 {
-                    if (logger.isTraceEnabled())
-                    {
-                        logger.trace("Updating heartbeat state version to " + remoteState.getHeartBeatState().getHeartBeatVersion() + " from " + localEpStatePtr.getHeartBeatState().getHeartBeatVersion() + " for " + ep);
-                    }
-                    localEpStatePtr.setHeartBeatState(remoteState.getHeartBeatState());
-                    Map<ApplicationState, VersionedValue> merged = copyNewerApplicationStates(localEpStatePtr, remoteState);
-                    for (Entry<ApplicationState, VersionedValue> appState : merged.entrySet())
-                        doNotifications(ep, appState.getKey(), appState.getValue());
+                    // apply states, but do not notify since there is no major change
+                    applyNewStates(ep, localEpStatePtr, remoteState);
                 }
                 else if (logger.isTraceEnabled())
-                    logger.trace("Ignoring remote version " + remoteMaxVersion + " <= " + localMaxVersion + " for " + ep);
+                    logger.trace("Ignoring remote version " + remoteMaxVersion + " <= " + localMaxVersion + " for " + ep);
                 if (!localEpStatePtr.isAlive() && !isDeadState(localEpStatePtr)) // unless of course, it was dead
                     markAlive(ep, localEpStatePtr);
             }
@@ -911,24 +903,28 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean
         }
     }

-    private Map<ApplicationState, VersionedValue> copyNewerApplicationStates(EndpointState toState, EndpointState fromState)
+    private void applyNewStates(InetAddress addr, EndpointState localState, EndpointState remoteState)
     {
-        Map<ApplicationState, VersionedValue> merged = new HashMap<ApplicationState, VersionedValue>();
-        for (Entry<ApplicationState, VersionedValue> fromEntry : fromState.getApplicationStateMap().entrySet())
+        // don't assert here, since if the node restarts the version will go back to zero
+        int oldVersion = localState.getHeartBeatState().getHeartBeatVersion();
+
+        localState.setHeartBeatState(remoteState.getHeartBeatState());
+        if (logger.isTraceEnabled())
+            logger.trace("Updating heartbeat state version to " + localState.getHeartBeatState().getHeartBeatVersion() + " from " + oldVersion + " for " + addr + " ...");
+
+        // we need to make two loops here, one to apply, then another to notify, this way all states in an update are present and current when the
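The reconciliation rule this diff is juggling can be modeled in a few lines. The sketch below is a hypothetical simplification (the function name `reconcile` and the dict representation are mine, not the real `Gossiper` API): endpoint state is ordered by (generation, version); a higher remote generation means the node restarted and triggers a full "major" state replacement, while the same generation with a higher max version applies newer values without notifications.

```python
def reconcile(local, remote):
    """local/remote are dicts: {'generation': int, 'version': int}.
    Returns the action a gossiper following the diff's rule would take."""
    if remote["generation"] > local["generation"]:
        # Restart detected: replace the endpoint state wholesale.
        return "handleMajorStateChange"
    if remote["generation"] < local["generation"]:
        # Remote info is stale; keep local state.
        return "ignore"
    # Same generation: only strictly newer versions are applied.
    if remote["version"] > local["version"]:
        return "applyNewStates"
    return "ignore"
```

The CASSANDRA-5665 bug report (below) is about the first branch: `handleMajorStateChange` replaces the whole state, so if the incoming remote state is incomplete, previously known ApplicationState entries are lost.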
Git Push Summary
Updated Tags: refs/tags/1.2.6-tentative [deleted] 043a63c63
Git Push Summary
Updated Tags: refs/tags/1.2.6-tentative [created] a6ca5d496
[jira] [Updated] (CASSANDRA-5665) Gossiper.handleMajorStateChange can lose existing node ApplicationState
[ https://issues.apache.org/jira/browse/CASSANDRA-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-5665:
----------------------------------------
    Fix Version/s:     (was: 1.2.6)
                   1.2.7

> Gossiper.handleMajorStateChange can lose existing node ApplicationState
> -----------------------------------------------------------------------
>
>                 Key: CASSANDRA-5665
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5665
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.2.5
>            Reporter: Jason Brown
>            Assignee: Jason Brown
>            Priority: Minor
>              Labels: gossip, upgrade
>             Fix For: 1.2.7, 2.0 beta 1
>         Attachments: 5665-v1.diff, 5665-v2.diff
>
> Dovetailing on CASSANDRA-5660, I discovered that further along during an upgrade, when more nodes are on the new major version, a node on the previous version can be passed incomplete gossip info about another, already upgraded node, and the older node then drops ApplicationState info about that node.
> I think what happens is that a 1.1 node (older rev) gets gossip info from a 1.2 node (A), which includes incomplete (lacking some AppState data) gossip info about another 1.2 node (B). The 1.1 node, which has incorrectly kicked node B out of gossip due to the bug described in #5660, then takes that incomplete node B info and wholesale replaces any previously known state about node B in Gossiper.handleMajorStateChange. Thus, if we previously had DC/RACK info, it gets dropped as part of the endpointStateMap.put(endpointState). When the data being passed is incomplete, 1.1 will start referencing node B and gets into the NPE situation in #5498.
> Anecdotally, this bad state is short-lived (less than a few minutes, sometimes as short as ten seconds) until gossip catches up and properly propagates the AppState data. Furthermore, when upgrading a two-datacenter, 48-node cluster, it only occurred on two nodes, for less than a minute each. Thus the scope seems limited, but it can occur.
[jira] [Reopened] (CASSANDRA-5665) Gossiper.handleMajorStateChange can lose existing node ApplicationState
[ https://issues.apache.org/jira/browse/CASSANDRA-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne reopened CASSANDRA-5665:
-----------------------------------------
I've reverted this since it is causing test issues.
[jira] [Updated] (CASSANDRA-5476) Exceptions in 1.1 nodes with 1.2 nodes in ring
[ https://issues.apache.org/jira/browse/CASSANDRA-5476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-5476:
----------------------------------------
    Fix Version/s:     (was: 1.2.7)
                   1.2.6

> Exceptions in 1.1 nodes with 1.2 nodes in ring
> ----------------------------------------------
>
>                 Key: CASSANDRA-5476
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5476
>             Project: Cassandra
>          Issue Type: Bug
>    Affects Versions: 1.1.9, 1.2.5
>            Reporter: John Watson
>             Fix For: 1.2.6
>         Attachments: 0001.patch
>
> As 1.1.9 nodes were being upgraded to 1.2.3, the 1.1.9 nodes started throwing this exception:
> {noformat}
> Exception in thread Thread[RequestResponseStage:19496,5,main] java.io.IOError: java.io.EOFException
> 	at org.apache.cassandra.service.AbstractRowResolver.preprocess(AbstractRowResolver.java:71)
> 	at org.apache.cassandra.service.ReadCallback.response(ReadCallback.java:155)
> 	at org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:45)
> 	at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.EOFException
> 	at java.io.DataInputStream.readFully(DataInputStream.java:180)
> 	at org.apache.cassandra.db.ReadResponseSerializer.deserialize(ReadResponse.java:100)
> 	at org.apache.cassandra.db.ReadResponseSerializer.deserialize(ReadResponse.java:81)
> 	at org.apache.cassandra.service.AbstractRowResolver.preprocess(AbstractRowResolver.java:64)
> 	... 6 more
> {noformat}
> As more 1.2.3 nodes were upgraded, the 1.2.3 nodes began logging the following for 1.1.9 node IPs:
> {noformat}
> Unable to store hint for host with missing ID, /10.37.62.71 (old node?)
> {noformat}
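The fix pulled into 1.2.6 here is "Fix ReadResponseSerializer.serializedSize() for digest reads": when a serializer's declared size disagrees with the bytes actually written, the receiver's read-exactly-N loop (the `DataInputStream.readFully` frame in the trace) runs off the end of the stream. A hypothetical Python illustration of that failure mode (not Cassandra's actual serializer):

```python
import io
import struct

def read_fully(stream, n):
    """Read exactly n bytes or raise, mirroring readFully's contract."""
    data = stream.read(n)
    if len(data) < n:
        raise EOFError("wanted %d bytes, got %d" % (n, len(data)))
    return data

payload = b"digest-bytes"                      # 12 bytes actually written
declared = len(payload) + 4                    # ...but 16 bytes declared,
                                               # modeling a serializedSize() bug
frame = struct.pack(">i", declared) + payload  # length prefix + short body

stream = io.BytesIO(frame)
size = struct.unpack(">i", read_fully(stream, 4))[0]
try:
    read_fully(stream, size)  # fails the same way the 1.1 reader did
except EOFError as exc:
    print(exc)
```

The mismatch only surfaces cross-version because only then does one side trust a size computed by the other side's buggy accounting.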
[jira] [Updated] (CASSANDRA-5694) Allow sstable2json to dump SecondaryIndex SSTables
[ https://issues.apache.org/jira/browse/CASSANDRA-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-5694:
    Fix Version/s: 1.2.6

Allow sstable2json to dump SecondaryIndex SSTables

    Key: CASSANDRA-5694
    URL: https://issues.apache.org/jira/browse/CASSANDRA-5694
    Project: Cassandra
    Issue Type: Improvement
    Components: Tools
    Reporter: Michał Michalski
    Assignee: Michał Michalski
    Priority: Minor
    Fix For: 1.2.6
    Attachments: sstable2json-for-2i-v1.txt, sstable2json-for-2i-v2.txt

When investigating some 2I problems I had, I was pretty disappointed that sstable2json does not let me dump a SecondaryIndex SSTable, saying "The provided column family is not part of this cassandra database: keyspace = testks, column family = testcf.testcf_col_idx". I'm attaching a patch that fixes the problem by slightly changing the way sstable2json validates the input file. My only concern is that this solution uses StorageService for validation, so it's a bit heavier than it should be, as it has to set everything up.
[1/4] Add auto paging capability to the native protocol
Updated Branches:
  refs/heads/trunk 40bc4456f -> e48ff2938

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e48ff293/src/java/org/apache/cassandra/transport/messages/QueryMessage.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/transport/messages/QueryMessage.java b/src/java/org/apache/cassandra/transport/messages/QueryMessage.java
index e334b02..860c404 100644
--- a/src/java/org/apache/cassandra/transport/messages/QueryMessage.java
+++ b/src/java/org/apache/cassandra/transport/messages/QueryMessage.java
@@ -22,6 +22,7 @@ import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
 import java.util.UUID;
+
 import com.google.common.collect.ImmutableMap;
 
 import org.jboss.netty.buffer.ChannelBuffer;
@@ -44,6 +45,7 @@ public class QueryMessage extends Message.Request
         {
             String query = CBUtil.readLongString(body);
             ConsistencyLevel consistency = CBUtil.readConsistencyLevel(body);
+            int resultPageSize = body.readInt();
             List<ByteBuffer> values;
             if (body.readable())
             {
@@ -56,10 +58,10 @@ public class QueryMessage extends Message.Request
                 values = Collections.emptyList();
             }
 
-            return new QueryMessage(query, values, consistency);
+            return new QueryMessage(query, values, consistency, resultPageSize);
         }
 
-        public ChannelBuffer encode(QueryMessage msg)
+        public ChannelBuffer encode(QueryMessage msg, int version)
         {
             // We have:
             //   - query
@@ -68,11 +70,13 @@ public class QueryMessage extends Message.Request
             //   - Number of values
             //   - The values
             int vs = msg.values.size();
-            CBUtil.BufferBuilder builder = new CBUtil.BufferBuilder(3, 0, vs);
+            CBUtil.BufferBuilder builder = new CBUtil.BufferBuilder(3 + (vs > 0 ? 1 : 0), 0, vs);
             builder.add(CBUtil.longStringToCB(msg.query));
             builder.add(CBUtil.consistencyLevelToCB(msg.consistency));
-            if (vs > 0 && msg.getVersion() > 1)
+            builder.add(CBUtil.intToCB(msg.resultPageSize));
+            if (vs > 0)
             {
+                assert version > 1 : "Version 1 of the protocol do not allow values";
                 builder.add(CBUtil.shortToCB(vs));
                 for (ByteBuffer value : msg.values)
                     builder.addValue(value);
@@ -83,30 +87,35 @@ public class QueryMessage extends Message.Request
     public final String query;
     public final ConsistencyLevel consistency;
+    public final int resultPageSize;
     public final List<ByteBuffer> values;
 
     public QueryMessage(String query, ConsistencyLevel consistency)
     {
-        this(query, Collections.<ByteBuffer>emptyList(), consistency);
+        this(query, Collections.<ByteBuffer>emptyList(), consistency, -1);
     }
 
-    public QueryMessage(String query, List<ByteBuffer> values, ConsistencyLevel consistency)
+    public QueryMessage(String query, List<ByteBuffer> values, ConsistencyLevel consistency, int resultPageSize)
     {
         super(Type.QUERY);
        this.query = query;
+        this.resultPageSize = resultPageSize;
        this.consistency = consistency;
        this.values = values;
     }
 
     public ChannelBuffer encode()
     {
-        return codec.encode(this);
+        return codec.encode(this, getVersion());
     }
 
     public Message.Response execute(QueryState state)
     {
         try
         {
+            if (resultPageSize == 0)
+                throw new ProtocolException("The page size cannot be 0");
+
             UUID tracingId = null;
             if (isTracingRequested())
             {
@@ -117,10 +126,16 @@ public class QueryMessage extends Message.Request
             if (state.traceNextQuery())
             {
                 state.createTracingSession();
-                Tracing.instance.begin("Execute CQL3 query", ImmutableMap.of("query", query));
+
+                ImmutableMap.Builder<String, String> builder = ImmutableMap.builder();
+                builder.put("query", query);
+                if (resultPageSize > 0)
+                    builder.put("page_size", Integer.toString(resultPageSize));
+
+                Tracing.instance.begin("Execute CQL3 query", builder.build());
             }
 
-            Message.Response response = QueryProcessor.process(query, values, consistency, state);
+            Message.Response response = QueryProcessor.process(query, values, consistency, state, resultPageSize);
             if (tracingId != null)
                 response.setTracingId(tracingId);
@@ -136,6 +151,9 @@ public class QueryMessage extends Message.Request
             finally
             {
                 Tracing.instance.stopSession();
+                // Trash the current session id if we won't need it anymore
+                if (!state.hasPager())
+
git commit: Nits
Updated Branches:
  refs/heads/trunk e48ff2938 -> 764620368

Nits

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/76462036
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/76462036
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/76462036

Branch: refs/heads/trunk
Commit: 76462036843b51f0e875154b2e150a7a52b0e84f
Parents: e48ff29
Author: Sylvain Lebresne sylv...@datastax.com
Authored: Tue Jun 25 10:33:27 2013 +0200
Committer: Sylvain Lebresne sylv...@datastax.com
Committed: Tue Jun 25 10:33:27 2013 +0200

----------------------------------------------------------------------
 .../cassandra/service/pager/AbstractQueryPager.java     |  1 -
 .../org/apache/cassandra/service/pager/QueryPagers.java | 12 ++--
 2 files changed, 6 insertions(+), 7 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/76462036/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java

diff --git a/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java b/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
index 460bc44..49a2c1e 100644
--- a/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
+++ b/src/java/org/apache/cassandra/service/pager/AbstractQueryPager.java
@@ -109,7 +109,6 @@ abstract class AbstractQueryPager implements QueryPager
     private List<Row> filterEmpty(List<Row> result)
     {
-        boolean doCopy = false;
         for (Row row : result)
         {
             if (row.cf == null || row.cf.getColumnCount() == 0)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/76462036/src/java/org/apache/cassandra/service/pager/QueryPagers.java

diff --git a/src/java/org/apache/cassandra/service/pager/QueryPagers.java b/src/java/org/apache/cassandra/service/pager/QueryPagers.java
index 9bc3afd..2920317 100644
--- a/src/java/org/apache/cassandra/service/pager/QueryPagers.java
+++ b/src/java/org/apache/cassandra/service/pager/QueryPagers.java
@@ -162,12 +162,12 @@ public class QueryPagers
      * Convenience method that count (live) cells/rows for a given slice of a row, but page underneath.
      */
     public static int countPaged(String keyspace,
-                                String columnFamily,
-                                ByteBuffer key,
-                                SliceQueryFilter filter,
-                                ConsistencyLevel consistencyLevel,
-                                final int pageSize,
-                                long now) throws RequestValidationException, RequestExecutionException
+                                 String columnFamily,
+                                 ByteBuffer key,
+                                 SliceQueryFilter filter,
+                                 ConsistencyLevel consistencyLevel,
+                                 final int pageSize,
+                                 long now) throws RequestValidationException, RequestExecutionException
     {
         SliceFromReadCommand command = new SliceFromReadCommand(keyspace, key, columnFamily, now, filter);
         final SliceQueryPager pager = new SliceQueryPager(command, consistencyLevel, false);
[jira] [Updated] (CASSANDRA-5681) Refactor IESCS in Snitches
[ https://issues.apache.org/jira/browse/CASSANDRA-5681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Brown updated CASSANDRA-5681:
    Attachment: 5681-v2_YFNTS.diff

[~jbellis] Reworking YFNTS to use the new ReconnectingSnitchHelper required a bit more of a refactor: it was not using the ApplicationState.INTERNAL_IP model (stored in the gossiper) like Ec2MRS, but instead maintained its own similar model. Since the work was a little more involved, can you review the YFNTS changes? Thanks.

Refactor IESCS in Snitches

    Key: CASSANDRA-5681
    URL: https://issues.apache.org/jira/browse/CASSANDRA-5681
    Project: Cassandra
    Issue Type: Improvement
    Components: Core
    Affects Versions: 1.2.5
    Reporter: Jason Brown
    Assignee: Jason Brown
    Priority: Minor
    Labels: snitch
    Fix For: 1.2.6, 2.0 beta 1
    Attachments: 5681-v1.diff, 5681-v2_YFNTS.diff

Reduce/refactor duplicated IESCS implementations in Ec2MRS, GPFS, and YFNTS.
[jira] [Commented] (CASSANDRA-5692) Race condition in detecting version on a mixed 1.1/1.2 cluster
[ https://issues.apache.org/jira/browse/CASSANDRA-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13693044#comment-13693044 ]

Jason Brown commented on CASSANDRA-5692: [~tourist], can you rebase against 1.2? I can't apply the patch to either 1.2 or trunk. Thanks.

Race condition in detecting version on a mixed 1.1/1.2 cluster

    Key: CASSANDRA-5692
    URL: https://issues.apache.org/jira/browse/CASSANDRA-5692
    Project: Cassandra
    Issue Type: Bug
    Affects Versions: 1.1.9, 1.2.5
    Reporter: Sergio Bossa
    Priority: Minor
    Attachments: 5692-0005.patch

On a mixed 1.1/1.2 cluster, starting 1.2 nodes sometimes triggers a race condition in version detection, where the 1.2 node wrongly detects version 6 for a 1.1 node. It works as follows:
1) The just-started 1.2 node quickly opens an OutboundTcpConnection toward a 1.1 node before receiving any messages from the latter.
2) Given that the version is correctly detected only when the first message is received, the version is momentarily set at 6.
3) This opens an OutboundTcpConnection from 1.2 to 1.1 at version 6, which gets stuck in the connect() method.
Later, the version is correctly fixed, but all outbound connections from 1.2 to 1.1 are stuck at this point. Evidence from 1.2 logs:
TRACE 13:48:31,133 Assuming current protocol version for /127.0.0.2
DEBUG 13:48:37,837 Setting version 5 for /127.0.0.2
[jira] [Updated] (CASSANDRA-5572) Write row markers when serializing columnfamilies and columns schema
[ https://issues.apache.org/jira/browse/CASSANDRA-5572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-5572: - Fix Version/s: 1.2.6 Write row markers when serializing columnfamilies and columns schema Key: CASSANDRA-5572 URL: https://issues.apache.org/jira/browse/CASSANDRA-5572 Project: Cassandra Issue Type: Bug Affects Versions: 1.2.4 Reporter: Aleksey Yeschenko Assignee: Aleksey Yeschenko Priority: Minor Fix For: 1.2.6 Attachments: 5572.txt ColumnDefinition.toSchema() and CFMetaData.toSchemaNoColumns() currently don't write the row markers, which leads to certain queries not returning the expected results, e.g. select keyspace_name, columnfamily_name from system.schema_columnfamilies where keyspace_name = 'system' and columnfamily_name = 'hints' - []. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-5699) Streaming (2.0) can deadlock
Sylvain Lebresne created CASSANDRA-5699:

    Summary: Streaming (2.0) can deadlock
    Key: CASSANDRA-5699
    URL: https://issues.apache.org/jira/browse/CASSANDRA-5699
    Project: Cassandra
    Issue Type: Bug
    Reporter: Sylvain Lebresne
    Assignee: Sylvain Lebresne
    Fix For: 2.0 beta 1

The new streaming implementation (CASSANDRA-5286) creates 2 threads per host for streaming: one for the incoming stream and one for the outgoing one. However, both currently share the same socket, and since we use synchronous I/O, a read can block a write. This can result in a deadlock if 2 nodes are both blocking on a read at the same time, thus blocking their respective writes (this is actually fairly easy to reproduce with a simple repair). So I'm attaching a patch that uses one socket per thread. The patch also corrects the stream throughput throttling calculation, which was 8000 times lower than it should be.
[jira] [Updated] (CASSANDRA-5699) Streaming (2.0) can deadlock
[ https://issues.apache.org/jira/browse/CASSANDRA-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-5699:
    Attachment: 5699.txt

Streaming (2.0) can deadlock

    Key: CASSANDRA-5699
    URL: https://issues.apache.org/jira/browse/CASSANDRA-5699
    Project: Cassandra
    Issue Type: Bug
    Reporter: Sylvain Lebresne
    Assignee: Sylvain Lebresne
    Fix For: 2.0 beta 1
    Attachments: 5699.txt

The new streaming implementation (CASSANDRA-5286) creates 2 threads per host for streaming: one for the incoming stream and one for the outgoing one. However, both currently share the same socket, and since we use synchronous I/O, a read can block a write. This can result in a deadlock if 2 nodes are both blocking on a read at the same time, thus blocking their respective writes (this is actually fairly easy to reproduce with a simple repair). So I'm attaching a patch that uses one socket per thread. The patch also corrects the stream throughput throttling calculation, which was 8000 times lower than it should be.
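The deadlock mechanism above follows directly from blocking I/O on a shared socket: if each peer is parked in a read, neither ever reaches its write. A minimal, illustrative sketch of the one-socket-per-direction idea (this is not Cassandra's streaming code; the loopback "peer" thread and message strings are invented for the demo): each side dedicates one connection to reading and one to writing, so a blocked read can never stall a write.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class Main {
    public static void main(String[] args) throws Exception {
        final ServerSocket server = new ServerSocket(0, 2, InetAddress.getLoopbackAddress());
        // The "remote peer": accepts two connections, reads on the first
        // and writes on the second. Each direction has its own socket, so
        // its blocking read cannot prevent its write from happening.
        Thread peer = new Thread(new Runnable() {
            public void run() {
                try {
                    Socket fromUs = server.accept(); // our outgoing stream
                    Socket toUs = server.accept();   // our incoming stream
                    BufferedReader r = new BufferedReader(new InputStreamReader(fromUs.getInputStream()));
                    r.readLine(); // blocks until the request arrives
                    PrintWriter w = new PrintWriter(toUs.getOutputStream(), true);
                    w.println("file-chunk");
                    fromUs.close();
                    toUs.close();
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        });
        peer.start();

        int port = server.getLocalPort();
        // Local side: a dedicated outgoing socket and a dedicated incoming one.
        Socket outgoing = new Socket(InetAddress.getLoopbackAddress(), port);
        Socket incoming = new Socket(InetAddress.getLoopbackAddress(), port);
        new PrintWriter(outgoing.getOutputStream(), true).println("request");
        BufferedReader r = new BufferedReader(new InputStreamReader(incoming.getInputStream()));
        System.out.println("received: " + r.readLine());
        peer.join();
        outgoing.close();
        incoming.close();
        server.close();
    }
}
```

With a single shared socket and synchronous I/O, the same exchange can wedge as soon as both sides happen to read first; splitting the directions removes that possibility entirely, which is the approach the attached patch takes.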
[jira] [Updated] (CASSANDRA-5691) Remove SimpleCondition
[ https://issues.apache.org/jira/browse/CASSANDRA-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Mazursky updated CASSANDRA-5691:
    Attachment: trunk-5691-v2.patch

Attaching v2 patch. Can you please explain your point of view? Why do you prefer SimpleCondition, which is not really a Condition, to CountDownLatch?

Remove SimpleCondition

    Key: CASSANDRA-5691
    URL: https://issues.apache.org/jira/browse/CASSANDRA-5691
    Project: Cassandra
    Issue Type: Bug
    Components: Core
    Reporter: Mikhail Mazursky
    Priority: Minor
    Attachments: trunk-5691.patch, trunk-5691-v2.patch

Problematic scenario:
1. two threads get blocked in SimpleCondition.await();
2. some thread calls SimpleCondition.signal();
3. one of the blocked threads wakes up and runs;
4. a spurious wakeup happens in the second thread, so it wakes up and runs too, even though nobody signaled it.
Thus this is a broken implementation of the Condition interface. Anyway, looking at how the code uses it, SimpleCondition can simply be replaced with CountDownLatch.
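The distinction the scenario above turns on: a `Condition` is allowed to return from `await()` spuriously, so callers must re-check a predicate in a loop, whereas a `CountDownLatch` only releases waiters once its count actually reaches zero. A minimal sketch of the proposed replacement pattern (illustrative only, not the patch itself):

```java
import java.util.concurrent.CountDownLatch;

public class Main {
    public static void main(String[] args) throws InterruptedException {
        // One-shot completion signal: every waiter on the latch is released
        // exactly when countDown() drops the count to zero, never before.
        final CountDownLatch done = new CountDownLatch(1);

        Thread worker = new Thread(new Runnable() {
            public void run() {
                // ... do the actual work here ...
                done.countDown(); // signal completion exactly once
            }
        });
        worker.start();

        // Unlike Condition.await(), this cannot return early on a spurious
        // wakeup, so no guarding predicate loop is needed.
        done.await();
        worker.join();
        System.out.println("latch released");
    }
}
```

Multiple waiters can all call `await()` on the same latch and none of them can observe a "wakeup" that nobody signaled, which is exactly the property the broken SimpleCondition lacks.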
[jira] [Commented] (CASSANDRA-5591) Windows failure renaming LCS json.
[ https://issues.apache.org/jira/browse/CASSANDRA-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13693156#comment-13693156 ]

Steve Peters commented on CASSANDRA-5591: Windows file systems behave as if updates to the directory entries are done asynchronously. If you delete a file and then immediately create or rename a file with the same name as the deleted file, you can randomly get errors saying that you are trying to create or rename a file that already exists. Of course, when you open the directory to look at it, everything will be fine, since the OS will have caught up.

Windows failure renaming LCS json.

    Key: CASSANDRA-5591
    URL: https://issues.apache.org/jira/browse/CASSANDRA-5591
    Project: Cassandra
    Issue Type: Bug
    Components: Core
    Affects Versions: 1.2.4
    Environment: Windows
    Reporter: Jeremiah Jordan

Had someone report that on Windows, under load, the LCS json file sometimes fails to be renamed. {noformat} ERROR [CompactionExecutor:1] 2013-05-23 14:43:55,848 CassandraDaemon.java (line 174) Exception in thread Thread[CompactionExecutor:1,1,main] java.lang.RuntimeException: Failed to rename C:\development\tools\DataStax Community\data\data\zzz\zzz\zzz.json to C:\development\tools\DataStax Community\data\data\zzz\zzz\zzz-old.json at org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:133) at org.apache.cassandra.db.compaction.LeveledManifest.serialize(LeveledManifest.java:617) at org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:229) at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:155) at org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:410) at org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:223) at org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:991) at
org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:230) at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58) at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60) at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:188) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[1/3] Redesign repair messages
Updated Branches:
  refs/heads/trunk 764620368 -> eb4fa4a62

http://git-wip-us.apache.org/repos/asf/cassandra/blob/eb4fa4a6/test/unit/org/apache/cassandra/service/AntiEntropyServiceTestAbstract.java
----------------------------------------------------------------------
diff --git a/test/unit/org/apache/cassandra/service/AntiEntropyServiceTestAbstract.java b/test/unit/org/apache/cassandra/service/AntiEntropyServiceTestAbstract.java
index c930cc3..8905830 100644
--- a/test/unit/org/apache/cassandra/service/AntiEntropyServiceTestAbstract.java
+++ b/test/unit/org/apache/cassandra/service/AntiEntropyServiceTestAbstract.java
@@ -30,27 +30,23 @@ import org.junit.Before;
 import org.junit.Test;
 
 import org.apache.cassandra.SchemaLoader;
-import org.apache.cassandra.Util;
 import org.apache.cassandra.concurrent.Stage;
 import org.apache.cassandra.concurrent.StageManager;
 import org.apache.cassandra.config.DatabaseDescriptor;
-import org.apache.cassandra.config.Schema;
-import org.apache.cassandra.db.*;
-import org.apache.cassandra.db.compaction.PrecompactedRow;
-import org.apache.cassandra.dht.IPartitioner;
+import org.apache.cassandra.db.ColumnFamilyStore;
+import org.apache.cassandra.db.IMutation;
+import org.apache.cassandra.db.Table;
 import org.apache.cassandra.dht.Range;
 import org.apache.cassandra.dht.Token;
 import org.apache.cassandra.gms.Gossiper;
 import org.apache.cassandra.locator.AbstractReplicationStrategy;
 import org.apache.cassandra.locator.TokenMetadata;
 import org.apache.cassandra.net.MessagingService;
-import org.apache.cassandra.utils.ByteBufferUtil;
+import org.apache.cassandra.repair.RepairJobDesc;
 import org.apache.cassandra.utils.FBUtilities;
-import org.apache.cassandra.utils.MerkleTree;
 
 import static org.apache.cassandra.service.ActiveRepairService.*;
 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
 
 public abstract class AntiEntropyServiceTestAbstract extends SchemaLoader
 {
@@ -59,7 +55,7 @@ public abstract class AntiEntropyServiceTestAbstract extends SchemaLoader
     public String tablename;
     public String cfname;
-    public TreeRequest request;
+    public RepairJobDesc desc;
     public ColumnFamilyStore store;
     public InetAddress LOCAL, REMOTE;
@@ -107,11 +103,9 @@ public abstract class AntiEntropyServiceTestAbstract extends SchemaLoader
         local_range = StorageService.instance.getPrimaryRangesForEndpoint(tablename, LOCAL).iterator().next();
 
-        // (we use REMOTE instead of LOCAL so that the reponses for the validator.complete() get lost)
-        int gcBefore = store.gcBefore(System.currentTimeMillis());
-        request = new TreeRequest(UUID.randomUUID().toString(), REMOTE, local_range, gcBefore, new CFPair(tablename, cfname));
+        desc = new RepairJobDesc(UUID.randomUUID(), tablename, cfname, local_range);
         // Set a fake session corresponding to this fake request
-        ActiveRepairService.instance.submitArtificialRepairSession(request, tablename, cfname);
+        ActiveRepairService.instance.submitArtificialRepairSession(desc);
     }
 
     @After
@@ -121,51 +115,6 @@ public abstract class AntiEntropyServiceTestAbstract extends SchemaLoader
     }
 
     @Test
-    public void testValidatorPrepare() throws Throwable
-    {
-        Validator validator;
-
-        // write
-        Util.writeColumnFamily(getWriteData());
-
-        // sample
-        validator = new Validator(request);
-        validator.prepare(store);
-
-        // and confirm that the tree was split
-        assertTrue(validator.tree.size() > 1);
-    }
-
-    @Test
-    public void testValidatorComplete() throws Throwable
-    {
-        Validator validator = new Validator(request);
-        validator.prepare(store);
-        validator.completeTree();
-
-        // confirm that the tree was validated
-        Token min = validator.tree.partitioner().getMinimumToken();
-        assert validator.tree.hash(new Range<Token>(min, min)) != null;
-    }
-
-    @Test
-    public void testValidatorAdd() throws Throwable
-    {
-        Validator validator = new Validator(request);
-        IPartitioner part = validator.tree.partitioner();
-        Token mid = part.midpoint(local_range.left, local_range.right);
-        validator.prepare(store);
-
-        // add a row
-        validator.add(new PrecompactedRow(new DecoratedKey(mid, ByteBufferUtil.bytes("inconceivable!")),
-                                          TreeMapBackedSortedColumns.factory.create(Schema.instance.getCFMetaData(tablename, cfname))));
-        validator.completeTree();
-
-        // confirm that the tree was validated
-        assert validator.tree.hash(local_range) != null;
-    }
-
-    @Test
     public void testGetNeighborsPlusOne() throws Throwable
     {
         // generate rf+1 nodes, and ensure that all nodes are returned
@@ -253,44 +202,6 @@ public abstract class AntiEntropyServiceTestAbstract extends SchemaLoader
git commit: Changelog fix
Updated Branches: refs/heads/trunk eb4fa4a62 - ddc288bcd Changelog fix Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ddc288bc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ddc288bc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ddc288bc Branch: refs/heads/trunk Commit: ddc288bcd7a64fa6dec9eec181470eedcbbf38eb Parents: eb4fa4a Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Jun 25 19:02:55 2013 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Jun 25 19:02:55 2013 +0200 -- CHANGES.txt | 1 + 1 file changed, 1 insertion(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/ddc288bc/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 2c43d57..c8cb019 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -63,6 +63,7 @@ * Streaming 2.0 (CASSANDRA-5286) * Conditional create/drop ks/table/index statements in CQL3 (CASSANDRA-2737) * more pre-table creation property validation (CASSANDRA-5693) + * Redesign repair messages (CASSANDRA-5426) 1.2.7
[jira] [Commented] (CASSANDRA-5591) Windows failure renaming LCS json.
[ https://issues.apache.org/jira/browse/CASSANDRA-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13693165#comment-13693165 ] Marcus Eriksson commented on CASSANDRA-5591: [~steve_p] what is the recommended way to do it on windows? simple sleep/retry? Windows failure renaming LCS json. -- Key: CASSANDRA-5591 URL: https://issues.apache.org/jira/browse/CASSANDRA-5591 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.4 Environment: Windows Reporter: Jeremiah Jordan Had someone report that on Windows, under load, the LCS json file sometimes fails to be renamed. {noformat} ERROR [CompactionExecutor:1] 2013-05-23 14:43:55,848 CassandraDaemon.java (line 174) Exception in thread Thread[CompactionExecutor:1,1,main] java.lang.RuntimeException: Failed to rename C:\development\tools\DataStax Community\data\data\zzz\zzz\zzz.json to C:\development\tools\DataStax Community\data\data\zzz\zzz\zzz-old.json at org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:133) at org.apache.cassandra.db.compaction.LeveledManifest.serialize(LeveledManifest.java:617) at org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:229) at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:155) at org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:410) at org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:223) at org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:991) at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:230) at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58) at 
org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60) at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:188) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
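A minimal sketch of the sleep/retry approach asked about above, for riding out Windows' lagging directory-entry updates. This is illustrative only, not the patch: the method name `renameWithRetry` and the attempt/sleep parameters are invented for the example, and the era-appropriate `File.renameTo` is used (which simply returns false on failure rather than throwing).

```java
import java.io.File;

public class Main {
    // Retry the rename a few times, sleeping between attempts so a
    // transiently-stale directory entry has time to catch up.
    static boolean renameWithRetry(File from, File to, int attempts, long sleepMs)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            if (from.renameTo(to))
                return true;
            Thread.sleep(sleepMs); // give the filesystem time to settle
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the LCS manifest file; a temp file keeps the demo self-contained.
        File from = File.createTempFile("manifest", ".json");
        File to = new File(from.getParent(), from.getName() + "-old");
        to.delete(); // make sure the target name is free, as the real code expects

        boolean ok = renameWithRetry(from, to, 5, 100);
        System.out.println(ok ? "renamed" : "failed");
        to.delete(); // clean up
    }
}
```

The alternative mentioned later in the thread, adding randomness to the target filename so a just-deleted name is never immediately reused, avoids the race entirely but only works when the name is not fixed.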
[jira] [Updated] (CASSANDRA-5692) Race condition in detecting version on a mixed 1.1/1.2 cluster
[ https://issues.apache.org/jira/browse/CASSANDRA-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Bossa updated CASSANDRA-5692: Attachment: 5692-0006.patch Rebased patch on top of cassandra-1.2 branch. Race condition in detecting version on a mixed 1.1/1.2 cluster -- Key: CASSANDRA-5692 URL: https://issues.apache.org/jira/browse/CASSANDRA-5692 Project: Cassandra Issue Type: Bug Affects Versions: 1.1.9, 1.2.5 Reporter: Sergio Bossa Priority: Minor Attachments: 5692-0005.patch, 5692-0006.patch On a mixed 1.1 / 1.2 cluster, starting 1.2 nodes fires sometimes a race condition in version detection, where the 1.2 node wrongly detects version 6 for a 1.1 node. It works as follows: 1) The just started 1.2 node quickly opens an OutboundTcpConnection toward a 1.1 node before receiving any messages from the latter. 2) Given the version is correctly detected only when the first message is received, the version is momentarily set at 6. 3) This opens an OutboundTcpConnection from 1.2 to 1.1 at version 6, which gets stuck in the connect() method. Later, the version is correctly fixed, but all outbound connections from 1.2 to 1.1 are stuck at this point. Evidence from 1.2 logs: TRACE 13:48:31,133 Assuming current protocol version for /127.0.0.2 DEBUG 13:48:37,837 Setting version 5 for /127.0.0.2 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5489) Fix 2.0 key and column aliases serialization and cqlsh DESC SCHEMA
[ https://issues.apache.org/jira/browse/CASSANDRA-5489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-5489:
    Attachment: 5489.txt

I had forgotten this. Anyway, attaching a simple patch for trunk that 1) adds a sanity check to validate that we have either all or no aliases, and 2) writes an empty json list in the case where we have none.

Fix 2.0 key and column aliases serialization and cqlsh DESC SCHEMA

    Key: CASSANDRA-5489
    URL: https://issues.apache.org/jira/browse/CASSANDRA-5489
    Project: Cassandra
    Issue Type: Bug
    Components: Core, Tools
    Affects Versions: 2.0 beta 1
    Reporter: Aleksey Yeschenko
    Assignee: Sylvain Lebresne
    Priority: Minor
    Fix For: 2.0 beta 1
    Attachments: 5489-1.2.txt, 5489.txt, 5489.txt

CASSANDRA-5125 made a slight change to how key_aliases and column_aliases are serialized in schema. Prior to that we never kept nulls in the json pseudo-lists. This does break cqlsh and probably breaks 1.2 nodes receiving such migrations as well. The patch reverts this behavior and also slightly modifies cqlsh itself to ignore non-regular columns from the system.schema_columns table. This patch breaks nothing, since 2.0 already handles 1.2 non-null-padded alias lists.
git commit: Update messages for serializations tests
Updated Branches: refs/heads/trunk ddc288bcd - a607ee077 Update messages for serializations tests Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a607ee07 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a607ee07 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a607ee07 Branch: refs/heads/trunk Commit: a607ee0773f3b078471c923b0276353cccab0cb6 Parents: ddc288b Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Jun 25 19:31:56 2013 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Jun 25 19:31:56 2013 +0200 -- .../serialization/2.0/db.RangeSliceCommand.bin | Bin 801 - 801 bytes test/data/serialization/2.0/db.Row.bin | Bin 587 - 587 bytes test/data/serialization/2.0/db.RowMutation.bin | Bin 3599 - 3599 bytes .../serialization/2.0/gms.EndpointState.bin | Bin 73 - 73 bytes .../2.0/service.ValidationComplete.bin | Bin 1063 - 1279 bytes .../serialization/2.0/utils.BloomFilter.bin | Bin 2500016 - 2500016 bytes 6 files changed, 0 insertions(+), 0 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a607ee07/test/data/serialization/2.0/db.RangeSliceCommand.bin -- diff --git a/test/data/serialization/2.0/db.RangeSliceCommand.bin b/test/data/serialization/2.0/db.RangeSliceCommand.bin index 81a0c02..099e429 100644 Binary files a/test/data/serialization/2.0/db.RangeSliceCommand.bin and b/test/data/serialization/2.0/db.RangeSliceCommand.bin differ http://git-wip-us.apache.org/repos/asf/cassandra/blob/a607ee07/test/data/serialization/2.0/db.Row.bin -- diff --git a/test/data/serialization/2.0/db.Row.bin b/test/data/serialization/2.0/db.Row.bin index bfc671d..c699448 100644 Binary files a/test/data/serialization/2.0/db.Row.bin and b/test/data/serialization/2.0/db.Row.bin differ http://git-wip-us.apache.org/repos/asf/cassandra/blob/a607ee07/test/data/serialization/2.0/db.RowMutation.bin -- diff --git 
a/test/data/serialization/2.0/db.RowMutation.bin b/test/data/serialization/2.0/db.RowMutation.bin index a659ecd..c9fcc67 100644 Binary files a/test/data/serialization/2.0/db.RowMutation.bin and b/test/data/serialization/2.0/db.RowMutation.bin differ http://git-wip-us.apache.org/repos/asf/cassandra/blob/a607ee07/test/data/serialization/2.0/gms.EndpointState.bin -- diff --git a/test/data/serialization/2.0/gms.EndpointState.bin b/test/data/serialization/2.0/gms.EndpointState.bin index ffbb00d..cd89893 100644 Binary files a/test/data/serialization/2.0/gms.EndpointState.bin and b/test/data/serialization/2.0/gms.EndpointState.bin differ http://git-wip-us.apache.org/repos/asf/cassandra/blob/a607ee07/test/data/serialization/2.0/service.ValidationComplete.bin -- diff --git a/test/data/serialization/2.0/service.ValidationComplete.bin b/test/data/serialization/2.0/service.ValidationComplete.bin index bc633bc..0c8d7be 100644 Binary files a/test/data/serialization/2.0/service.ValidationComplete.bin and b/test/data/serialization/2.0/service.ValidationComplete.bin differ http://git-wip-us.apache.org/repos/asf/cassandra/blob/a607ee07/test/data/serialization/2.0/utils.BloomFilter.bin -- diff --git a/test/data/serialization/2.0/utils.BloomFilter.bin b/test/data/serialization/2.0/utils.BloomFilter.bin index 63e561a..12f72f5 100644 Binary files a/test/data/serialization/2.0/utils.BloomFilter.bin and b/test/data/serialization/2.0/utils.BloomFilter.bin differ
[jira] [Commented] (CASSANDRA-5591) Windows failure renaming LCS json.
[ https://issues.apache.org/jira/browse/CASSANDRA-5591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13693238#comment-13693238 ] Steve Peters commented on CASSANDRA-5591: - There are a few ways around it. I encountered this frequently in test suites that created and deleted a file with the same file name repeatedly. Adding some randomness to the filename worked in that case. If this file can't really have its name changed at all, I would go with a sleep/retry. Windows failure renaming LCS json. -- Key: CASSANDRA-5591 URL: https://issues.apache.org/jira/browse/CASSANDRA-5591 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.4 Environment: Windows Reporter: Jeremiah Jordan Had someone report that on Windows, under load, the LCS json file sometimes fails to be renamed. {noformat} ERROR [CompactionExecutor:1] 2013-05-23 14:43:55,848 CassandraDaemon.java (line 174) Exception in thread Thread[CompactionExecutor:1,1,main] java.lang.RuntimeException: Failed to rename C:\development\tools\DataStax Community\data\data\zzz\zzz\zzz.json to C:\development\tools\DataStax Community\data\data\zzz\zzz\zzz-old.json at org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:133) at org.apache.cassandra.db.compaction.LeveledManifest.serialize(LeveledManifest.java:617) at org.apache.cassandra.db.compaction.LeveledManifest.promote(LeveledManifest.java:229) at org.apache.cassandra.db.compaction.LeveledCompactionStrategy.handleNotification(LeveledCompactionStrategy.java:155) at org.apache.cassandra.db.DataTracker.notifySSTablesChanged(DataTracker.java:410) at org.apache.cassandra.db.DataTracker.replaceCompactedSSTables(DataTracker.java:223) at org.apache.cassandra.db.ColumnFamilyStore.replaceCompactedSSTables(ColumnFamilyStore.java:991) at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:230) at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:58) at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60) at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:188) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) at java.lang.Thread.run(Thread.java:662) {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
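The sleep/retry workaround suggested in the comment above might look roughly like the following sketch. This is not the actual FileUtils.renameWithConfirm implementation; the helper name and the retry parameters are made up for illustration. The idea is that a Windows rename can fail transiently while another handle on the file is still open, so a few short retries often succeed.

```java
import java.io.File;
import java.io.IOException;

public class RetryRename
{
    // Hypothetical helper: retry a rename a few times with a short sleep
    // between attempts before giving up, to ride out transient Windows
    // file-handle contention.
    public static void renameWithRetry(File from, File to, int attempts, long sleepMillis) throws IOException
    {
        for (int i = 1; i <= attempts; i++)
        {
            if (from.renameTo(to))
                return; // rename succeeded
            if (i == attempts)
                throw new IOException("Failed to rename " + from + " to " + to + " after " + attempts + " attempts");
            try
            {
                Thread.sleep(sleepMillis);
            }
            catch (InterruptedException e)
            {
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while retrying rename", e);
            }
        }
    }
}
```

Adding randomness to the filename, as also suggested above, sidesteps the problem entirely when the name is not significant.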
[jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
[ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13693254#comment-13693254 ] Jonathan Ellis commented on CASSANDRA-5661: --- This has bitten someone in production now... LCS w/ 4000 sstables, using 2GB of heap (16k CRAR buffers). Discard pooled readers for cold data Key: CASSANDRA-5661 URL: https://issues.apache.org/jira/browse/CASSANDRA-5661 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.1 Reporter: Jonathan Ellis Assignee: Pavel Yaskevich Fix For: 1.2.7 Reader pooling was introduced in CASSANDRA-4942 but pooled RandomAccessReaders are never cleaned up until the SSTableReader is closed. So memory use is the worst case simultaneous RAR we had open for this file, forever. We should introduce a global limit on how much memory to use for RAR, and evict old ones. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
[ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13693270#comment-13693270 ] Pavel Yaskevich commented on CASSANDRA-5661: I'm starting to think that it would just be easier for everybody to start using mmap buffers for compressed data all the time, before we make caching too complex. I will start working in that direction to see if there is any promise in it. Discard pooled readers for cold data Key: CASSANDRA-5661 URL: https://issues.apache.org/jira/browse/CASSANDRA-5661 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.1 Reporter: Jonathan Ellis Assignee: Pavel Yaskevich Fix For: 1.2.7 Reader pooling was introduced in CASSANDRA-4942 but pooled RandomAccessReaders are never cleaned up until the SSTableReader is closed. So memory use is the worst case simultaneous RAR we had open for this file, forever. We should introduce a global limit on how much memory to use for RAR, and evict old ones. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
[ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13693275#comment-13693275 ] Jeremiah Jordan commented on CASSANDRA-5661: Saw some issues caused by these never getting cleared out. On a cluster using LCS CompressedRandomAccessReader objects are using 2 GB of heap space per node. Discard pooled readers for cold data Key: CASSANDRA-5661 URL: https://issues.apache.org/jira/browse/CASSANDRA-5661 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.1 Reporter: Jonathan Ellis Assignee: Pavel Yaskevich Fix For: 1.2.7 Reader pooling was introduced in CASSANDRA-4942 but pooled RandomAccessReaders are never cleaned up until the SSTableReader is closed. So memory use is the worst case simultaneous RAR we had open for this file, forever. We should introduce a global limit on how much memory to use for RAR, and evict old ones. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5665) Gossiper.handleMajorStateChange can lose existing node ApplicationState
[ https://issues.apache.org/jira/browse/CASSANDRA-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Brown updated CASSANDRA-5665: --- Fix Version/s: (was: 1.2.7) (was: 2.0 beta 1) Gossiper.handleMajorStateChange can lose existing node ApplicationState --- Key: CASSANDRA-5665 URL: https://issues.apache.org/jira/browse/CASSANDRA-5665 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.5 Reporter: Jason Brown Assignee: Jason Brown Priority: Minor Labels: gossip, upgrade Attachments: 5665-v1.diff, 5665-v2.diff Dovetailing on CASSANDRA-5660, I discovered that further along during an upgrade, when more nodes are on the new major version, a node on the previous version can get passed some incomplete gossip info about another, already upgraded node, and the older node drops AppState info about that node. I think what happens is that a 1.1 node (older rev) gets gossip info from a 1.2 node (A), which includes incomplete (lacking some AppState data) gossip info about another 1.2 node (B). The 1.1 node, which has incorrectly kicked node B out of gossip due to the bug described in #5660, then takes that incomplete node B info and wholesale replaces any previously known state about node B in Gossiper.handleMajorStateChange. Thus, if we previously had DC/RACK info, it'll get dropped as part of the endpointStateMap.put(endpointstate). When the data being passed is incomplete, 1.1 will start referencing node B and get into the NPE situation in #5498. Anecdotally, this bad state is short-lived, less than a few minutes, even as short as ten seconds, until gossip catches up and properly propagates the AppState data. Furthermore, when upgrading a two-datacenter, 48-node cluster, it only occurred on two nodes for less than a minute each. Thus, the scope seems limited but can occur. -- This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5665) Gossiper.handleMajorStateChange can lose existing node ApplicationState
[ https://issues.apache.org/jira/browse/CASSANDRA-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13693284#comment-13693284 ] Jason Brown commented on CASSANDRA-5665: After discussion on IRC, decided to put the brakes on this one, as it seems like we're getting into dangerous territory mucking with Endpoint's AppState like that. I'll revisit this ticket again in the future to see if it's still a problem. Gossiper.handleMajorStateChange can lose existing node ApplicationState --- Key: CASSANDRA-5665 URL: https://issues.apache.org/jira/browse/CASSANDRA-5665 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.5 Reporter: Jason Brown Assignee: Jason Brown Priority: Minor Labels: gossip, upgrade Attachments: 5665-v1.diff, 5665-v2.diff Dovetailing on CASSANDRA-5660, I discovered that further along during an upgrade, when more nodes are on the new major version, a node on the previous version can get passed some incomplete gossip info about another, already upgraded node, and the older node drops AppState info about that node. I think what happens is that a 1.1 node (older rev) gets gossip info from a 1.2 node (A), which includes incomplete (lacking some AppState data) gossip info about another 1.2 node (B). The 1.1 node, which has incorrectly kicked node B out of gossip due to the bug described in #5660, then takes that incomplete node B info and wholesale replaces any previously known state about node B in Gossiper.handleMajorStateChange. Thus, if we previously had DC/RACK info, it'll get dropped as part of the endpointStateMap.put(endpointstate). When the data being passed is incomplete, 1.1 will start referencing node B and get into the NPE situation in #5498. Anecdotally, this bad state is short-lived, less than a few minutes, even as short as ten seconds, until gossip catches up and properly propagates the AppState data.
Furthermore, when upgrading a two datacenter, 48 node cluster, it only occurred on two nodes for less than a minute each. Thus, the scope seems limited but can occur. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
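The replace-versus-merge distinction at the heart of the ticket above can be illustrated with a toy sketch. The class and method names here are hypothetical, not the actual Gossiper code: a wholesale put of the remote endpoint state (as in handleMajorStateChange) loses application states such as DC/RACK that an incomplete update did not carry, while a merge preserves locally known entries the update is silent about.

```java
import java.util.HashMap;
import java.util.Map;

public class StateMerge
{
    // Mirrors the wholesale endpointStateMap.put(...): everything local is
    // discarded, so states absent from the (possibly incomplete) remote
    // update simply vanish.
    public static Map<String, String> replaceWholesale(Map<String, String> local, Map<String, String> remote)
    {
        return new HashMap<>(remote);
    }

    // A merge keeps local entries and lets remote entries win where present,
    // so states the remote update did not mention survive.
    public static Map<String, String> merge(Map<String, String> local, Map<String, String> remote)
    {
        Map<String, String> merged = new HashMap<>(local);
        merged.putAll(remote);
        return merged;
    }
}
```

In the real system the values are versioned, so a proper merge would also compare versions before letting a remote entry win; this sketch only shows why wholesale replacement drops state.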
[3/3] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Conflicts: CHANGES.txt Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2c271e2c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2c271e2c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2c271e2c Branch: refs/heads/trunk Commit: 2c271e2c0d957b6ebfcea4dcd70f3c8b8c6cf0cd Parents: a607ee0 a6ca5d4 Author: Brandon Williams brandonwilli...@apache.org Authored: Tue Jun 25 14:18:53 2013 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Tue Jun 25 14:18:53 2013 -0500 -- CHANGES.txt | 2 + src/java/org/apache/cassandra/gms/Gossiper.java | 46 +--- 2 files changed, 23 insertions(+), 25 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c271e2c/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2c271e2c/src/java/org/apache/cassandra/gms/Gossiper.java -- diff --cc src/java/org/apache/cassandra/gms/Gossiper.java index 46f19f5,efa9865..a74da02 --- a/src/java/org/apache/cassandra/gms/Gossiper.java +++ b/src/java/org/apache/cassandra/gms/Gossiper.java @@@ -874,28 -871,20 +874,20 @@@ public class Gossiper implements IFailu if (logger.isTraceEnabled()) logger.trace("Updating heartbeat state generation to " + remoteGeneration + " from " + localGeneration + " for " + ep); // major state change will handle the update by inserting the remote state directly - copyNewerApplicationStates(remoteState, localEpStatePtr); handleMajorStateChange(ep, remoteState); } -else if ( remoteGeneration == localGeneration ) // generation has not changed, apply new states +else if (remoteGeneration == localGeneration) // generation has not changed, apply new states { /* find maximum state */ int localMaxVersion = getMaxEndpointStateVersion(localEpStatePtr); int remoteMaxVersion = getMaxEndpointStateVersion(remoteState); -if ( remoteMaxVersion > localMaxVersion ) +if (remoteMaxVersion > localMaxVersion) { - if (logger.isTraceEnabled()) - { - logger.trace("Updating heartbeat state version to " + remoteState.getHeartBeatState().getHeartBeatVersion() + - " from " + localEpStatePtr.getHeartBeatState().getHeartBeatVersion() + " for " + ep); - } - localEpStatePtr.setHeartBeatState(remoteState.getHeartBeatState()); - Map<ApplicationState, VersionedValue> merged = copyNewerApplicationStates(localEpStatePtr, remoteState); - for (Entry<ApplicationState, VersionedValue> appState : merged.entrySet()) - doNotifications(ep, appState.getKey(), appState.getValue()); + // apply states, but do not notify since there is no major change + applyNewStates(ep, localEpStatePtr, remoteState); } else if (logger.isTraceEnabled()) - logger.trace("Ignoring remote version " + remoteMaxVersion + " <= " + localMaxVersion + " for " + ep); + logger.trace("Ignoring remote version " + remoteMaxVersion + " <= " + localMaxVersion + " for " + ep); if (!localEpStatePtr.isAlive() && !isDeadState(localEpStatePtr)) // unless of course, it was dead markAlive(ep, localEpStatePtr); }
[1/3] git commit: Revert #5665 (b7e13b89c265c28acfb624a984b97a06a837c3ea) due to tests failures
Updated Branches: refs/heads/trunk a607ee077 -> 2c271e2c0 Revert #5665 (b7e13b89c265c28acfb624a984b97a06a837c3ea) due to tests failures Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2d90eb65 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2d90eb65 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2d90eb65 Branch: refs/heads/trunk Commit: 2d90eb65bffd5787ff77403a4f3bc05605cfcd5a Parents: 81619fe Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Jun 25 09:56:14 2013 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Jun 25 09:56:14 2013 +0200 -- src/java/org/apache/cassandra/gms/Gossiper.java | 46 +--- 1 file changed, 21 insertions(+), 25 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2d90eb65/src/java/org/apache/cassandra/gms/Gossiper.java -- diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java b/src/java/org/apache/cassandra/gms/Gossiper.java index b629824..efa9865 100644 --- a/src/java/org/apache/cassandra/gms/Gossiper.java +++ b/src/java/org/apache/cassandra/gms/Gossiper.java @@ -871,7 +871,6 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean if (logger.isTraceEnabled()) logger.trace("Updating heartbeat state generation to " + remoteGeneration + " from " + localGeneration + " for " + ep); // major state change will handle the update by inserting the remote state directly -copyNewerApplicationStates(remoteState, localEpStatePtr); handleMajorStateChange(ep, remoteState); } else if ( remoteGeneration == localGeneration ) // generation has not changed, apply new states @@ -881,18 +880,11 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean int remoteMaxVersion = getMaxEndpointStateVersion(remoteState); if ( remoteMaxVersion > localMaxVersion ) { -if (logger.isTraceEnabled()) -{ -logger.trace("Updating heartbeat state version to " + remoteState.getHeartBeatState().getHeartBeatVersion() + - " from " + localEpStatePtr.getHeartBeatState().getHeartBeatVersion() + " for " + ep); -} - localEpStatePtr.setHeartBeatState(remoteState.getHeartBeatState()); -Map<ApplicationState, VersionedValue> merged = copyNewerApplicationStates(localEpStatePtr, remoteState); -for (Entry<ApplicationState, VersionedValue> appState : merged.entrySet()) -doNotifications(ep, appState.getKey(), appState.getValue()); +// apply states, but do not notify since there is no major change +applyNewStates(ep, localEpStatePtr, remoteState); } else if (logger.isTraceEnabled()) -logger.trace("Ignoring remote version " + remoteMaxVersion + " <= " + localMaxVersion + " for " + ep); +logger.trace("Ignoring remote version " + remoteMaxVersion + " <= " + localMaxVersion + " for " + ep); if (!localEpStatePtr.isAlive() && !isDeadState(localEpStatePtr)) // unless of course, it was dead markAlive(ep, localEpStatePtr); } @@ -911,24 +903,28 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean } } -private Map<ApplicationState, VersionedValue> copyNewerApplicationStates(EndpointState toState, EndpointState fromState) +private void applyNewStates(InetAddress addr, EndpointState localState, EndpointState remoteState) { -Map<ApplicationState, VersionedValue> merged = new HashMap<ApplicationState, VersionedValue>(); -for (Entry<ApplicationState, VersionedValue> fromEntry : fromState.getApplicationStateMap().entrySet()) +// don't assert here, since if the node restarts the version will go back to zero +int oldVersion = localState.getHeartBeatState().getHeartBeatVersion(); + +localState.setHeartBeatState(remoteState.getHeartBeatState()); +if (logger.isTraceEnabled()) +logger.trace("Updating heartbeat state version to " + localState.getHeartBeatState().getHeartBeatVersion() + " from " + oldVersion + " for " + addr + " ..."); + +// we need to make two loops here, one to apply, then another to notify, this way all states in an update are present and current when the notifications are received
[2/3] git commit: Changelog update
Changelog update Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a6ca5d49 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a6ca5d49 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a6ca5d49 Branch: refs/heads/trunk Commit: a6ca5d496facf79c187310e81b3eeba3e6bc4b43 Parents: 2d90eb6 Author: Sylvain Lebresne sylv...@datastax.com Authored: Tue Jun 25 09:58:10 2013 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Tue Jun 25 09:58:10 2013 +0200 -- CHANGES.txt | 7 ++- 1 file changed, 2 insertions(+), 5 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a6ca5d49/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index be0c1d0..2931916 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,8 +1,3 @@ -1.2.7 - * Fix ReadResponseSerializer.serializedSize() for digest reads (CASSANDRA-5476) - * allow sstable2json on 2i CFs (CASSANDRA-5694) - - 1.2.6 * Fix tracing when operation completes before all responses arrive (CASSANDRA-5668) * Fix cross-DC mutation forwarding (CASSANDRA-5632) @@ -39,6 +34,8 @@ * Gossiper incorrectly drops AppState for an upgrading node (CASSANDRA-5660) * Connection thrashing during multi-region ec2 during upgrade, due to messaging version (CASSANDRA-5669) * Avoid over reconnecting in EC2MRS (CASSANDRA-5678) + * Fix ReadResponseSerializer.serializedSize() for digest reads (CASSANDRA-5476) + * allow sstable2json on 2i CFs (CASSANDRA-5694) Merged from 1.1: * Remove buggy thrift max message length option (CASSANDRA-5529) * Fix NPE in Pig's widerow mode (CASSANDRA-5488)
[jira] [Updated] (CASSANDRA-5661) Discard pooled readers for cold data
[ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cathy Daw updated CASSANDRA-5661: - Tester: dmeyer Discard pooled readers for cold data Key: CASSANDRA-5661 URL: https://issues.apache.org/jira/browse/CASSANDRA-5661 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.1 Reporter: Jonathan Ellis Assignee: Pavel Yaskevich Fix For: 1.2.7 Reader pooling was introduced in CASSANDRA-4942 but pooled RandomAccessReaders are never cleaned up until the SSTableReader is closed. So memory use is the worst case simultaneous RAR we had open for this file, forever. We should introduce a global limit on how much memory to use for RAR, and evict old ones. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5234) Table created through CQL3 are not accessble to Pig 0.10
[ https://issues.apache.org/jira/browse/CASSANDRA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13693385#comment-13693385 ] Alex Liu commented on CASSANDRA-5234: - The failed test is where there is filter for COUNT(columns) {code} -- filter to fully visible rows (no uuid columns) and dump visible = FILTER rows BY COUNT(columns) == 0; dump visible; {code} Table created through CQL3 are not accessble to Pig 0.10 Key: CASSANDRA-5234 URL: https://issues.apache.org/jira/browse/CASSANDRA-5234 Project: Cassandra Issue Type: Bug Components: Hadoop Affects Versions: 1.2.1 Environment: Red hat linux 5 Reporter: Shamim Ahmed Assignee: Alex Liu Fix For: 1.2.6 Attachments: 5234-1-1.2-patch.txt, 5234-1.2-patch.txt, 5234-2-1.2branch.txt, 5234.tx Hi, i have faced a bug when creating table through CQL3 and trying to load data through pig 0.10 as follows: java.lang.RuntimeException: Column family 'abc' not found in keyspace 'XYZ' at org.apache.cassandra.hadoop.pig.CassandraStorage.initSchema(CassandraStorage.java:1112) at org.apache.cassandra.hadoop.pig.CassandraStorage.setLocation(CassandraStorage.java:615). This effects from Simple table to table with compound key. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[3/3] git commit: Ninja-fix native proto spec typo (v2)
Ninja-fix native proto spec typo (v2) Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3455f1b7 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3455f1b7 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3455f1b7 Branch: refs/heads/trunk Commit: 3455f1b76dfcfe61d973033a8042f827b2bc90f0 Parents: 8db42fe Author: Aleksey Yeschenko alek...@apache.org Authored: Wed Jun 26 00:10:57 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Wed Jun 26 00:10:57 2013 +0300 -- doc/native_protocol_v2.spec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3455f1b7/doc/native_protocol_v2.spec -- diff --git a/doc/native_protocol_v2.spec b/doc/native_protocol_v2.spec index 2cc771d..e0ac541 100644 --- a/doc/native_protocol_v2.spec +++ b/doc/native_protocol_v2.spec @@ -301,7 +301,7 @@ Table of Contents Prepare a query for later execution (through EXECUTE). The body consists of the CQL query to prepare as a [long string]. - The server will respond with a RESULT message with a `prepared` kind (0x3, + The server will respond with a RESULT message with a `prepared` kind (0x0004, see Section 4.2.5).
[1/3] git commit: Ninja-fix native proto spec typo
Updated Branches: refs/heads/trunk 2c271e2c0 -> 3455f1b76 Ninja-fix native proto spec typo Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/40112ec9 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/40112ec9 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/40112ec9 Branch: refs/heads/trunk Commit: 40112ec953298de894a57d27895202488b3b5c3d Parents: a6ca5d4 Author: Aleksey Yeschenko alek...@apache.org Authored: Wed Jun 26 00:09:41 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Wed Jun 26 00:09:41 2013 +0300 -- doc/native_protocol.spec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/40112ec9/doc/native_protocol.spec -- diff --git a/doc/native_protocol.spec b/doc/native_protocol.spec index fb709e3..b7a1de5 100644 --- a/doc/native_protocol.spec +++ b/doc/native_protocol.spec @@ -273,7 +273,7 @@ Table of Contents Prepare a query for later execution (through EXECUTE). The body consists of the CQL query to prepare as a [long string]. - The server will respond with a RESULT message with a `prepared` kind (0x3, + The server will respond with a RESULT message with a `prepared` kind (0x0004, see Section 4.2.5).
[2/3] git commit: Merge branch 'cassandra-1.2' into trunk
Merge branch 'cassandra-1.2' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8db42fed Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8db42fed Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8db42fed Branch: refs/heads/trunk Commit: 8db42fed091c3b62d77c648c4f3666e443a68c48 Parents: 2c271e2 40112ec Author: Aleksey Yeschenko alek...@apache.org Authored: Wed Jun 26 00:10:26 2013 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Wed Jun 26 00:10:26 2013 +0300 -- doc/native_protocol_v1.spec | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/8db42fed/doc/native_protocol_v1.spec -- diff --cc doc/native_protocol_v1.spec index fb709e3,000..b7a1de5 mode 100644,00..100644 --- a/doc/native_protocol_v1.spec +++ b/doc/native_protocol_v1.spec @@@ -1,635 -1,0 +1,635 @@@ + + CQL BINARY PROTOCOL v1 + + +Table of Contents + + 1. Overview + 2. Frame header +2.1. version +2.2. flags +2.3. stream +2.4. opcode +2.5. length + 3. Notations + 4. Messages +4.1. Requests + 4.1.1. STARTUP + 4.1.2. CREDENTIALS + 4.1.3. OPTIONS + 4.1.4. QUERY + 4.1.5. PREPARE + 4.1.6. EXECUTE + 4.1.7. REGISTER +4.2. Responses + 4.2.1. ERROR + 4.2.2. READY + 4.2.3. AUTHENTICATE + 4.2.4. SUPPORTED + 4.2.5. RESULT +4.2.5.1. Void +4.2.5.2. Rows +4.2.5.3. Set_keyspace +4.2.5.4. Prepared +4.2.5.5. Schema_change + 4.2.6. EVENT + 5. Compression + 6. Collection types + 7. Error codes + + +1. Overview + + The CQL binary protocol is a frame based protocol. Frames are defined as: + + 0 8162432 + +-+-+-+-+ + | version | flags | stream | opcode | + +-+-+-+-+ + |length | + +-+-+-+-+ + | | + .... body ... . + . . + . . + + + + The protocol is big-endian (network byte order). + + Each frame contains a fixed size header (8 bytes) followed by a variable size + body. The header is described in Section 2. 
The content of the body depends + on the header opcode value (the body can in particular be empty for some + opcode values). The list of allowed opcode is defined Section 2.3 and the + details of each corresponding message is described Section 4. + + The protocol distinguishes 2 types of frames: requests and responses. Requests + are those frame sent by the clients to the server, response are the ones sent + by the server. Note however that while communication are initiated by the + client with the server responding to request, the protocol may likely add + server pushes in the future, so responses does not obligatory come right after + a client request. + + Note to client implementors: clients library should always assume that the + body of a given frame may contain more data than what is described in this + document. It will however always be safe to ignore the remaining of the frame + body in such cases. The reason is that this may allow to sometimes extend the + protocol with optional features without needing to change the protocol + version. + + +2. Frame header + +2.1. version + + The version is a single byte that indicate both the direction of the message + (request or response) and the version of the protocol in use. The up-most bit + of version is used to define the direction of the message: 0 indicates a + request, 1 indicates a responses. This can be useful for protocol analyzers to + distinguish the nature of the packet from the direction which it is moving. + The rest of that byte is the protocol version (1 for the protocol defined in + this document). In other words, for this version of the protocol, version will + have one of: +0x01Request frame for this protocol version +0x81Response frame for this protocol version + + +2.2. flags + + Flags applying to this frame. The flags have the following meaning (described + by the mask that allow to select them): +0x01: Compression flag. If set, the frame body is compressed. 
The actual + compression to use should have been set up beforehand through the + Startup message (which thus
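The fixed 8-byte frame header described in the spec excerpt above (one byte each for version, flags, stream and opcode, then a big-endian 4-byte body length, with the top bit of version giving the request/response direction) can be decoded with a short sketch. The class and field names are illustrative, not from any client library.

```java
import java.nio.ByteBuffer;

// Toy decoder for the CQL binary protocol v1 frame header: follows the
// layout quoted in the spec diff above, big-endian as stated there.
public class FrameHeader
{
    final boolean isResponse; // top bit of byte 0: 0 = request, 1 = response
    final int version;        // low 7 bits of byte 0 (1 for this protocol)
    final int flags;
    final int stream;
    final int opcode;
    final long bodyLength;    // unsigned 32-bit body length

    FrameHeader(byte[] header)
    {
        ByteBuffer buf = ByteBuffer.wrap(header); // ByteBuffer is big-endian by default
        byte b0 = buf.get();
        isResponse = (b0 & 0x80) != 0;
        version = b0 & 0x7F;
        flags = buf.get() & 0xFF;
        stream = buf.get() & 0xFF;
        opcode = buf.get() & 0xFF;
        bodyLength = buf.getInt() & 0xFFFFFFFFL;
    }
}
```

Per the spec's note to implementors, a client using such a decoder should still be prepared for the body to contain more data than it knows how to interpret.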
[jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
[ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13693406#comment-13693406 ] Jonathan Ellis commented on CASSANDRA-5661: --- bq. I'm starting to think that it would be just easier for everybody to just start using mmap buffers for compressed buffers all the time Good idea, but probably too big a change for 1.2.7? Discard pooled readers for cold data Key: CASSANDRA-5661 URL: https://issues.apache.org/jira/browse/CASSANDRA-5661 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.1 Reporter: Jonathan Ellis Assignee: Pavel Yaskevich Fix For: 1.2.7 Reader pooling was introduced in CASSANDRA-4942 but pooled RandomAccessReaders are never cleaned up until the SSTableReader is closed. So memory use is the worst case simultaneous RAR we had open for this file, forever. We should introduce a global limit on how much memory to use for RAR, and evict old ones. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
[ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13693419#comment-13693419 ]

Pavel Yaskevich commented on CASSANDRA-5661:
--------------------------------------------

As a brain dump, I see 3 ways to resolve this:

1. Introduce a total memory limit on the queue and evict based on that, but there is no guarantee that we won't be evicting the wrong instances.

2. Introduce a liveness window per instance (e.g. 20-30 seconds) before the evictor considers it old. That solves the problem with #1, but we are relying on the eviction thread to be robust and run constantly; any delay in such manual GC could result in the same memory bloat described in the issue.

3. Remove caching and go with mmap'ed segments instead. The problem there is that we would need to create a direct byte buffer every time we decompress data, and I'm not sure it can be GC'ed reliably. For example, if a particular JVM implementation only cleans up such buffers on CMS or full GC, the process can effectively OOM, because we are actually trying to avoid any significant GC activity as much as possible.

I like #2 the most. Thoughts?
[jira] [Updated] (CASSANDRA-5234) Tables created through CQL3 are not accessible to Pig 0.10
[ https://issues.apache.org/jira/browse/CASSANDRA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alex Liu updated CASSANDRA-5234:
--------------------------------

    Attachment: 5234-3-1.2branch.txt

> Tables created through CQL3 are not accessible to Pig 0.10
> ----------------------------------------------------------
>
>                 Key: CASSANDRA-5234
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5234
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Hadoop
>    Affects Versions: 1.2.1
>         Environment: Red Hat Linux 5
>            Reporter: Shamim Ahmed
>            Assignee: Alex Liu
>             Fix For: 1.2.6
>         Attachments: 5234-1-1.2-patch.txt, 5234-1.2-patch.txt, 5234-2-1.2branch.txt, 5234-3-1.2branch.txt, 5234.tx
>
> Hi, I have faced a bug when creating a table through CQL3 and trying to load data through Pig 0.10, as follows:
> java.lang.RuntimeException: Column family 'abc' not found in keyspace 'XYZ'
>   at org.apache.cassandra.hadoop.pig.CassandraStorage.initSchema(CassandraStorage.java:1112)
>   at org.apache.cassandra.hadoop.pig.CassandraStorage.setLocation(CassandraStorage.java:615)
> This affects everything from simple tables to tables with compound keys.
[jira] [Comment Edited] (CASSANDRA-5234) Tables created through CQL3 are not accessible to Pig 0.10
[ https://issues.apache.org/jira/browse/CASSANDRA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13693444#comment-13693444 ]

Alex Liu edited comment on CASSANDRA-5234 at 6/25/13 10:21 PM:
---------------------------------------------------------------

It turns out to be a wide-row schema issue, which could be an error from a previous version. 5234-3-1.2branch.txt is attached to fix the failure when running examples/pig/test.

was (Author: alexliu68):
It turns out to be a wide-row schema issue, which could be an error from a previous version. 5234-2-1.2branch.txt is attached to fix the failure when running examples/pig/test.
[jira] [Commented] (CASSANDRA-5234) Tables created through CQL3 are not accessible to Pig 0.10
[ https://issues.apache.org/jira/browse/CASSANDRA-5234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13693444#comment-13693444 ]

Alex Liu commented on CASSANDRA-5234:
-------------------------------------

It turns out to be a wide-row schema issue, which could be an error from a previous version. 5234-2-1.2branch.txt is attached to fix the failure when running examples/pig/test.
[jira] [Updated] (CASSANDRA-4495) Don't tie client side use of AbstractType to JDBC
[ https://issues.apache.org/jira/browse/CASSANDRA-4495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Carl Yeksigian updated CASSANDRA-4495:
--------------------------------------

    Attachment: 4495-v3.patch

I like that name a lot more. Attached an updated version which renames to *Serializer, and renames the methods to (de)serialize.

> Don't tie client side use of AbstractType to JDBC
> -------------------------------------------------
>
>                 Key: CASSANDRA-4495
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4495
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Sylvain Lebresne
>            Assignee: Carl Yeksigian
>            Priority: Minor
>             Fix For: 2.0
>         Attachments: 4495.patch, 4495-v2.patch, 4495-v3.patch
>
> We currently expose AbstractType to Java clients that want to reuse them through the cql.jdbc.* classes. I think this shouldn't be tied to the JDBC standard. JDBC was made for SQL databases, which Cassandra is not (CQL is not SQL and never will be). Typically, there is a fair amount of the JDBC standard that cannot be implemented with C*, and there are a number of C* specifics that are not in JDBC (typically the set and map collections). So I propose extracting simple type classes with just a compose and decompose method (but without the ties to JDBC, i.e. without the JDBC-specific methods those types have), for the purpose of exporting them in a separate jar for clients (we could put them in an org.apache.cassandra.type package, for instance). We could then deprecate the jdbc classes on basically the same schedule as CQL2. Let me note that this is *not* saying there shouldn't be a JDBC driver for Cassandra.
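The split CASSANDRA-4495 proposes, a type codec with nothing but (de)serialization and no JDBC baggage, can be pictured with a minimal sketch. This is illustrative only; the class name is hypothetical and not taken from the actual 4495-v3.patch:

```python
class TextSerializer:
    """Hypothetical stand-in for one of the *Serializer classes:
    a codec for the CQL text type, exposing only (de)serialization."""

    def serialize(self, value: str) -> bytes:
        # text values travel on the wire as UTF-8 bytes
        return value.encode("utf-8")

    def deserialize(self, data: bytes) -> str:
        return data.decode("utf-8")
```

A client jar of such classes carries no JDBC interfaces at all, so it can be shipped and versioned independently of any JDBC driver.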
[jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
[ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13693478#comment-13693478 ]

Jonathan Ellis commented on CASSANDRA-5661:
-------------------------------------------

bq. Introduce queue total memory limit and evict based on that, but there is no guarantee that we won't be evicting incorrect instances.

I don't follow -- surely LRU is a better heuristic than "idle for N seconds" as in #2?
[jira] [Created] (CASSANDRA-5700) compact storage value restriction message confusing
Dave Brosius created CASSANDRA-5700:
------------------------------------

             Summary: compact storage value restriction message confusing
                 Key: CASSANDRA-5700
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5700
             Project: Cassandra
          Issue Type: Improvement
          Components: Core
    Affects Versions: 1.2.5
            Reporter: Dave Brosius
            Assignee: Dave Brosius
            Priority: Trivial
             Fix For: 1.2.7
         Attachments: 5700.txt

I have a compact storage value column (user) with the same name as another column family (user), and was getting the error "Restricting the value of a compact CF (user) is not supported", which was very confusing. Changed the message to "Restricting the value (user) of a compact CF is not supported". (tackling the big problems)
[jira] [Updated] (CASSANDRA-5700) compact storage value restriction message confusing
[ https://issues.apache.org/jira/browse/CASSANDRA-5700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dave Brosius updated CASSANDRA-5700:
------------------------------------

    Attachment: 5700.txt
[jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
[ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13693556#comment-13693556 ]

Pavel Yaskevich commented on CASSANDRA-5661:
--------------------------------------------

LRU is more of a replacement strategy for when you have distinct objects and a fixed upper limit (number of items, memory), but here we deal with mostly duplicate objects and states where the number of items hardly ever overgrows the queue, so the problem is not handling replacement but rather expiring items after some period of idling. I was thinking of doing something like an ExpiringQueue with one timer (or a set of timers), similar to the Guava Cache. That solves the problem of [C]RAR instances being stuck in the cache for long periods of time even when the read pattern has changed and they are not useful anymore.
[jira] [Commented] (CASSANDRA-5700) compact storage value restriction message confusing
[ https://issues.apache.org/jira/browse/CASSANDRA-5700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13693599#comment-13693599 ]

Jonathan Ellis commented on CASSANDRA-5700:
-------------------------------------------

I'm a little confused; what is a column restriction?
[jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
[ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13693600#comment-13693600 ]

Jonathan Ellis commented on CASSANDRA-5661:
-------------------------------------------

It still looks like an LRU problem to me: I have room for N objects; when I try to allocate N+1, I throw away the LRU. This means we will use more memory than a timer approach if N is actually more than we need, but it will do better than timers if we are crunched for space, which seems like the more important scenario to optimize. It also means that we have a very clear upper bound on memory use, rather than depending on workload, which history shows is tough for users to tune.
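The LRU scheme described in the comment above can be sketched as follows. This is illustrative only, not Cassandra code: it bounds the pool by a count of readers rather than the byte limit the ticket asks for, and all names are hypothetical:

```python
from collections import OrderedDict

class LRUReaderPool:
    """Sketch of a bounded reader pool: room for N pooled readers;
    recycling reader N+1 evicts the least recently used one."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.readers = OrderedDict()  # file path -> pooled reader

    def recycle(self, path, reader):
        """Return a reader to the pool, evicting the LRU entry if full."""
        self.readers[path] = reader
        self.readers.move_to_end(path)        # mark as most recently used
        if len(self.readers) > self.capacity:
            self.readers.popitem(last=False)  # evict the LRU reader

    def acquire(self, path):
        """Reuse a pooled reader for this file, or None if none is pooled."""
        return self.readers.pop(path, None)
```

The design point being argued: the capacity gives a hard upper bound on pooled memory regardless of workload, at the cost of evictions (and re-opens) when the working set exceeds it.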
[jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
[ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13693612#comment-13693612 ]

Pavel Yaskevich commented on CASSANDRA-5661:
--------------------------------------------

It seems like we are trying to address different problems: what is in the description of the ticket and what Jeremiah pointed out. Let me describe what I'm trying to solve: when we read from multiple SSTables for a while and then the pattern changes and load switches to a different subset of SSTables, the previous [C]RAR instances are returned to the appropriate queues and are stuck there until each SSTable is deallocated (by compaction), which creates memory pressure on stale workloads or when compaction is running behind. LRU could solve that problem when we have a limit on the total amount of memory we can use, but it would start kicking in only after we reach that limit, and it would create jitter in the queue and in processing latencies. What I propose adds minimal bookkeeping overhead per queue and expires items quicker and more precisely than LRU. Also, I'm not really worried about the max number of items in the queue per SSTable, as it's organically limited to the number of concurrent readers.
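The alternative argued in this thread, a per-SSTable queue whose idle entries expire after a liveness window (the ExpiringQueue idea, in the spirit of the Guava Cache comparison), might look roughly like the sketch below. The names and implementation are hypothetical; the thread never specifies the real design:

```python
import time
from collections import deque

class ExpiringQueue:
    """Sketch of time-based eviction: pooled readers idle for longer
    than `ttl` seconds are discarded instead of being reused."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self.entries = deque()  # (returned_at, reader), oldest first

    def put(self, reader, now=None):
        """Return a reader to the queue, stamping its return time."""
        now = time.monotonic() if now is None else now
        self.entries.append((now, reader))

    def expire(self, now=None):
        """Drop and return readers idle past the liveness window.
        Meant to be called periodically by an eviction thread."""
        now = time.monotonic() if now is None else now
        evicted = []
        while self.entries and now - self.entries[0][0] > self.ttl:
            evicted.append(self.entries.popleft()[1])
        return evicted
```

Since the queue is ordered by return time, `expire` only inspects the head, which is the "minimal bookkeeping per queue" trade-off; the flip side, as noted above, is that it depends on the eviction thread actually running on schedule.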
[jira] [Commented] (CASSANDRA-5700) compact storage value restriction message confusing
[ https://issues.apache.org/jira/browse/CASSANDRA-5700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13693618#comment-13693618 ]

Dave Brosius commented on CASSANDRA-5700:
-----------------------------------------

I believe it just means that when you are using compact storage, you are not allowed to use the 'value' column in the WHERE clause in CQL (yet, at least).
[jira] [Commented] (CASSANDRA-5661) Discard pooled readers for cold data
[ https://issues.apache.org/jira/browse/CASSANDRA-5661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13693631#comment-13693631 ]

Jonathan Ellis commented on CASSANDRA-5661:
-------------------------------------------

LCS makes (number of sstables x concurrent readers) a problem.