[jira] [Commented] (CASSANDRA-11030) utf-8 characters incorrectly displayed/inserted on cqlsh on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124620#comment-15124620 ] Paulo Motta commented on CASSANDRA-11030: - ps: always registering cp65001 as utf-8 alias on py<3.3 on windows for simplicity. committer: 2.2 patch merges cleanly upwards. > utf-8 characters incorrectly displayed/inserted on cqlsh on Windows > --- > > Key: CASSANDRA-11030 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11030 > Project: Cassandra > Issue Type: Bug >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > Labels: cqlsh, windows > > {noformat} > C:\Users\Paulo\Repositories\cassandra [2.2-10948 +6 ~1 -0 !]> .\bin\cqlsh.bat > --encoding utf-8 > Connected to test at 127.0.0.1:9042. > [cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4] > Use HELP for help. > cqlsh> INSERT INTO bla.test (bla ) VALUES ('não') ; > cqlsh> select * from bla.test; > bla > - > n?o > (1 rows) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
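The cp65001 aliasing mentioned above can be sketched in plain Python (illustrative only — {{register_cp65001}} is a made-up helper name, not the patch's actual code):

```python
import codecs

def register_cp65001():
    # On Python < 3.3, the Windows console code page 65001 has no codec
    # entry, so lookups of 'cp65001' raise LookupError; register a search
    # function that maps the name to the built-in utf-8 codec instead.
    try:
        codecs.lookup('cp65001')
    except LookupError:
        codecs.register(
            lambda name: codecs.lookup('utf-8') if name == 'cp65001' else None)

register_cp65001()
```

After registration, text encoded or decoded as 'cp65001' simply goes through the utf-8 codec, so characters like 'não' survive the round trip.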
[jira] [Commented] (CASSANDRA-11073) Cannot contact other nodes on Windows 7 ccm
[ https://issues.apache.org/jira/browse/CASSANDRA-11073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124591#comment-15124591 ] Paulo Motta commented on CASSANDRA-11073: - Thanks for testing this [~jkni]! It would be nice to test on some other win7 box, and if the problem is present I propose we enable the {{setOutboundBindAny}} flag on Windows 7 ccm hosts to avoid this. I tried investigating but didn't reach a conclusion, and I don't think it's worth investing much time on this as it seems like a ccm-on-windows-specific problem. > Cannot contact other nodes on Windows 7 ccm > --- > > Key: CASSANDRA-11073 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11073 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging > Environment: windows 7 >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > Labels: windows > > Before CASSANDRA-9309 was fixed the {{OutboundTcpConnectionPool}} did not > bind the client socket to a specific ip/port, so the Windows kernel always > picked {{127.0.0.1:random_port}} as client socket address for ccm nodes, > regardless of the {{listen_address}} value. > After fixing CASSANDRA-9309 the {{OutboundTcpConnectionPool}} now binds > outgoing client sockets to {{listen_address:random_port}}. > So any ccm cluster with more than one node will bind client sockets to > {{127.0.0.n}} where n is the node id. > However, the nodes cannot contact each other because connections remain in > the {{SYN_SENT}} state on Windows 7, as shown by netstats: > {noformat} > TCP 127.0.0.2:50908 127.0.0.1:7000 SYN_SENT > {noformat} > This bug is preventing the execution of dtests on Windows 7, and was also > experienced by [~Stefania]. > I suspect it's a configuration/environment problem, but firewall and group > policies are disabled. 
The funny thing is that it does not happen on cassci, > but afaik there are no Windows 7 nodes there > Commenting [this > line|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/net/OutboundTcpConnectionPool.java#L139] > fixes the issue, but it's definitely not a solution. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
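The binding change at the heart of this bug can be sketched in Python (illustrative only — the real code is the Java in {{OutboundTcpConnectionPool}}, and {{connect_from}} is a made-up name):

```python
import socket

def connect_from(listen_address, remote_addr, timeout=5.0):
    # Post-CASSANDRA-9309 behavior: bind the outgoing client socket to the
    # node's listen_address (with an ephemeral port) before connecting,
    # instead of letting the kernel pick 127.0.0.1. On the affected
    # Windows 7 setups it is the connect() below that stalls in SYN_SENT.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    s.bind((listen_address, 0))
    s.connect(remote_addr)
    return s
```

On a healthy stack this completes normally even when the source address differs from the kernel's default choice, which is why the pre-9309 behavior masked the environment problem.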
[jira] [Comment Edited] (CASSANDRA-10938) test_bulk_round_trip_blogposts is failing occasionally
[ https://issues.apache.org/jira/browse/CASSANDRA-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124582#comment-15124582 ] Paulo Motta edited comment on CASSANDRA-10938 at 1/30/16 1:27 AM: -- Couldn't reproduce either the first or the second problem with JFR enabled. Maybe it's some rare environmental/consistency problem, so if we can't reproduce it easily I propose we commit as is and observe behavior on CI. Will trigger a few more runs to see if it happens again. was (Author: pauloricardomg): Couldn't reproduce the first or the second problems with JFR enabled. Maybe it's some rare environmental/consistency problem, so If we can't reproduce easily I propose we commit as is and observe behavior on CI. Will trigger running a few more runs to see if it happens again. > test_bulk_round_trip_blogposts is failing occasionally > -- > > Key: CASSANDRA-10938 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10938 > Project: Cassandra > Issue Type: Sub-task > Components: Tools >Reporter: Stefania >Assignee: Stefania > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 6452.nps, 6452.png, 7300.nps, 7300a.png, 7300b.png, > node1_debug.log, node2_debug.log, node3_debug.log, recording_127.0.0.1.jfr > > > We get timeouts occasionally that cause the number of records to be incorrect: > http://cassci.datastax.com/job/trunk_dtest/858/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10938) test_bulk_round_trip_blogposts is failing occasionally
[ https://issues.apache.org/jira/browse/CASSANDRA-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124582#comment-15124582 ] Paulo Motta commented on CASSANDRA-10938: - Couldn't reproduce either the first or the second problem with JFR enabled. Maybe it's some rare environmental/consistency problem, so if we can't reproduce it easily I propose we commit as is and observe behavior on CI. Will trigger a few more runs to see if it happens again. > test_bulk_round_trip_blogposts is failing occasionally > -- > > Key: CASSANDRA-10938 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10938 > Project: Cassandra > Issue Type: Sub-task > Components: Tools >Reporter: Stefania >Assignee: Stefania > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 6452.nps, 6452.png, 7300.nps, 7300a.png, 7300b.png, > node1_debug.log, node2_debug.log, node3_debug.log, recording_127.0.0.1.jfr > > > We get timeouts occasionally that cause the number of records to be incorrect: > http://cassci.datastax.com/job/trunk_dtest/858/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11030) utf-8 characters incorrectly displayed/inserted on cqlsh on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124576#comment-15124576 ] Paulo Motta commented on CASSANDRA-11030: - bq. I think we need to move the codec registration further up, I just stumbled into this: Done, also fixed commit message, thanks! bq. but it fails on my newer laptop running 2.7.11 Tested the new version on 2.7.10 as well as 2.7.11, and it works. Could you try this new version with {{chcp 65001}} before launching cqlsh and {{--encoding utf8}}? If it works like this but doesn't work without {{--encoding utf8}}, I think it will depend on whether the system default encoding supports utf-8 characters, so I think we can leave it like this, as there is a simple workaround. Resubmitted tests, please mark as ready to commit if you're satisfied. > utf-8 characters incorrectly displayed/inserted on cqlsh on Windows > --- > > Key: CASSANDRA-11030 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11030 > Project: Cassandra > Issue Type: Bug >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > Labels: cqlsh, windows > > {noformat} > C:\Users\Paulo\Repositories\cassandra [2.2-10948 +6 ~1 -0 !]> .\bin\cqlsh.bat > --encoding utf-8 > Connected to test at 127.0.0.1:9042. > [cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4] > Use HELP for help. > cqlsh> INSERT INTO bla.test (bla ) VALUES ('não') ; > cqlsh> select * from bla.test; > bla > - > n?o > (1 rows) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-11030) utf-8 characters incorrectly displayed/inserted on cqlsh on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124576#comment-15124576 ] Paulo Motta edited comment on CASSANDRA-11030 at 1/30/16 1:22 AM: -- bq. I think we need to move the codec registration further up, I just stumbled into this: Done, also fixed commit message and some other minor style nits, thanks! bq. but it fails on my newer laptop running 2.7.11 Tested the new version on 2.7.10 as well as 2.7.11, and it works. Could you try this new version with {{chcp 65001}} before launching cqlsh and {{--encoding utf8}}? If it works like this but doesn't work without {{--encoding utf8}}, I think it will depend on whether the system default encoding supports utf-8 characters, so I think we can leave it like this, as there is a simple workaround. Resubmitted tests, please mark as ready to commit if you're satisfied. was (Author: pauloricardomg): bq. I think we need to move the codec registration further up, I just stumbled into this: Done, also fixed commit message, thanks! bq. but it fails on my newer laptop running 2.7.11 Tested new version on 2.7.10 as well as 2.7.11, and it works. Could you try this new version with {{chcp 65001}} before launching cqlsh and {{--encoding utf8}} ? If it works like this but doesn't work without {{--encoding utf8}}, I think it will be dependent if the system default encoding supports or not utf-8 characters, so I think we can leave it like this as there is a simple workaround. Resubmitted tests, please mark as ready to commit if you're satisfied. 
> utf-8 characters incorrectly displayed/inserted on cqlsh on Windows > --- > > Key: CASSANDRA-11030 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11030 > Project: Cassandra > Issue Type: Bug >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > Labels: cqlsh, windows > > {noformat} > C:\Users\Paulo\Repositories\cassandra [2.2-10948 +6 ~1 -0 !]> .\bin\cqlsh.bat > --encoding utf-8 > Connected to test at 127.0.0.1:9042. > [cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4] > Use HELP for help. > cqlsh> INSERT INTO bla.test (bla ) VALUES ('não') ; > cqlsh> select * from bla.test; > bla > - > n?o > (1 rows) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10428) cqlsh: Include sub-second precision in timestamps by default
[ https://issues.apache.org/jira/browse/CASSANDRA-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-10428: Assignee: Stefania (was: Paulo Motta) > cqlsh: Include sub-second precision in timestamps by default > > > Key: CASSANDRA-10428 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10428 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: OSX 10.10.2 >Reporter: Chandran Anjur Narasimhan >Assignee: Stefania > Labels: cqlsh > Fix For: 3.x > > > Query with >= timestamp works. But the exact timestamp value is not working. > {noformat} > NCHAN-M-D0LZ:bin nchan$ ./cqlsh > Connected to CCC Multi-Region Cassandra Cluster at :. > [cqlsh 5.0.1 | Cassandra 2.1.7 | CQL spec 3.2.0 | Native protocol v3] > Use HELP for help. > cqlsh> > {noformat} > {panel:title=Schema|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE} > cqlsh:ccc> desc COLUMNFAMILY ez_task_result ; > CREATE TABLE ccc.ez_task_result ( > submissionid text, > ezid text, > name text, > time timestamp, > analyzed_index_root text, > ... > ... 
> PRIMARY KEY (submissionid, ezid, name, time) > {panel} > {panel:title=Working|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE} > cqlsh:ccc> select submissionid, ezid, name, time, state, status, > translated_criteria_status from ez_task_result where > submissionid='760dd154670811e58c04005056bb6ff0' and > ezid='760dd6de670811e594fc005056bb6ff0' and name='run-sanities' and > time>='2015-09-29 20:54:23-0700'; > submissionid | ezid | name > | time | state | status | > translated_criteria_status > --+--+--+--+---+-+ > 760dd154670811e58c04005056bb6ff0 | 760dd6de670811e594fc005056bb6ff0 | > run-sanities | 2015-09-29 20:54:23-0700 | EXECUTING | IN_PROGRESS | > run-sanities started > (1 rows) > cqlsh:ccc> > {panel} > {panel:title=Not > working|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE} > cqlsh:ccc> select submissionid, ezid, name, time, state, status, > translated_criteria_status from ez_task_result where > submissionid='760dd154670811e58c04005056bb6ff0' and > ezid='760dd6de670811e594fc005056bb6ff0' and name='run-sanities' and > time='2015-09-29 20:54:23-0700'; > submissionid | ezid | name | time | analyzed_index_root | analyzed_log_path > | clientid | end_time | jenkins_path | log_file_path | path_available | > path_to_task | required_for_overall_status | start_time | state | status | > translated_criteria_status | type > --+--+--+--+-+---+--+--+--+---++--+-++---+++-- > (0 rows) > cqlsh:ccc> > {panel} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
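The failure above follows from display formatting: the stored timestamp carries sub-second precision that cqlsh's default format drops, so the displayed string no longer round-trips to the stored value. A plain-Python sketch of the round-trip problem (the millisecond value here is illustrative):

```python
from datetime import datetime

# A value stored with milliseconds...
stored = datetime(2015, 9, 29, 20, 54, 23, 123000)

# ...is displayed without them by the default format, so pasting the
# displayed string into an equality WHERE clause matches a different instant.
displayed = stored.strftime('%Y-%m-%d %H:%M:%S')

# Including sub-second precision by default makes the display round-trip
# ('%f' yields microseconds; trim to milliseconds).
precise = stored.strftime('%Y-%m-%d %H:%M:%S.%f')[:-3]
```

Here `displayed` is '2015-09-29 20:54:23' while `precise` is '2015-09-29 20:54:23.123', which is why the >= query matches but the exact = query returns nothing.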
[jira] [Updated] (CASSANDRA-7950) Output of nodetool compactionstats and compactionhistory does not work well with long keyspace and column family names.
[ https://issues.apache.org/jira/browse/CASSANDRA-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-7950: -- Reviewer: (was: Yuki Morishita) > Output of nodetool compactionstats and compactionhistory does not work well > with long keyspace and column family names. > - > > Key: CASSANDRA-7950 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7950 > Project: Cassandra > Issue Type: Improvement > Components: Tools > Environment: CentOS 5, 64bit, Oracle JDK 7, DSE >Reporter: Eugene >Assignee: Yuki Morishita >Priority: Minor > Labels: lhf > Fix For: 2.1.x > > Attachments: 7950.patch, nodetool-examples.txt > > > When running these commands: > nodetool compactionstats > nodetool compactionhistory > The output can be difficult to grok due to long keyspace names, column family > names, and long values. I have attached an example. > It's difficult for both humans and grep/sed/awk/perl to read. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-7950) Output of nodetool compactionstats and compactionhistory does not work well with long keyspace and column family names.
[ https://issues.apache.org/jira/browse/CASSANDRA-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita reassigned CASSANDRA-7950: - Assignee: Yuki Morishita (was: Michael Shuler) > Output of nodetool compactionstats and compactionhistory does not work well > with long keyspace and column family names. > - > > Key: CASSANDRA-7950 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7950 > Project: Cassandra > Issue Type: Improvement > Components: Tools > Environment: CentOS 5, 64bit, Oracle JDK 7, DSE >Reporter: Eugene >Assignee: Yuki Morishita >Priority: Minor > Labels: lhf > Fix For: 2.1.x > > Attachments: 7950.patch, nodetool-examples.txt > > > When running these commands: > nodetool compactionstats > nodetool compactionhistory > The output can be difficult to grok due to long keyspace names, column family > names, and long values. I have attached an example. > It's difficult for both humans and grep/sed/awk/perl to read. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7950) Output of nodetool compactionstats and compactionhistory does not work well with long keyspace and column family names.
[ https://issues.apache.org/jira/browse/CASSANDRA-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124553#comment-15124553 ] Yuki Morishita commented on CASSANDRA-7950: --- I created a patch for trunk that introduces {{TableBuilder}} (similar to what we do now in {{compactionstats}}) and uses it for {{compactionhistory}}, {{compactionstats}}, and {{listsnapshots}}. ||branch||testall||dtest|| |[7950|https://github.com/yukim/cassandra/tree/7950]|[testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-7950-testall/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-7950-dtest/lastCompletedBuild/testReport/]| > Output of nodetool compactionstats and compactionhistory does not work well > with long keyspace and column family names. > - > > Key: CASSANDRA-7950 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7950 > Project: Cassandra > Issue Type: Improvement > Components: Tools > Environment: CentOS 5, 64bit, Oracle JDK 7, DSE >Reporter: Eugene >Assignee: Michael Shuler >Priority: Minor > Labels: lhf > Fix For: 2.1.x > > Attachments: 7950.patch, nodetool-examples.txt > > > When running these commands: > nodetool compactionstats > nodetool compactionhistory > The output can be difficult to grok due to long keyspace names, column family > names, and long values. I have attached an example. > It's difficult for both humans and grep/sed/awk/perl to read. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
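A minimal sketch of the width-based alignment such a builder performs (Python for brevity — the actual {{TableBuilder}} in the patch is Java, and {{format_table}} is an illustrative name):

```python
def format_table(rows):
    # Compute each column's width from its widest cell, then left-justify
    # every cell, so a long keyspace or table name widens its own column
    # instead of shifting the others out of alignment.
    widths = [max(len(str(cell)) for cell in col) for col in zip(*rows)]
    return '\n'.join(
        '  '.join(str(cell).ljust(w) for cell, w in zip(row, widths)).rstrip()
        for row in rows)
```

For example, `format_table([['ks', 'table'], ['a_very_long_keyspace', 'events']])` keeps the second column starting at the same offset on both lines, which is also what makes the output friendly to grep/awk.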
[jira] [Updated] (CASSANDRA-11098) system_distributed and system_traces keyspaces use hard-coded replication factors
[ https://issues.apache.org/jira/browse/CASSANDRA-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Deng updated CASSANDRA-11098: - Summary: system_distributed and system_traces keyspaces use hard-coded replication factors (was: system_distributed keyspace uses a hard-coded replication factor) > system_distributed and system_traces keyspaces use hard-coded replication > factors > - > > Key: CASSANDRA-11098 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11098 > Project: Cassandra > Issue Type: Bug >Reporter: Wei Deng >Priority: Minor > > We introduced system_distributed keyspace in C* 2.2 so that we can save the > repair histories and ancestors to a system keyspace (due to CASSANDRA-5839). > However, looks like it's hard-coding the replication factor to 3, according > to this line: > https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/repair/SystemDistributedKeyspace.java#L103 > This may confuse some query operations against this keyspace. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11098) system_distributed keyspace uses a hard-coded replication factor
[ https://issues.apache.org/jira/browse/CASSANDRA-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124548#comment-15124548 ] Wei Deng commented on CASSANDRA-11098: -- By the same token, the system_traces keyspace has a similar problem. See this code: https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/tracing/TraceKeyspace.java#L78 > system_distributed keyspace uses a hard-coded replication factor > > > Key: CASSANDRA-11098 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11098 > Project: Cassandra > Issue Type: Bug >Reporter: Wei Deng >Priority: Minor > > We introduced system_distributed keyspace in C* 2.2 so that we can save the > repair histories and ancestors to a system keyspace (due to CASSANDRA-5839). > However, looks like it's hard-coding the replication factor to 3, according > to this line: > https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/repair/SystemDistributedKeyspace.java#L103 > This may confuse some query operations against this keyspace. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11098) system_distributed and system_traces keyspaces use hard-coded replication factors
[ https://issues.apache.org/jira/browse/CASSANDRA-11098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Deng updated CASSANDRA-11098: - Description: We introduced system_distributed keyspace in C* 2.2 so that we can save the repair histories and ancestors to a system keyspace (due to CASSANDRA-5839). However, looks like it's hard-coding the replication factor to 3, according to this line: https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/repair/SystemDistributedKeyspace.java#L103 This may confuse some query operations against this keyspace. was: We introduced system_distributed keyspace in C* 2.2 so that we can the save repair histories and ancestors to a system keyspace (due to CASSANDRA-5839). However, looks like it's hard-coding the replication factor to 3, according to this line: https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/repair/SystemDistributedKeyspace.java#L103 This may confuse some query operations against this keyspace. > system_distributed and system_traces keyspaces use hard-coded replication > factors > - > > Key: CASSANDRA-11098 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11098 > Project: Cassandra > Issue Type: Bug >Reporter: Wei Deng >Priority: Minor > > We introduced system_distributed keyspace in C* 2.2 so that we can save the > repair histories and ancestors to a system keyspace (due to CASSANDRA-5839). > However, looks like it's hard-coding the replication factor to 3, according > to this line: > https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/repair/SystemDistributedKeyspace.java#L103 > This may confuse some query operations against this keyspace. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11098) system_distributed keyspace uses a hard-coded replication factor
Wei Deng created CASSANDRA-11098: Summary: system_distributed keyspace uses a hard-coded replication factor Key: CASSANDRA-11098 URL: https://issues.apache.org/jira/browse/CASSANDRA-11098 Project: Cassandra Issue Type: Bug Reporter: Wei Deng Priority: Minor We introduced system_distributed keyspace in C* 2.2 so that we can save the repair histories and ancestors to a system keyspace (due to CASSANDRA-5839). However, looks like it's hard-coding the replication factor to 3, according to this line: https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/repair/SystemDistributedKeyspace.java#L103 This may confuse some query operations against this keyspace. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
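Until the factor is configurable, the usual operator workaround is an explicit ALTER KEYSPACE to match the cluster's topology (typically followed by a repair). A small helper that builds the statement, as a sketch — the helper name is hypothetical, but the CQL it emits is standard:

```python
def alter_rf_statement(keyspace, replication_factor):
    # Workaround for the hard-coded RF of 3: override the keyspace's
    # replication settings by hand. SimpleStrategy is assumed here;
    # multi-DC clusters would use NetworkTopologyStrategy instead.
    return ("ALTER KEYSPACE %s WITH replication = "
            "{'class': 'SimpleStrategy', 'replication_factor': %d}"
            % (keyspace, replication_factor))
```

The same statement applies to both keyspaces named in this ticket, e.g. `alter_rf_statement('system_distributed', 5)` or `alter_rf_statement('system_traces', 5)`.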
[jira] [Commented] (CASSANDRA-10428) cqlsh: Include sub-second precision in timestamps by default
[ https://issues.apache.org/jira/browse/CASSANDRA-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124368#comment-15124368 ] Paulo Motta commented on CASSANDRA-10428: - Tested on Windows and both tests are passing, as well as CI. Code and dtests look good. Thanks Stefania! Thanks all, marking as ready to commit. > cqlsh: Include sub-second precision in timestamps by default > > > Key: CASSANDRA-10428 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10428 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: OSX 10.10.2 >Reporter: Chandran Anjur Narasimhan >Assignee: Paulo Motta > Labels: cqlsh > Fix For: 3.x > > > Query with >= timestamp works. But the exact timestamp value is not working. > {noformat} > NCHAN-M-D0LZ:bin nchan$ ./cqlsh > Connected to CCC Multi-Region Cassandra Cluster at :. > [cqlsh 5.0.1 | Cassandra 2.1.7 | CQL spec 3.2.0 | Native protocol v3] > Use HELP for help. > cqlsh> > {noformat} > {panel:title=Schema|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE} > cqlsh:ccc> desc COLUMNFAMILY ez_task_result ; > CREATE TABLE ccc.ez_task_result ( > submissionid text, > ezid text, > name text, > time timestamp, > analyzed_index_root text, > ... > ... 
> PRIMARY KEY (submissionid, ezid, name, time) > {panel} > {panel:title=Working|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE} > cqlsh:ccc> select submissionid, ezid, name, time, state, status, > translated_criteria_status from ez_task_result where > submissionid='760dd154670811e58c04005056bb6ff0' and > ezid='760dd6de670811e594fc005056bb6ff0' and name='run-sanities' and > time>='2015-09-29 20:54:23-0700'; > submissionid | ezid | name > | time | state | status | > translated_criteria_status > --+--+--+--+---+-+ > 760dd154670811e58c04005056bb6ff0 | 760dd6de670811e594fc005056bb6ff0 | > run-sanities | 2015-09-29 20:54:23-0700 | EXECUTING | IN_PROGRESS | > run-sanities started > (1 rows) > cqlsh:ccc> > {panel} > {panel:title=Not > working|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE} > cqlsh:ccc> select submissionid, ezid, name, time, state, status, > translated_criteria_status from ez_task_result where > submissionid='760dd154670811e58c04005056bb6ff0' and > ezid='760dd6de670811e594fc005056bb6ff0' and name='run-sanities' and > time='2015-09-29 20:54:23-0700'; > submissionid | ezid | name | time | analyzed_index_root | analyzed_log_path > | clientid | end_time | jenkins_path | log_file_path | path_available | > path_to_task | required_for_overall_status | start_time | state | status | > translated_criteria_status | type > --+--+--+--+-+---+--+--+--+---++--+-++---+++-- > (0 rows) > cqlsh:ccc> > {panel} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-10428) cqlsh: Include sub-second precision in timestamps by default
[ https://issues.apache.org/jira/browse/CASSANDRA-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta reassigned CASSANDRA-10428: --- Assignee: Paulo Motta (was: Stefania) > cqlsh: Include sub-second precision in timestamps by default > > > Key: CASSANDRA-10428 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10428 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: OSX 10.10.2 >Reporter: Chandran Anjur Narasimhan >Assignee: Paulo Motta > Labels: cqlsh > Fix For: 3.x > > > Query with >= timestamp works. But the exact timestamp value is not working. > {noformat} > NCHAN-M-D0LZ:bin nchan$ ./cqlsh > Connected to CCC Multi-Region Cassandra Cluster at :. > [cqlsh 5.0.1 | Cassandra 2.1.7 | CQL spec 3.2.0 | Native protocol v3] > Use HELP for help. > cqlsh> > {noformat} > {panel:title=Schema|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE} > cqlsh:ccc> desc COLUMNFAMILY ez_task_result ; > CREATE TABLE ccc.ez_task_result ( > submissionid text, > ezid text, > name text, > time timestamp, > analyzed_index_root text, > ... > ... 
> PRIMARY KEY (submissionid, ezid, name, time) > {panel} > {panel:title=Working|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE} > cqlsh:ccc> select submissionid, ezid, name, time, state, status, > translated_criteria_status from ez_task_result where > submissionid='760dd154670811e58c04005056bb6ff0' and > ezid='760dd6de670811e594fc005056bb6ff0' and name='run-sanities' and > time>='2015-09-29 20:54:23-0700'; > submissionid | ezid | name > | time | state | status | > translated_criteria_status > --+--+--+--+---+-+ > 760dd154670811e58c04005056bb6ff0 | 760dd6de670811e594fc005056bb6ff0 | > run-sanities | 2015-09-29 20:54:23-0700 | EXECUTING | IN_PROGRESS | > run-sanities started > (1 rows) > cqlsh:ccc> > {panel} > {panel:title=Not > working|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE} > cqlsh:ccc> select submissionid, ezid, name, time, state, status, > translated_criteria_status from ez_task_result where > submissionid='760dd154670811e58c04005056bb6ff0' and > ezid='760dd6de670811e594fc005056bb6ff0' and name='run-sanities' and > time='2015-09-29 20:54:23-0700'; > submissionid | ezid | name | time | analyzed_index_root | analyzed_log_path > | clientid | end_time | jenkins_path | log_file_path | path_available | > path_to_task | required_for_overall_status | start_time | state | status | > translated_criteria_status | type > --+--+--+--+-+---+--+--+--+---++--+-++---+++-- > (0 rows) > cqlsh:ccc> > {panel} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[2/3] cassandra git commit: Avoid infinite loop if owned range is smaller than number of data directories
Avoid infinite loop if owned range is smaller than number of data directories

Patch by marcuse; reviewed by Carl Yeksigian for CASSANDRA-11034

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8bc8fa36
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8bc8fa36
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8bc8fa36

Branch: refs/heads/trunk
Commit: 8bc8fa36907188440579aaf88b2bd397ec4dcf8c
Parents: 573552c
Author: Marcus Eriksson
Authored: Fri Jan 29 10:13:55 2016 +0100
Committer: Marcus Eriksson
Committed: Fri Jan 29 22:04:09 2016 +0100
--
 CHANGES.txt                                     | 2 ++
 src/java/org/apache/cassandra/dht/Splitter.java | 3 +++
 2 files changed, 5 insertions(+)
--
http://git-wip-us.apache.org/repos/asf/cassandra/blob/8bc8fa36/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f9af204..9d58926 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 3.3
+ * Avoid infinite loop if owned range is smaller than number of
+   data dirs (CASSANDRA-11034)
  * Avoid bootstrap hanging when existing nodes have no data to stream (CASSANDRA-11010)
 Merged from 3.0:
  * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8bc8fa36/src/java/org/apache/cassandra/dht/Splitter.java
--
diff --git a/src/java/org/apache/cassandra/dht/Splitter.java b/src/java/org/apache/cassandra/dht/Splitter.java
index 67b578d..4268e83 100644
--- a/src/java/org/apache/cassandra/dht/Splitter.java
+++ b/src/java/org/apache/cassandra/dht/Splitter.java
@@ -51,6 +51,9 @@ public abstract class Splitter
             totalTokens = totalTokens.add(right.subtract(valueForToken(r.left)));
         }
         BigInteger perPart = totalTokens.divide(BigInteger.valueOf(parts));
+        // the range owned is so tiny we can't split it:
+        if (perPart.equals(BigInteger.ZERO))
+            return Collections.singletonList(partitioner.getMaximumToken());
         if (dontSplitRanges)
             return splitOwnedRangesNoPartialRanges(localRanges, perPart, parts);
[1/3] cassandra git commit: Avoid infinite loop if owned range is smaller than number of data directories
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.3 573552c80 -> 8bc8fa369
  refs/heads/trunk 41f5d2279 -> c7829a0a6

Avoid infinite loop if owned range is smaller than number of data directories

Patch by marcuse; reviewed by Carl Yeksigian for CASSANDRA-11034

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8bc8fa36
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8bc8fa36
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8bc8fa36

Branch: refs/heads/cassandra-3.3
Commit: 8bc8fa36907188440579aaf88b2bd397ec4dcf8c
Parents: 573552c
Author: Marcus Eriksson
Authored: Fri Jan 29 10:13:55 2016 +0100
Committer: Marcus Eriksson
Committed: Fri Jan 29 22:04:09 2016 +0100
--
 CHANGES.txt                                     | 2 ++
 src/java/org/apache/cassandra/dht/Splitter.java | 3 +++
 2 files changed, 5 insertions(+)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8bc8fa36/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index f9af204..9d58926 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 3.3
+ * Avoid infinite loop if owned range is smaller than number of
+   data dirs (CASSANDRA-11034)
  * Avoid bootstrap hanging when existing nodes have no data to stream (CASSANDRA-11010)
 Merged from 3.0:
  * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8bc8fa36/src/java/org/apache/cassandra/dht/Splitter.java
--
diff --git a/src/java/org/apache/cassandra/dht/Splitter.java b/src/java/org/apache/cassandra/dht/Splitter.java
index 67b578d..4268e83 100644
--- a/src/java/org/apache/cassandra/dht/Splitter.java
+++ b/src/java/org/apache/cassandra/dht/Splitter.java
@@ -51,6 +51,9 @@ public abstract class Splitter
             totalTokens = totalTokens.add(right.subtract(valueForToken(r.left)));
         }
         BigInteger perPart = totalTokens.divide(BigInteger.valueOf(parts));
+        // the range owned is so tiny we can't split it:
+        if (perPart.equals(BigInteger.ZERO))
+            return Collections.singletonList(partitioner.getMaximumToken());
         if (dontSplitRanges)
             return splitOwnedRangesNoPartialRanges(localRanges, perPart, parts);
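For readers skimming the diff: the guard works because {{BigInteger}} integer division truncates toward zero, so a tiny owned range (e.g. 2 tokens) split into more parts (e.g. 3) yields zero tokens per part, and the split loop can never advance. A minimal, self-contained sketch of that behavior (not the actual Splitter code; class and method names here are illustrative only):

```java
import java.math.BigInteger;
import java.util.Collections;
import java.util.List;

public class SplitterSketch
{
    // stand-in for partitioner.getMaximumToken() in the real code
    static final BigInteger MAXIMUM_TOKEN = BigInteger.valueOf(Long.MAX_VALUE);

    static List<BigInteger> splitBoundaries(BigInteger totalTokens, int parts)
    {
        // integer division truncates: 2 / 3 == 0
        BigInteger perPart = totalTokens.divide(BigInteger.valueOf(parts));
        // the owned range is too tiny to split; without this early return,
        // a loop advancing by perPart tokens at a time would never terminate
        if (perPart.equals(BigInteger.ZERO))
            return Collections.singletonList(MAXIMUM_TOKEN);
        // ... real splitting logic elided in this sketch ...
        return Collections.emptyList();
    }

    public static void main(String[] args)
    {
        // a 2-token range split into 3 parts triggers the guard
        System.out.println(splitBoundaries(BigInteger.valueOf(2), 3));
    }
}
```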
[3/3] cassandra git commit: Merge branch 'cassandra-3.3' into trunk
Merge branch 'cassandra-3.3' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7829a0a
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7829a0a
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7829a0a

Branch: refs/heads/trunk
Commit: c7829a0a65e74420b02e907c9c6b40cb758c566b
Parents: 41f5d22 8bc8fa3
Author: Marcus Eriksson
Authored: Fri Jan 29 22:04:52 2016 +0100
Committer: Marcus Eriksson
Committed: Fri Jan 29 22:04:52 2016 +0100
--
 CHANGES.txt                                     | 2 ++
 src/java/org/apache/cassandra/dht/Splitter.java | 3 +++
 2 files changed, 5 insertions(+)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7829a0a/CHANGES.txt
--
diff --cc CHANGES.txt
index 2584d45,9d58926..9f54f72
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,18 -1,6 +1,20 @@@
+3.4
+ * Set javac encoding to utf-8 (CASSANDRA-11077)
+ * Integrate SASI index into Cassandra (CASSANDRA-10661)
+ * Add --skip-flush option to nodetool snapshot
+ * Skip values for non-queried columns (CASSANDRA-10657)
+ * Add support for secondary indexes on static columns (CASSANDRA-8103)
+ * CommitLogUpgradeTestMaker creates broken commit logs (CASSANDRA-11051)
+ * Add metric for number of dropped mutations (CASSANDRA-10866)
+ * Simplify row cache invalidation code (CASSANDRA-10396)
+ * Support user-defined compaction through nodetool (CASSANDRA-10660)
+ * Stripe view locks by key and table ID to reduce contention (CASSANDRA-10981)
+ * Add nodetool gettimeout and settimeout commands (CASSANDRA-10953)
+ * Add 3.0 metadata to sstablemetadata output (CASSANDRA-10838)
+
 3.3
+ * Avoid infinite loop if owned range is smaller than number of
+   data dirs (CASSANDRA-11034)
  * Avoid bootstrap hanging when existing nodes have no data to stream (CASSANDRA-11010)
 Merged from 3.0:
  * (cqlsh) handle INSERT and UPDATE statements with LWT conditions correctly
[jira] [Commented] (CASSANDRA-10779) Mutations do not block for completion under view lock contention
[ https://issues.apache.org/jira/browse/CASSANDRA-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124163#comment-15124163 ] T Jake Luciani commented on CASSANDRA-10779: No difference in the tests with the new changes. Likely due to the fact the mvbench test runs at quorum so the coordinator was waiting on the verb response > Mutations do not block for completion under view lock contention > > > Key: CASSANDRA-10779 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10779 > Project: Cassandra > Issue Type: Bug > Components: Local Write-Read Paths > Environment: Windows 7 64-bit, Cassandra v3.0.0, Java 1.8u60 >Reporter: Will Zhang >Assignee: Tyler Hobbs > Fix For: 3.0.x, 3.x > > > Hi guys, > I encountered the following warning message when I was testing to upgrade > from v2.2.2 to v3.0.0. > It looks like a write time-out but in an uncaught exception. Could this be an > easy fix? > Log file section below. Thank you! > {code} > WARN [SharedPool-Worker-64] 2015-11-26 14:04:24,678 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-64,10,main]: {} > org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - > received only 0 responses. 
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:427) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:386) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:205) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at > org.apache.cassandra.db.Keyspace.lambda$apply$59(Keyspace.java:435) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at > org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-3.0.0.jar:3.0.0] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > INFO [IndexSummaryManager:1] 2015-11-26 14:41:10,527 > IndexSummaryManager.java:257 - Redistributing index summaries > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11097) Idle session timeout for secure environments
Jeff Jirsa created CASSANDRA-11097: -- Summary: Idle session timeout for secure environments Key: CASSANDRA-11097 URL: https://issues.apache.org/jira/browse/CASSANDRA-11097 Project: Cassandra Issue Type: Improvement Reporter: Jeff Jirsa Priority: Minor A thread on the user list pointed out that some use cases may prefer to have a database disconnect sessions after some idle timeout. An example would be an administrator who connected via ssh+cqlsh and then walked away. Disconnecting that user and forcing it to re-authenticate could protect against unauthorized access. It seems like it may be possible to do this using a netty {{IdleStateHandler}} in a way that's low risk and perhaps off by default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
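A conceptual sketch of the proposed mechanism, purely illustrative: Netty's {{IdleStateHandler}} already provides this out of the box, and the class below (hypothetical, stdlib-only, not Cassandra code) just demonstrates the reset-on-activity timer such a handler is built on:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class IdleWatchdog
{
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private final long timeoutMillis;
    private final Runnable onIdle;
    private ScheduledFuture<?> pending;

    public IdleWatchdog(long timeoutMillis, Runnable onIdle)
    {
        this.timeoutMillis = timeoutMillis;
        this.onIdle = onIdle;
        touch();
    }

    // call on every request from the session to reset the idle clock;
    // if no activity arrives before the timeout, onIdle fires (e.g. to
    // close the connection and force re-authentication)
    public synchronized void touch()
    {
        if (pending != null)
            pending.cancel(false);
        pending = timer.schedule(onIdle, timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public void shutdown()
    {
        timer.shutdownNow();
    }

    public static void main(String[] args) throws InterruptedException
    {
        AtomicBoolean disconnected = new AtomicBoolean(false);
        IdleWatchdog watchdog = new IdleWatchdog(100, () -> disconnected.set(true));
        watchdog.touch();                                      // simulated activity
        Thread.sleep(50);
        System.out.println("idle yet? " + disconnected.get()); // false: within timeout
        Thread.sleep(200);
        System.out.println("idle yet? " + disconnected.get()); // true: timeout elapsed
        watchdog.shutdown();
    }
}
```

In a real Netty pipeline the equivalent would be adding an {{IdleStateHandler}} plus a handler that closes the channel on the idle event, which keeps the feature low-risk and easy to leave off by default.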
[jira] [Commented] (CASSANDRA-11034) consistent_reads_after_move_test is failing on trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-11034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124125#comment-15124125 ] Carl Yeksigian commented on CASSANDRA-11034: +1 > consistent_reads_after_move_test is failing on trunk > > > Key: CASSANDRA-11034 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11034 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Marcus Eriksson >Priority: Blocker > Labels: dtest > Fix For: 3.3 > > Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, > node3.log, node3_debug.log > > > The novnode dtest > {{consistent_bootstrap_test.TestBootstrapConsistency.consistent_reads_after_move_test}} > is failing on trunk. See an example failure > [here|http://cassci.datastax.com/job/trunk_novnode_dtest/274/testReport/consistent_bootstrap_test/TestBootstrapConsistency/consistent_reads_after_move_test/]. > On trunk I am getting an OOM of one of my C* nodes [node3], which is what > causes the nodetool move to fail. Logs are attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11096) Upgrade netty to >= 4.0.34
Brandon Williams created CASSANDRA-11096: Summary: Upgrade netty to >= 4.0.34 Key: CASSANDRA-11096 URL: https://issues.apache.org/jira/browse/CASSANDRA-11096 Project: Cassandra Issue Type: Improvement Components: CQL Reporter: Brandon Williams Fix For: 3.x Amongst other things, the native protocol will not bind ipv6 easily (see CASSANDRA-11047) until we upgrade. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-11047) native protocol will not bind ipv6
[ https://issues.apache.org/jira/browse/CASSANDRA-11047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams resolved CASSANDRA-11047. -- Resolution: Fixed I saw that, thanks! I'll follow up with another ticket to upgrade netty. > native protocol will not bind ipv6 > -- > > Key: CASSANDRA-11047 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11047 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Brandon Williams >Assignee: Norman Maurer > Fix For: 2.1.x, 2.2.x, 3.x > > > When you set rpc_address to 0.0.0.0 it should bind every interface. Of > course for ipv6 you have to comment out -Djava.net.preferIPv4Stack=true from > cassandra-env.sh, however this will not make the native protocol bind on > ipv6, only thrift: > {noformat} > tcp6 0 0 :::9160 :::*LISTEN > 13488/java > tcp6 0 0 0.0.0.0:9042:::*LISTEN > 13488/java > # telnet ::1 9160 > Trying ::1... > Connected to ::1. > Escape character is '^]'. > ^] > telnet> quit > Connection closed. > # telnet ::1 9042 > Trying ::1... > telnet: Unable to connect to remote host: Connection refused > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11047) native protocol will not bind ipv6
[ https://issues.apache.org/jira/browse/CASSANDRA-11047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124053#comment-15124053 ] Norman Maurer commented on CASSANDRA-11047: --- [~brandon.williams] netty 4.0.34.Final was released which has a fix for it. So I think it's up to you guys now to upgrade :) > native protocol will not bind ipv6 > -- > > Key: CASSANDRA-11047 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11047 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Brandon Williams >Assignee: Norman Maurer > Fix For: 2.1.x, 2.2.x, 3.x > > > When you set rpc_address to 0.0.0.0 it should bind every interface. Of > course for ipv6 you have to comment out -Djava.net.preferIPv4Stack=true from > cassandra-env.sh, however this will not make the native protocol bind on > ipv6, only thrift: > {noformat} > tcp6 0 0 :::9160 :::*LISTEN > 13488/java > tcp6 0 0 0.0.0.0:9042:::*LISTEN > 13488/java > # telnet ::1 9160 > Trying ::1... > Connected to ::1. > Escape character is '^]'. > ^] > telnet> quit > Connection closed. > # telnet ::1 9042 > Trying ::1... > telnet: Unable to connect to remote host: Connection refused > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10779) Mutations do not block for completion under view lock contention
[ https://issues.apache.org/jira/browse/CASSANDRA-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15124020#comment-15124020 ] Carl Yeksigian commented on CASSANDRA-10779: I pushed a change so that we only use the CompletableFuture in mutation verb handler; otherwise we call {{.get()}} to turn it back into a synchronous call for the other places. [~tjake] kicked off new runs, so we'll see how it fares. We should get one or the other committed before we code freeze 3.3. > Mutations do not block for completion under view lock contention > > > Key: CASSANDRA-10779 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10779 > Project: Cassandra > Issue Type: Bug > Components: Local Write-Read Paths > Environment: Windows 7 64-bit, Cassandra v3.0.0, Java 1.8u60 >Reporter: Will Zhang >Assignee: Tyler Hobbs > Fix For: 3.0.x, 3.x > > > Hi guys, > I encountered the following warning message when I was testing to upgrade > from v2.2.2 to v3.0.0. > It looks like a write time-out but in an uncaught exception. Could this be an > easy fix? > Log file section below. Thank you! > {code} > WARN [SharedPool-Worker-64] 2015-11-26 14:04:24,678 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-64,10,main]: {} > org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - > received only 0 responses. 
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:427) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:386) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:205) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at > org.apache.cassandra.db.Keyspace.lambda$apply$59(Keyspace.java:435) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at > org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-3.0.0.jar:3.0.0] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > INFO [IndexSummaryManager:1] 2015-11-26 14:41:10,527 > IndexSummaryManager.java:257 - Redistributing index summaries > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
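The approach described in the comment above — returning a {{CompletableFuture}} from the apply path, composing on it asynchronously in the mutation verb handler, and calling {{.get()}} everywhere else to recover synchronous semantics — can be illustrated with a minimal sketch (hypothetical names; this is not Cassandra's actual {{Keyspace}}/{{Mutation}} API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class ApplyFutureSketch
{
    // stand-in for an apply() that may complete on another thread,
    // e.g. after waiting on a contended view lock
    static CompletableFuture<Void> applyAsync(Runnable mutation)
    {
        return CompletableFuture.runAsync(mutation);
    }

    public static void main(String[] args) throws ExecutionException, InterruptedException
    {
        StringBuilder log = new StringBuilder();
        // async path: the verb handler chains the response onto the future...
        CompletableFuture<Void> done = applyAsync(() -> log.append("applied;"))
                                           .thenRun(() -> log.append("responded;"));
        // ...while synchronous call sites simply block until the write completes
        done.get();
        System.out.println(log); // applied;responded;
    }
}
```

The point of the {{.get()}} calls is exactly the bug in this ticket: a caller that drops the future on the floor returns before the mutation is durable, so the write can silently time out with no one waiting on it.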
[jira] [Commented] (CASSANDRA-10779) Mutations do not block for completion under view lock contention
[ https://issues.apache.org/jira/browse/CASSANDRA-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123891#comment-15123891 ] Tyler Hobbs commented on CASSANDRA-10779: - Thanks for the benchmarks, [~tjake]. I'm good with going with passing the CompletableFuture up the stack, we just need to make sure all of the calling locations are blocking when necessary. It looks like we're missing a few. > Mutations do not block for completion under view lock contention > > > Key: CASSANDRA-10779 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10779 > Project: Cassandra > Issue Type: Bug > Components: Local Write-Read Paths > Environment: Windows 7 64-bit, Cassandra v3.0.0, Java 1.8u60 >Reporter: Will Zhang >Assignee: Tyler Hobbs > Fix For: 3.0.x, 3.x > > > Hi guys, > I encountered the following warning message when I was testing to upgrade > from v2.2.2 to v3.0.0. > It looks like a write time-out but in an uncaught exception. Could this be an > easy fix? > Log file section below. Thank you! > {code} > WARN [SharedPool-Worker-64] 2015-11-26 14:04:24,678 > AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread > Thread[SharedPool-Worker-64,10,main]: {} > org.apache.cassandra.exceptions.WriteTimeoutException: Operation timed out - > received only 0 responses. 
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:427) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:386) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:205) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at > org.apache.cassandra.db.Keyspace.lambda$apply$59(Keyspace.java:435) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_60] > at > org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) > ~[apache-cassandra-3.0.0.jar:3.0.0] > at > org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-3.0.0.jar:3.0.0] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60] > INFO [IndexSummaryManager:1] 2015-11-26 14:41:10,527 > IndexSummaryManager.java:257 - Redistributing index summaries > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11095) metadata_reset_while_compact_test dtest hangs servers
Michael Shuler created CASSANDRA-11095: -- Summary: metadata_reset_while_compact_test dtest hangs servers Key: CASSANDRA-11095 URL: https://issues.apache.org/jira/browse/CASSANDRA-11095 Project: Cassandra Issue Type: Test Components: Testing Environment: aws m3.2xlarge Reporter: Michael Shuler Assignee: DS Test Eng CASSANDRA-9831 was the meta-ticket that included this test for hanging up servers. Recently, this test was un-excluded and subsequently hung up test servers in 2.1, 2.2, and 3.0 branch dtest runs. The most complete debug logs are in internal JIRA for the 3.0 hang: https://datastax.jira.com/browse/CSTAR-291 Related 2.1 and 2.2 run issues: https://datastax.jira.com/browse/CSTAR-297 This test was reinstated without being fixed, so it will be excluded again; this ticket is specifically to address the problem. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11064) Failed aggregate creation breaks server permanently
[ https://issues.apache.org/jira/browse/CASSANDRA-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123599#comment-15123599 ] Sylvain Lebresne commented on CASSANDRA-11064: -- I forgot that we do the parsing of custom type literals with {{fromString()}} but in hindsight, I think it's a weakness that custom type values can't be provided as blob literals so that said value can be understood without knowing the type itself. So I think we should add that: we should support blobs for custom types. If we do so, we can use that form when we create our {{INITCOND}} term, which will avoid some pain for drivers. It's also theoretically more efficient than going through strings (that depends a bit on the custom type of course, but for DynamicCompositeType for instance, the string representation is pretty inefficient). If we do that, I don't think we'll depend on any update of the java driver here (not that upgrading the driver is a bad thing per-se). > Failed aggregate creation breaks server permanently > --- > > Key: CASSANDRA-11064 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11064 > Project: Cassandra > Issue Type: Bug >Reporter: Olivier Michallat >Assignee: Robert Stupp > > While testing edge cases around aggregates, I tried the following to see if > custom types were supported: > {code} > ccm create v321 -v3.2.1 -n3 > ccm updateconf enable_user_defined_functions:true > ccm start > ccm node1 cqlsh > CREATE FUNCTION id(i 'DynamicCompositeType(s => UTF8Type, i => Int32Type)') > RETURNS NULL ON NULL INPUT > RETURNS 'DynamicCompositeType(s => UTF8Type, i => Int32Type)' > LANGUAGE java > AS 'return i;'; > // function created successfully > CREATE AGGREGATE ag() > SFUNC id > STYPE 'DynamicCompositeType(s => UTF8Type, i => Int32Type)' > INITCOND 's@foo:i@32'; > ServerError: message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: > org.apache.cassandra.exceptions.SyntaxException: Failed parsing CQL term: 
> [s@foo:i@32] reason: SyntaxException line 1:1 no viable alternative at > character '@'">{code} > Despite the error, the aggregate appears in system tables: > {code} > select * from system_schema.aggregates; > keyspace_name | aggregate_name | ... > ---++ ... > test | ag | ... > {code} > But you can't drop it, and trying to drop its function produces the server > error again: > {code} > DROP AGGREGATE ag; > InvalidRequest: code=2200 [Invalid query] message="Cannot drop non existing > aggregate 'test.ag'" > DROP FUNCTION id; > ServerError: message="java.lang.RuntimeException: java.util.concurrent.ExecutionException: > org.apache.cassandra.exceptions.SyntaxException: Failed parsing CQL term: > [s@foo:i@32] reason: SyntaxException line 1:1 no viable alternative at > character '@'"> > {code} > What's worse, it's now impossible to restart the server: > {code} > ccm stop; ccm start > org.apache.cassandra.exceptions.SyntaxException: Failed parsing CQL term: > [s@foo:i@32] reason: SyntaxException line 1:1 no viable alternative at > character '@' > at > org.apache.cassandra.cql3.CQLFragmentParser.parseAny(CQLFragmentParser.java:48) > at org.apache.cassandra.cql3.Terms.asBytes(Terms.java:51) > at > org.apache.cassandra.schema.SchemaKeyspace.createUDAFromRow(SchemaKeyspace.java:1225) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchUDAs(SchemaKeyspace.java:1204) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchFunctions(SchemaKeyspace.java:1129) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:897) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:872) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:860) > at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:125) > at org.apache.cassandra.config.Schema.loadFromDisk(Schema.java:115) > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:229) > at > 
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:551) > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:680) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
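The blob-literal suggestion above can be illustrated with a hypothetical helper (not Cassandra code): any custom type's serialized bytes can be rendered as a CQL blob literal ({{0x...}}), which round-trips without the server re-parsing the custom type's own string syntax — the step that chokes on {{'@'}} in this report.

```java
import java.nio.charset.StandardCharsets;

public class BlobLiteral
{
    // render serialized bytes as a CQL blob literal: "0x" + lowercase hex
    static String toBlobLiteral(byte[] serialized)
    {
        StringBuilder sb = new StringBuilder("0x");
        for (byte b : serialized)
            sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args)
    {
        byte[] value = "s@foo:i@32".getBytes(StandardCharsets.UTF_8);
        // the '@' characters no longer matter: the literal is plain hex
        System.out.println(toBlobLiteral(value));
    }
}
```

Storing the {{INITCOND}} in this form would let schema loading deserialize the value without invoking the custom type's string parser at all.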
[jira] [Commented] (CASSANDRA-8180) Optimize disk seek using min/max column name meta data when the LIMIT clause is used
[ https://issues.apache.org/jira/browse/CASSANDRA-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123535#comment-15123535 ] Benedict commented on CASSANDRA-8180: - One other nit: for debugging it's much clearer if we create named instances of {{Transformation}} - these can be declared inline in the method, so it's only one extra line of code. > Optimize disk seek using min/max column name meta data when the LIMIT clause > is used > > > Key: CASSANDRA-8180 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8180 > Project: Cassandra > Issue Type: Improvement > Components: Local Write-Read Paths > Environment: Cassandra 2.0.10 >Reporter: DOAN DuyHai >Assignee: Stefania >Priority: Minor > Fix For: 3.x > > Attachments: 8180_001.yaml, 8180_002.yaml > > > I was working on an example of sensor data table (timeseries) and face a use > case where C* does not optimize read on disk. > {code} > cqlsh:test> CREATE TABLE test(id int, col int, val text, PRIMARY KEY(id,col)) > WITH CLUSTERING ORDER BY (col DESC); > cqlsh:test> INSERT INTO test(id, col , val ) VALUES ( 1, 10, '10'); > ... > >nodetool flush test test > ... > cqlsh:test> INSERT INTO test(id, col , val ) VALUES ( 1, 20, '20'); > ... > >nodetool flush test test > ... > cqlsh:test> INSERT INTO test(id, col , val ) VALUES ( 1, 30, '30'); > ... 
> >nodetool flush test test > {code} > After that, I activate request tracing: > {code} > cqlsh:test> SELECT * FROM test WHERE id=1 LIMIT 1; > activity | > timestamp| source| source_elapsed > ---+--+---+ > execute_cql3_query | > 23:48:46,498 | 127.0.0.1 | 0 > Parsing SELECT * FROM test WHERE id=1 LIMIT 1; | > 23:48:46,498 | 127.0.0.1 | 74 >Preparing statement | > 23:48:46,499 | 127.0.0.1 |253 > Executing single-partition query on test | > 23:48:46,499 | 127.0.0.1 |930 > Acquiring sstable references | > 23:48:46,499 | 127.0.0.1 |943 >Merging memtable tombstones | > 23:48:46,499 | 127.0.0.1 | 1032 >Key cache hit for sstable 3 | > 23:48:46,500 | 127.0.0.1 | 1160 >Seeking to partition beginning in data file | > 23:48:46,500 | 127.0.0.1 | 1173 >Key cache hit for sstable 2 | > 23:48:46,500 | 127.0.0.1 | 1889 >Seeking to partition beginning in data file | > 23:48:46,500 | 127.0.0.1 | 1901 >Key cache hit for sstable 1 | > 23:48:46,501 | 127.0.0.1 | 2373 >Seeking to partition beginning in data file | > 23:48:46,501 | 127.0.0.1 | 2384 > Skipped 0/3 non-slice-intersecting sstables, included 0 due to tombstones | > 23:48:46,501 | 127.0.0.1 | 2768 > Merging data from memtables and 3 sstables | > 23:48:46,501 | 127.0.0.1 | 2784 > Read 2 live and 0 tombstoned cells | > 23:48:46,501 | 127.0.0.1 | 2976 > Request complete | > 23:48:46,501 | 127.0.0.1 | 3551 > {code} > We can clearly see that C* hits 3 SSTables on disk instead of just one, > although it has the min/max column meta data to decide which SSTable contains > the most recent data. 
> Funny enough, if we add a clause on the clustering column to the select, this > time C* optimizes the read path: > {code} > cqlsh:test> SELECT * FROM test WHERE id=1 AND col > 25 LIMIT 1; > activity | > timestamp| source| source_elapsed > ---+--+---+ > execute_cql3_query | > 23:52:31,888 | 127.0.0.1 | 0 >Parsing SELECT * FROM test WHERE id=1 AND col > 25 LIMIT 1; | > 23:52:31,888 | 127.0.0.1 | 60 >Preparing statement | > 23:52:31,888 | 127.0.0.1 |277 >
[jira] [Commented] (CASSANDRA-11091) Insufficient disk space in memtable flush should trigger disk fail policy
[ https://issues.apache.org/jira/browse/CASSANDRA-11091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123531#comment-15123531 ] Yuki Morishita commented on CASSANDRA-11091: I think this is discussed in CASSANDRA-7275 and we are going to fix it in CASSANDRA-8496. > Insufficient disk space in memtable flush should trigger disk fail policy > - > > Key: CASSANDRA-11091 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11091 > Project: Cassandra > Issue Type: Bug >Reporter: Richard Low > > If there's insufficient disk space to flush, > DiskAwareRunnable.getWriteDirectory throws and the flush fails. The > commitlogs then grow indefinitely because the latch is never counted down. > This should be an FSError so the disk fail policy is triggered. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11034) consistent_reads_after_move_test is failing on trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-11034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-11034: - Reviewer: Carl Yeksigian > consistent_reads_after_move_test is failing on trunk > > > Key: CASSANDRA-11034 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11034 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Marcus Eriksson >Priority: Blocker > Labels: dtest > Fix For: 3.3 > > Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, > node3.log, node3_debug.log > > > The novnode dtest > {{consistent_bootstrap_test.TestBootstrapConsistency.consistent_reads_after_move_test}} > is failing on trunk. See an example failure > [here|http://cassci.datastax.com/job/trunk_novnode_dtest/274/testReport/consistent_bootstrap_test/TestBootstrapConsistency/consistent_reads_after_move_test/]. > On trunk I am getting an OOM of one of my C* nodes [node3], which is what > causes the nodetool move to fail. Logs are attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-11094) Upgrade from 1.1.0 to 1.2.19 - loss of data - convertFromDiskFormat exception
Hervé Toulan created CASSANDRA-11094: Summary: Upgrade from 1.1.0 to 1.2.19 - loss of data - convertFromDiskFormat exception Key: CASSANDRA-11094 URL: https://issues.apache.org/jira/browse/CASSANDRA-11094 Project: Cassandra Issue Type: Bug Components: Core, Distributed Metadata Environment: Red Hat Enterprise Linux ES release 4 (Nahant Update 8) 32 bits JVM 1.6.0_20 Previous Cassandra versions 0.6.12, 1.1.0 Reporter: Hervé Toulan Attachments: systemN1-1.log, systemN1-2.log We have lost data in (at least) one column family after the upgrade from Cassandra 1.1 to 1.2.19. The ring is composed of 2 nodes (N1-1, N1-2). We upgraded N1-1 first (N1-2 still alive during the upgrade); once N1-1 was upgraded (N1-2 still alive), gossip restarted. We then upgraded N1-2 (N1-1 still alive during the upgrade). I don't see any errors in the N1-1 upgrade logs; I see an error 50 minutes after the upgrade on the column family where we think we've lost data. Find attached the system.logs of the 2 nodes. The upgrade was started at 2016-01-11 22:57:01,327 for N1-1 and at 2016-01-11 23:16:34,177 for N2-1. The affected column family is /opt/Alcatel/database/data/BNPPFortis/VoiceMail/ Thanks for your support. Hervé -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8180) Optimize disk seek using min/max column name meta data when the LIMIT clause is used
[ https://issues.apache.org/jira/browse/CASSANDRA-8180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123275#comment-15123275 ] Branimir Lambov commented on CASSANDRA-8180: LGTM. Nits (feel free to ignore): You can use {{count}} instead of {{mapToInt(e -> 1).sum()}}. The {{ListUtils}} terminology is strange, {{union}} is the concatenation and {{sum}} is the union? It's also not very efficient. I don't think the two separate lists make a lot of sense versus using one with an {{instanceof}} check in {{onPartitionClose}}. bq. BaseRows eagerly caches the static row in its constructor Isn't {{mergeIterator.mergeStaticRows}} the one that does this? Or is the problem the single-source case? In any case, you could override {{UnfilteredRowIteratorWithLowerBound.staticRow}} to return empty if no static columns are required. > Optimize disk seek using min/max column name meta data when the LIMIT clause > is used > > > Key: CASSANDRA-8180 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8180 > Project: Cassandra > Issue Type: Improvement > Components: Local Write-Read Paths > Environment: Cassandra 2.0.10 >Reporter: DOAN DuyHai >Assignee: Stefania >Priority: Minor > Fix For: 3.x > > Attachments: 8180_001.yaml, 8180_002.yaml > > > I was working on an example of sensor data table (timeseries) and face a use > case where C* does not optimize read on disk. > {code} > cqlsh:test> CREATE TABLE test(id int, col int, val text, PRIMARY KEY(id,col)) > WITH CLUSTERING ORDER BY (col DESC); > cqlsh:test> INSERT INTO test(id, col , val ) VALUES ( 1, 10, '10'); > ... > >nodetool flush test test > ... > cqlsh:test> INSERT INTO test(id, col , val ) VALUES ( 1, 20, '20'); > ... > >nodetool flush test test > ... > cqlsh:test> INSERT INTO test(id, col , val ) VALUES ( 1, 30, '30'); > ... 
> >nodetool flush test test > {code} > After that, I activate request tracing: > {code} > cqlsh:test> SELECT * FROM test WHERE id=1 LIMIT 1; > activity | > timestamp| source| source_elapsed > ---+--+---+ > execute_cql3_query | > 23:48:46,498 | 127.0.0.1 | 0 > Parsing SELECT * FROM test WHERE id=1 LIMIT 1; | > 23:48:46,498 | 127.0.0.1 | 74 >Preparing statement | > 23:48:46,499 | 127.0.0.1 |253 > Executing single-partition query on test | > 23:48:46,499 | 127.0.0.1 |930 > Acquiring sstable references | > 23:48:46,499 | 127.0.0.1 |943 >Merging memtable tombstones | > 23:48:46,499 | 127.0.0.1 | 1032 >Key cache hit for sstable 3 | > 23:48:46,500 | 127.0.0.1 | 1160 >Seeking to partition beginning in data file | > 23:48:46,500 | 127.0.0.1 | 1173 >Key cache hit for sstable 2 | > 23:48:46,500 | 127.0.0.1 | 1889 >Seeking to partition beginning in data file | > 23:48:46,500 | 127.0.0.1 | 1901 >Key cache hit for sstable 1 | > 23:48:46,501 | 127.0.0.1 | 2373 >Seeking to partition beginning in data file | > 23:48:46,501 | 127.0.0.1 | 2384 > Skipped 0/3 non-slice-intersecting sstables, included 0 due to tombstones | > 23:48:46,501 | 127.0.0.1 | 2768 > Merging data from memtables and 3 sstables | > 23:48:46,501 | 127.0.0.1 | 2784 > Read 2 live and 0 tombstoned cells | > 23:48:46,501 | 127.0.0.1 | 2976 > Request complete | > 23:48:46,501 | 127.0.0.1 | 3551 > {code} > We can clearly see that C* hits 3 SSTables on disk instead of just one, > although it has the min/max column meta data to decide which SSTable contains > the most recent data. > Funny enough, if we add a clause on the clustering column to the select, this > time C* optimizes the read path: > {code} > cqlsh:test> SELECT * FROM test WHERE id=1 AND col > 25 LIMIT 1; > activity | > timestamp| source| source_elapsed > --
[jira] [Updated] (CASSANDRA-11034) consistent_reads_after_move_test is failing on trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-11034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-11034: Priority: Blocker (was: Major) > consistent_reads_after_move_test is failing on trunk > > > Key: CASSANDRA-11034 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11034 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Marcus Eriksson >Priority: Blocker > Labels: dtest > Fix For: 3.3 > > Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, > node3.log, node3_debug.log > > > The novnode dtest > {{consistent_bootstrap_test.TestBootstrapConsistency.consistent_reads_after_move_test}} > is failing on trunk. See an example failure > [here|http://cassci.datastax.com/job/trunk_novnode_dtest/274/testReport/consistent_bootstrap_test/TestBootstrapConsistency/consistent_reads_after_move_test/]. > On trunk I am getting an OOM of one of my C* nodes [node3], which is what > causes the nodetool move to fail. Logs are attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-11034) consistent_reads_after_move_test is failing on trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-11034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123246#comment-15123246 ] Marcus Eriksson edited comment on CASSANDRA-11034 at 1/29/16 9:30 AM: -- we move the token so that the RF=1 keyspace gets a 2 token range, meaning we can't split it in 3 parts and we go into an infinite loop in splitter https://github.com/krummas/cassandra/commits/marcuse/11034 tests: http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-11034-testall/ http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-11034-dtest/ setting to blocker for 3.3 to make sure we get it in was (Author: krummas): we move the token so that the RF=1 keyspace gets a 2 token range, meaning we can't split it in 3 parts and we go into an infinite loop in splitter https://github.com/krummas/cassandra/commits/marcuse/11034 tests: http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-11034-testall/ http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-11034-dtest/ > consistent_reads_after_move_test is failing on trunk > > > Key: CASSANDRA-11034 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11034 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Marcus Eriksson >Priority: Blocker > Labels: dtest > Fix For: 3.3 > > Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, > node3.log, node3_debug.log > > > The novnode dtest > {{consistent_bootstrap_test.TestBootstrapConsistency.consistent_reads_after_move_test}} > is failing on trunk. See an example failure > [here|http://cassci.datastax.com/job/trunk_novnode_dtest/274/testReport/consistent_bootstrap_test/TestBootstrapConsistency/consistent_reads_after_move_test/]. > On trunk I am getting an OOM of one of my C* nodes [node3], which is what > causes the nodetool move to fail. Logs are attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
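The failure mode Marcus describes can be modelled with a toy splitter (illustrative only, not Cassandra's actual {{Splitter}} code; my reading is that a range only 2 tokens wide cannot yield 3 non-empty integer-token parts, so code that keeps searching for that many split points never terminates):

```python
# Toy integer-token splitter. Capping the part count at the range width is
# one way to guarantee termination when parts > available tokens.

def split_points(lo, hi, parts):
    """Return at most parts-1 distinct integer split points inside (lo, hi)."""
    width = hi - lo
    parts = min(parts, width)  # guard: never ask for more parts than tokens
    return [lo + (width * i) // parts for i in range(1, parts)]

# A 30-token range splits into 3 parts as expected...
wide = split_points(0, 30, 3)
# ...while a 2-token range degrades to 2 parts instead of hanging.
narrow = split_points(0, 2, 3)
```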
[jira] [Commented] (CASSANDRA-10938) test_bulk_round_trip_blogposts is failing occasionally
[ https://issues.apache.org/jira/browse/CASSANDRA-10938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15123243#comment-15123243 ] Stefania commented on CASSANDRA-10938: -- At least for the first failure, it's probably because we don't write at consistency level ALL: we have increased the replication factor to 3 and COPY TO only queries one replica. > test_bulk_round_trip_blogposts is failing occasionally > -- > > Key: CASSANDRA-10938 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10938 > Project: Cassandra > Issue Type: Sub-task > Components: Tools > Reporter: Stefania > Assignee: Stefania > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 6452.nps, 6452.png, 7300.nps, 7300a.png, 7300b.png, > node1_debug.log, node2_debug.log, node3_debug.log, recording_127.0.0.1.jfr > > > We get timeouts occasionally that cause the number of records to be incorrect: > http://cassci.datastax.com/job/trunk_dtest/858/testReport/cqlsh_tests.cqlsh_copy_tests/CqlshCopyTest/test_bulk_round_trip_blogposts/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
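Stefania's diagnosis can be sketched with a toy replica model (illustrative only, not the dtest or driver code): with RF=3, a write acknowledged by fewer than all replicas may be invisible to a read that queries a single, possibly stale replica, which is effectively what COPY TO does at CL ONE.

```python
import random

def write(replicas, key, value, acks):
    """Toy write: only `acks` replicas receive the row immediately;
    the rest would converge later via read repair or anti-entropy."""
    for i in random.sample(range(len(replicas)), acks):
        replicas[i][key] = value

def read_cl_one(replicas, key):
    """Toy CL ONE read: query one randomly chosen replica."""
    return random.choice(replicas).get(key)

# Writing at CL ALL: every replica has the row, so no read can miss it.
full = [{}, {}, {}]  # RF = 3
write(full, "blog:1", "post", acks=3)

# Writing at CL ONE: two replicas are stale, so single-replica reads
# intermittently return nothing -- the occasional record-count mismatch.
partial = [{}, {}, {}]
write(partial, "blog:2", "post", acks=1)
misses = sum(read_cl_one(partial, "blog:2") is None for _ in range(300))
```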
[jira] [Comment Edited] (CASSANDRA-11034) consistent_reads_after_move_test is failing on trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-11034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15121906#comment-15121906 ] Marcus Eriksson edited comment on CASSANDRA-11034 at 1/29/16 9:25 AM: -- -This is actually caused by CASSANDRA-10887 we don't see the same pending ranges during the move as we did before, trying to figure out why- edit: nope, my error all along was (Author: krummas): This is actually caused by CASSANDRA-10887 we don't see the same pending ranges during the move as we did before, trying to figure out why > consistent_reads_after_move_test is failing on trunk > > > Key: CASSANDRA-11034 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11034 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Marcus Eriksson > Labels: dtest > Fix For: 3.x > > Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, > node3.log, node3_debug.log > > > The novnode dtest > {{consistent_bootstrap_test.TestBootstrapConsistency.consistent_reads_after_move_test}} > is failing on trunk. See an example failure > [here|http://cassci.datastax.com/job/trunk_novnode_dtest/274/testReport/consistent_bootstrap_test/TestBootstrapConsistency/consistent_reads_after_move_test/]. > On trunk I am getting an OOM of one of my C* nodes [node3], which is what > causes the nodetool move to fail. Logs are attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-11030) utf-8 characters incorrectly displayed/inserted on cqlsh on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122654#comment-15122654 ] Stefania edited comment on CASSANDRA-11030 at 1/29/16 8:31 AM: --- +1, thanks! Just one note: {{patch by pauloricardomg; reviewed by stef1927 for CASSANDRA-11030}}, I think we normally use the git users rather than the GH handles, so it should be {{patch by Paulo Motta; reviewed by Stefania Alborghetti for CASSANDRA-11030}}. -I leave it to you to move the ticket to ready for commit once you have indicated which patches automatically merge up.- Please see comment below. was (Author: stefania): +1, thanks! Just one note: {{patch by pauloricardomg; reviewed by stef1927 for CASSANDRA-11030}}, I think we normally use the git users rather than the GH handles, so it should be {{patch by Paulo Motta; reviewed by Stefania Alborghetti for CASSANDRA-11030}}. I leave it to you to move the ticket to ready for commit once you have indicated which patches automatically merge up. > utf-8 characters incorrectly displayed/inserted on cqlsh on Windows > --- > > Key: CASSANDRA-11030 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11030 > Project: Cassandra > Issue Type: Bug >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > Labels: cqlsh, windows > > {noformat} > C:\Users\Paulo\Repositories\cassandra [2.2-10948 +6 ~1 -0 !]> .\bin\cqlsh.bat > --encoding utf-8 > Connected to test at 127.0.0.1:9042. > [cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4] > Use HELP for help. > cqlsh> INSERT INTO bla.test (bla ) VALUES ('não') ; > cqlsh> select * from bla.test; > bla > - > n?o > (1 rows) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-11030) utf-8 characters incorrectly displayed/inserted on cqlsh on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15122942#comment-15122942 ] Stefania edited comment on CASSANDRA-11030 at 1/29/16 8:30 AM: --- I think we need to move the codec registration further up, I just stumbled into this:
{code}
C:\Users\stefania\git\cstar\cassandra>.\bin\cqlsh.bat --help
Traceback (most recent call last):
  File "C:\Users\stefania\git\cstar\cassandra\bin\\cqlsh.py", line 220, in <module>
    (options, arguments) = parser.parse_args(sys.argv[1:], values=optvalues)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1400, in parse_args
    stop = self._process_args(largs, rargs, values)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1440, in _process_args
    self._process_long_opt(rargs, values)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1515, in _process_long_opt
    option.process(opt, value, values, self)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 789, in process
    self.action, self.dest, opt, value, values, parser)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 811, in take_action
    parser.print_help()
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1670, in print_help
    file.write(self.format_help().encode(encoding, "replace"))
LookupError: unknown encoding: cp65001

C:\Users\stefania\git\cstar\cassandra>.\bin\cqlsh.bat --encoding utf-8 --help
Traceback (most recent call last):
  File "C:\Users\stefania\git\cstar\cassandra\bin\\cqlsh.py", line 220, in <module>
    (options, arguments) = parser.parse_args(sys.argv[1:], values=optvalues)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1400, in parse_args
    stop = self._process_args(largs, rargs, values)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1440, in _process_args
    self._process_long_opt(rargs, values)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1515, in _process_long_opt
    option.process(opt, value, values, self)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 789, in process
    self.action, self.dest, opt, value, values, parser)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 811, in take_action
    parser.print_help()
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1670, in print_help
    file.write(self.format_help().encode(encoding, "replace"))
LookupError: unknown encoding: cp65001
{code}
-Also, on Windows 7 I still get the "?" even with cp65001- It works on one Windows 7 laptop running python 2.7.10 but it fails on my newer laptop running 2.7.11. Which version are you running? I tried downgrading to 2.7.10 but to no avail: even if I get the older installers, it still says version 2.7.11.
was (Author: stefania): I think we need to move the codec registration further up, I just stumbled into this:
{code}
C:\Users\stefania\git\cstar\cassandra>.\bin\cqlsh.bat --help
Traceback (most recent call last):
  File "C:\Users\stefania\git\cstar\cassandra\bin\\cqlsh.py", line 220, in <module>
    (options, arguments) = parser.parse_args(sys.argv[1:], values=optvalues)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1400, in parse_args
    stop = self._process_args(largs, rargs, values)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1440, in _process_args
    self._process_long_opt(rargs, values)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1515, in _process_long_opt
    option.process(opt, value, values, self)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 789, in process
    self.action, self.dest, opt, value, values, parser)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 811, in take_action
    parser.print_help()
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1670, in print_help
    file.write(self.format_help().encode(encoding, "replace"))
LookupError: unknown encoding: cp65001

C:\Users\stefania\git\cstar\cassandra>.\bin\cqlsh.bat --encoding utf-8 --help
Traceback (most recent call last):
  File "C:\Users\stefania\git\cstar\cassandra\bin\\cqlsh.py", line 220, in <module>
    (options, arguments) = parser.parse_args(sys.argv[1:], values=optvalues)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1400, in parse_args
    stop = self._process_args(largs, rargs, values)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1440, in _process_args
    self._process_long_opt(rargs, values)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 1515, in _process_long_opt
    option.process(opt, value, values, self)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 789, in process
    self.action, self.dest, opt, value, values, parser)
  File "C:\Program Files\Python\2.7.11\lib\optparse.py", line 811, in take_action
    parser.print_help()
  File "C:\Pr
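The LookupError above is the Python < 3.3 issue mentioned earlier in the thread: old interpreters do not know "cp65001", the name Windows reports for its UTF-8 console code page. A minimal sketch of the alias-registration workaround discussed here (not the actual cqlsh patch):

```python
import codecs

def register_cp65001():
    """Register cp65001 (the Windows UTF-8 code page) as an alias for
    utf-8 on interpreters that don't know the name natively."""
    try:
        codecs.lookup("cp65001")  # already known: nothing to do
    except LookupError:
        codecs.register(
            lambda name: codecs.lookup("utf-8") if name == "cp65001" else None)

# Call this before any code (such as optparse printing help text) tries
# to encode output with the console's reported encoding.
register_cp65001()

# Encoding through the code-page name now behaves like plain UTF-8.
encoded = u"n\u00e3o".encode("cp65001")
```

Registering the alias unconditionally on py<3.3, as Paulo proposes, keeps the guard logic out of the hot path and is harmless on newer interpreters where the lookup already succeeds.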