[jira] [Created] (CASSANDRA-11041) Make it clear what timestamp_resolution is used for with DTCS

2016-01-19 Thread Marcus Eriksson (JIRA)
Marcus Eriksson created CASSANDRA-11041:
---

 Summary: Make it clear what timestamp_resolution is used for with 
DTCS
 Key: CASSANDRA-11041
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11041
 Project: Cassandra
  Issue Type: Improvement
Reporter: Marcus Eriksson


We have had a few cases lately where users misunderstand what 
timestamp_resolution does; we should:

* make the option not autocomplete in cqlsh
* update the documentation
* log a warning
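To illustrate the misunderstanding (a back-of-envelope sketch, not Cassandra's DTCS code): timestamp_resolution only tells DTCS what unit the cells' write timestamps are in, so it can compute their age; client drivers write microsecond timestamps by default. If a user sets MILLISECONDS while clients still write microseconds, every computed age is inflated by a factor of 1000 and the windowing goes wrong:

```python
# Hypothetical sketch of how a wrong timestamp_resolution skews DTCS age math.
# The function name (age_seconds) is illustrative, not Cassandra's actual code.

RESOLUTIONS = {"MICROSECONDS": 1_000_000, "MILLISECONDS": 1_000, "SECONDS": 1}

def age_seconds(write_ts, now_ts, resolution):
    """Age of data in seconds, interpreting raw timestamps in `resolution`."""
    return (now_ts - write_ts) / RESOLUTIONS[resolution]

# Clients wrote default microsecond timestamps; the data is one hour old:
now = 1_453_000_000_000_000
then = now - 3_600 * 1_000_000

correct = age_seconds(then, now, "MICROSECONDS")  # 3600 s, i.e. one hour
wrong = age_seconds(then, now, "MILLISECONDS")    # 1000x too large
```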



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9752) incremental repair dtest flaps on 2.2

2016-01-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15108165#comment-15108165
 ] 

Marcus Eriksson commented on CASSANDRA-9752:


Locally with vnodes it passes with CCM_MAX_HEAP_SIZE=2048M, but not with 1024M

> incremental repair dtest flaps on 2.2 
> --
>
> Key: CASSANDRA-9752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9752
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>
> {{incremental_repair_test.py:TestIncRepair.multiple_subsequent_repair_test}} 
> flaps on 2.2. It's hard to tell what failures are repair-specific, but there 
> are a few distinct failures I've seen recently:
> - [an NPE in 
> StorageService|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/143/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/]
> - [an NPE in 
> SSTableRewriter|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/135/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/].
>  I believe this is related to CASSANDRA-9730, but someone should confirm this.
> - [an on-disk data size that is too 
> large|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/133/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/]
> You can find the test itself [here on 
> GitHub|https://github.com/riptano/cassandra-dtest/blob/master/incremental_repair_test.py#L206]
>  and run it with the command
> {code}
> CASSANDRA_VERSION=git:trunk nosetests 
> incremental_repair_test.py:TestIncRepair.multiple_subsequent_repair_test
> {code}
> Assigning [~yukim], since you're the repair person, but feel free to reassign 
> to whoever's appropriate.





[jira] [Updated] (CASSANDRA-10909) NPE in ActiveRepairService

2016-01-19 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-10909:

Fix Version/s: (was: 3.3)
   (was: 3.0.3)
   (was: 2.1.13)
   (was: 2.2.5)
   3.x
   3.0.x
   2.2.x
   2.1.x

> NPE in ActiveRepairService 
> ---
>
> Key: CASSANDRA-10909
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10909
> Project: Cassandra
>  Issue Type: Bug
> Environment: cassandra-3.0.1.777
>Reporter: Eduard Tudenhoefner
>Assignee: Marcus Eriksson
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> NPE after one started multiple incremental repairs
> {code}
> INFO  [Thread-62] 2015-12-21 11:40:53,742  RepairRunnable.java:125 - Starting 
> repair command #1, repairing keyspace keyspace1 with repair options 
> (parallelism: parallel, primary range: false, incremental: true, job threads: 
> 1, ColumnFamilies: [], dataCenters: [], hosts: [], # of ranges: 2)
> INFO  [Thread-62] 2015-12-21 11:40:53,813  RepairSession.java:237 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] new session: will sync /10.200.177.32, 
> /10.200.177.33 on range [(10,-9223372036854775808]] for keyspace1.[counter1, 
> standard1]
> INFO  [Repair#1:1] 2015-12-21 11:40:53,853  RepairJob.java:100 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] requesting merkle trees for counter1 
> (to [/10.200.177.33, /10.200.177.32])
> INFO  [Repair#1:1] 2015-12-21 11:40:53,853  RepairJob.java:174 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] Requesting merkle trees for counter1 
> (to [/10.200.177.33, /10.200.177.32])
> INFO  [Thread-62] 2015-12-21 11:40:53,854  RepairSession.java:237 - [repair 
> #b1449fe0-a7d7-11e5-b568-f565b837eb0d] new session: will sync /10.200.177.32, 
> /10.200.177.31 on range [(0,10]] for keyspace1.[counter1, standard1]
> INFO  [AntiEntropyStage:1] 2015-12-21 11:40:53,896  RepairSession.java:181 - 
> [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Received merkle tree for 
> counter1 from /10.200.177.32
> INFO  [AntiEntropyStage:1] 2015-12-21 11:40:53,906  RepairSession.java:181 - 
> [repair #b13e3740-a7d7-11e5-b568-f565b837eb0d] Received merkle tree for 
> counter1 from /10.200.177.33
> INFO  [Repair#1:1] 2015-12-21 11:40:53,906  RepairJob.java:100 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] requesting merkle trees for standard1 
> (to [/10.200.177.33, /10.200.177.32])
> INFO  [Repair#1:1] 2015-12-21 11:40:53,906  RepairJob.java:174 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] Requesting merkle trees for standard1 
> (to [/10.200.177.33, /10.200.177.32])
> INFO  [RepairJobTask:2] 2015-12-21 11:40:53,910  SyncTask.java:66 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] Endpoints /10.200.177.33 and 
> /10.200.177.32 are consistent for counter1
> INFO  [RepairJobTask:1] 2015-12-21 11:40:53,910  RepairJob.java:145 - [repair 
> #b13e3740-a7d7-11e5-b568-f565b837eb0d] counter1 is fully synced
> INFO  [AntiEntropyStage:1] 2015-12-21 11:40:54,823  Validator.java:272 - 
> [repair #b17a2ed0-a7d7-11e5-ada8-8304f5629908] Sending completed merkle tree 
> to /10.200.177.33 for keyspace1.counter1
> ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,104  
> CompactionManager.java:1065 - Cannot start multiple repair sessions over the 
> same sstables
> ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,105  Validator.java:259 - 
> Failed creating a merkle tree for [repair 
> #b17a2ed0-a7d7-11e5-ada8-8304f5629908 on keyspace1/standard1, 
> [(10,-9223372036854775808]]], /10.200.177.33 (see log for details)
> ERROR [ValidationExecutor:3] 2015-12-21 11:40:55,110  
> CassandraDaemon.java:195 - Exception in thread 
> Thread[ValidationExecutor:3,1,main]
> java.lang.RuntimeException: Cannot start multiple repair sessions over the 
> same sstables
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1066)
>  ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:80)
>  ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
>   at 
> org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:679)
>  ~[cassandra-all-3.0.1.777.jar:3.0.1.777]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_40]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[na:1.8.0_40]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_40]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
> ERROR [AntiEntropyStage:1] 2015-12-21 11:40:55,174  
> RepairMessageVerbHandler.java:161 - Got error, 

[jira] [Commented] (CASSANDRA-10446) Run repair with down replicas

2016-01-19 Thread Anuj Wadehra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15108146#comment-15108146
 ] 

Anuj Wadehra commented on CASSANDRA-10446:
--

Whether it's a bug or an improvement is debatable. The intent of suggesting 
that we increase the priority and change the type was to ensure that it gets 
due attention. I think that by giving a detailed scenario, I have tried to 
explain the criticality of the issue. No, I was not interested in working on 
this.

> Run repair with down replicas
> -
>
> Key: CASSANDRA-10446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Priority: Minor
> Fix For: 3.x
>
>
> We should have an option of running repair when replicas are down. We can 
> call it -force.





[jira] [Commented] (CASSANDRA-10446) Run repair with down replicas

2016-01-19 Thread sankalp kohli (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15108124#comment-15108124
 ] 

sankalp kohli commented on CASSANDRA-10446:
---

This is an improvement and not a bug. It seems like you are interested in 
working on it... should I assign it to you?

> Run repair with down replicas
> -
>
> Key: CASSANDRA-10446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Priority: Minor
> Fix For: 3.x
>
>
> We should have an option of running repair when replicas are down. We can 
> call it -force.





[jira] [Comment Edited] (CASSANDRA-11040) Encrypted hints

2016-01-19 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15108053#comment-15108053
 ] 

Jason Brown edited comment on CASSANDRA-11040 at 1/20/16 5:53 AM:
--

Code is available 
[here|https://github.com/apache/cassandra/compare/trunk...jasobrown:11040]

This code piggybacks on the existing hints compression code path, as well as 
the existing file-level encryption infrastructure. Thus the code fits in 
rather smoothly; only a little refactoring was necessary to make things lay 
out nicely.

cassci tests are running as I type this; I will update the ticket when they 
complete.
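The "piggyback on the compression code path" idea can be sketched generically (a hypothetical illustration, not Cassandra's hints code): a framed-block writer takes a pluggable transform, so compression and encryption share one code path. Here zlib stands in for a real cipher:

```python
# Hypothetical sketch: one length-prefixed block writer shared by compression
# and encryption, mirroring the "piggyback on the compression path" idea.
# zlib stands in for an actual cipher; Cassandra's real classes differ.
import io
import struct
import zlib

def write_blocks(out, data, transform, block_size=4096):
    """Split `data` into blocks, apply `transform`, prefix each with its length."""
    for i in range(0, len(data), block_size):
        block = transform(data[i:i + block_size])
        out.write(struct.pack(">I", len(block)))
        out.write(block)

def read_blocks(inp, inverse):
    """Read length-prefixed blocks and undo the transform."""
    chunks = []
    while True:
        header = inp.read(4)
        if not header:
            break
        (length,) = struct.unpack(">I", header)
        chunks.append(inverse(inp.read(length)))
    return b"".join(chunks)

payload = b"hinted mutation bytes " * 500

# The same writer serves both use cases: swap zlib.compress for an
# encrypting transform and the surrounding code path is unchanged.
buf = io.BytesIO()
write_blocks(buf, payload, zlib.compress)
buf.seek(0)
roundtrip = read_blocks(buf, zlib.decompress)
```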


was (Author: jasobrown):
Code is available 
[here|https://github.com/apache/cassandra/compare/trunk...jasobrown:11040]

This code piggybacks on the existing hints compression code path, as well as 
the existing file-level encryption infrastructure. Thus the code fits in 
rather smoothly; only a little refactoring was necessary to make things lay 
out nicely.

> Encrypted hints
> ---
>
> Key: CASSANDRA-11040
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11040
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: encryption, hints, security
>
> When users enable system-wide encryption (which includes commit logs, 
> CASSANDRA-6018), we need to encrypt other assets, as well. Hence, let's 
> encrypt hints.





[jira] [Commented] (CASSANDRA-11040) Encrypted hints

2016-01-19 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15108053#comment-15108053
 ] 

Jason Brown commented on CASSANDRA-11040:
-

Code is available 
[here|https://github.com/apache/cassandra/compare/trunk...jasobrown:11040]

This code piggybacks on the existing hints compression code path, as well as 
the existing file-level encryption infrastructure. Thus the code fits in 
rather smoothly; only a little refactoring was necessary to make things lay 
out nicely.

> Encrypted hints
> ---
>
> Key: CASSANDRA-11040
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11040
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jason Brown
>Assignee: Jason Brown
>Priority: Minor
>  Labels: encryption, hints, security
>
> When users enable system-wide encryption (which includes commit logs, 
> CASSANDRA-6018), we need to encrypt other assets, as well. Hence, let's 
> encrypt hints.





[jira] [Updated] (CASSANDRA-8103) Secondary Indices for Static Columns

2016-01-19 Thread Taiyuan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taiyuan Zhang updated CASSANDRA-8103:
-
Attachment: (was: 8103-v3.patch)

> Secondary Indices for Static Columns
> 
>
> Key: CASSANDRA-8103
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8103
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Ron Cohen
>Assignee: Taiyuan Zhang
> Fix For: 3.x
>
> Attachments: 8103-v4.patch, 8103.patch, in_progress.patch, 
> smoke-test.cql
>
>
> We should add secondary index support for static columns.  





[jira] [Created] (CASSANDRA-11040) Encrypted hints

2016-01-19 Thread Jason Brown (JIRA)
Jason Brown created CASSANDRA-11040:
---

 Summary: Encrypted hints
 Key: CASSANDRA-11040
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11040
 Project: Cassandra
  Issue Type: Improvement
Reporter: Jason Brown
Assignee: Jason Brown
Priority: Minor


When users enable system-wide encryption (which includes commit logs, 
CASSANDRA-6018), we need to encrypt other assets, as well. Hence, let's encrypt 
hints.






[jira] [Comment Edited] (CASSANDRA-10446) Run repair with down replicas

2016-01-19 Thread Anuj Wadehra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107876#comment-15107876
 ] 

Anuj Wadehra edited comment on CASSANDRA-10446 at 1/20/16 3:08 AM:
---

I think this is an important issue. We should increase the priority and change 
the type from Improvement to Bug so that it gets due attention.

Consider the following scenario and flow of events, which demonstrate the 
importance of this issue:

Scenario: I have a 20-node cluster, RF=5, Read/Write Quorum, gc grace 
period = 20 days. I think that my Cassandra cluster is fault tolerant and can 
afford 2 node failures.

Suddenly, one node goes down due to some hardware issue. The failed node 
prevents repair on many nodes in the cluster, as it held approximately a 5/20 
share of the total data: 1/20 that it owns and 4/20 that it stores as replicas 
of data owned by other nodes. Now it is 10 days since the node went down, most 
of the nodes are not being repaired, and it is DECISION time for me. I am not 
sure how soon the issue will be fixed, maybe in the next 2 days, i.e. 8 days 
before the gc grace period ends, so I shouldn't remove the node early and add 
it back, as that would cause significant and unnecessary streaming due to 
token re-arrangement. At the same time, if I don't remove the failed node now, 
i.e. 10 days after failure (well before gc grace), and wait for the issue to 
be resolved, my entire system's health is in question and it becomes a panic 
situation, as most of the data didn't get repaired in the last 10 days and gc 
grace is approaching. I need sufficient time to repair all nodes before the gc 
grace period ends.

What looked like a fault tolerant Cassandra cluster that can easily afford 2 
node failures will require urgent attention and manual decision making each 
time a single node goes down, just as it did in the above scenario.

If some replicas are down, we should allow repair to proceed with the 
remaining replicas. If a failed node comes back up before the gc grace period 
ends, we would run repair to fix inconsistencies; otherwise, we would discard 
the failed node's data and bootstrap it. I think that would make a really 
robust, fault tolerant system.
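The 5/20 arithmetic above generalizes: with N nodes and replication factor RF (and roughly uniform token ownership, an assumption for this illustration), a single down node participates in about RF/N of the data — its own 1/N of the ranges plus RF-1 further ranges it stores as a replica — so that fraction of the cluster's repairs is blocked:

```python
# Back-of-envelope check of the 5/20 figure in the scenario above,
# assuming uniform token ownership (illustrative only).

def blocked_share(n_nodes, rf):
    """Fraction of the cluster's data whose repair one down replica blocks."""
    own = 1 / n_nodes                 # ranges the node owns itself
    replicated = (rf - 1) / n_nodes   # ranges it stores as a replica
    return own + replicated           # equals rf / n_nodes

share = blocked_share(20, 5)  # the 20-node, RF=5 scenario: 5/20 of the data
```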




was (Author: eanujwa):
I think this is an issue with the way we handle the scenario of a downed 
replica in repairs. We should increase the priority and change the type from 
Improvement to Bug so that it gets due attention.

Consider the following scenario and flow of events, which demonstrate the 
importance of this issue:

Scenario: I have a 20-node cluster, RF=5, Read/Write Quorum, gc grace 
period = 20 days. I think that my Cassandra cluster is fault tolerant and can 
afford 2 node failures.

Suddenly, one node goes down due to some hardware issue. The failed node 
prevents repair on many nodes in the cluster, as it held approximately a 5/20 
share of the total data: 1/20 that it owns and 4/20 that it stores as replicas 
of data owned by other nodes. Now it is 10 days since the node went down, most 
of the nodes are not being repaired, and it is DECISION time for me. I am not 
sure how soon the issue will be fixed, maybe in the next 2 days, i.e. 8 days 
before the gc grace period ends, so I shouldn't remove the node early and add 
it back, as that would cause significant and unnecessary streaming due to 
token re-arrangement. At the same time, if I don't remove the failed node now, 
i.e. 10 days after failure (well before gc grace), and wait for the issue to 
be resolved, my entire system's health is in question and it becomes a panic 
situation, as most of the data didn't get repaired in the last 10 days and gc 
grace is approaching. I need sufficient time to repair all nodes before the gc 
grace period ends.

What looked like a fault tolerant Cassandra cluster that can easily afford 2 
node failures will require urgent attention and manual decision making each 
time a single node goes down, just as it did in the above scenario.

If some replicas are down, we should allow repair to proceed with the 
remaining replicas. If a failed node comes back up before the gc grace period 
ends, we would run repair to fix inconsistencies; otherwise, we would discard 
the failed node's data and bootstrap it. I think that would make a really 
robust, fault tolerant system.



> Run repair with down replicas
> -
>
> Key: CASSANDRA-10446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Priority: Minor
> Fix For: 3.x
>
>
> We should have an option of running repair when replicas are down. We can 
> call it -force.





[jira] [Comment Edited] (CASSANDRA-10446) Run repair with down replicas

2016-01-19 Thread Anuj Wadehra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107876#comment-15107876
 ] 

Anuj Wadehra edited comment on CASSANDRA-10446 at 1/20/16 3:06 AM:
---

I think this is an issue with the way we handle the scenario of a downed 
replica in repairs. We should increase the priority and change the type from 
Improvement to Bug so that it gets due attention.

Consider the following scenario and flow of events, which demonstrate the 
importance of this issue:

Scenario: I have a 20-node cluster, RF=5, Read/Write Quorum, gc grace 
period = 20 days. I think that my Cassandra cluster is fault tolerant and can 
afford 2 node failures.

Suddenly, one node goes down due to some hardware issue. The failed node 
prevents repair on many nodes in the cluster, as it held approximately a 5/20 
share of the total data: 1/20 that it owns and 4/20 that it stores as replicas 
of data owned by other nodes. Now it is 10 days since the node went down, most 
of the nodes are not being repaired, and it is DECISION time for me. I am not 
sure how soon the issue will be fixed, maybe in the next 2 days, i.e. 8 days 
before the gc grace period ends, so I shouldn't remove the node early and add 
it back, as that would cause significant and unnecessary streaming due to 
token re-arrangement. At the same time, if I don't remove the failed node now, 
i.e. 10 days after failure (well before gc grace), and wait for the issue to 
be resolved, my entire system's health is in question and it becomes a panic 
situation, as most of the data didn't get repaired in the last 10 days and gc 
grace is approaching. I need sufficient time to repair all nodes before the gc 
grace period ends.

What looked like a fault tolerant Cassandra cluster that can easily afford 2 
node failures will require urgent attention and manual decision making each 
time a single node goes down, just as it did in the above scenario.

If some replicas are down, we should allow repair to proceed with the 
remaining replicas. If a failed node comes back up before the gc grace period 
ends, we would run repair to fix inconsistencies; otherwise, we would discard 
the failed node's data and bootstrap it. I think that would make a really 
robust, fault tolerant system.




was (Author: eanujwa):
I think this is an issue with the way we handle the "downed replica" scenario 
in repairs. We should increase the priority and change the type from 
Improvement to Bug.

Consider the following scenario and flow of events, which demonstrate the 
importance of this issue:

Scenario: I have a 20-node cluster, RF=5, Read/Write Quorum, gc grace 
period = 20 days. My cluster is fault tolerant and can afford 2 node failures.

Suddenly, one node goes down due to some hardware issue. The failed node 
prevents repair on many nodes in the cluster, as it holds approximately a 5/20 
share of the total data: 1/20 that it owns and 4/20 that it stores as replicas 
of data owned by other nodes. Now it is 10 days since the node went down, most 
of the nodes are not being repaired, and it is decision time. I am not sure 
how soon the issue will be fixed, maybe in the next 2 days, i.e. 8 days before 
gc grace, so I shouldn't remove the node early and add it back, as that would 
cause significant and unnecessary streaming due to token re-arrangement. At 
the same time, if I don't remove the failed node now, i.e. at 10 days (well 
before gc grace), my entire system's health is in question and it becomes a 
panic situation, as most of the data didn't get repaired in the last 10 days 
and gc grace is approaching. I need sufficient time to repair all nodes.

What looked like a fault tolerant Cassandra cluster that can easily afford 2 
node failures required urgent attention and manual decision making when a 
single node went down. If some replicas are down, we should allow repair to 
proceed with the remaining replicas. If a failed node comes back up before the 
gc grace period ends, we would run repair to fix inconsistencies; otherwise we 
would discard its data and bootstrap. I think that would make a really robust, 
fault tolerant system.



> Run repair with down replicas
> -
>
> Key: CASSANDRA-10446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Priority: Minor
> Fix For: 3.x
>
>
> We should have an option of running repair when replicas are down. We can 
> call it -force.





[jira] [Commented] (CASSANDRA-10446) Run repair with down replicas

2016-01-19 Thread Anuj Wadehra (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107876#comment-15107876
 ] 

Anuj Wadehra commented on CASSANDRA-10446:
--

I think this is an issue with the way we handle the "downed replica" scenario 
in repairs. We should increase the priority and change the type from 
Improvement to Bug.

Consider the following scenario and flow of events, which demonstrate the 
importance of this issue:

Scenario: I have a 20-node cluster, RF=5, Read/Write Quorum, gc grace 
period = 20 days. My cluster is fault tolerant and can afford 2 node failures.

Suddenly, one node goes down due to some hardware issue. The failed node 
prevents repair on many nodes in the cluster, as it holds approximately a 5/20 
share of the total data: 1/20 that it owns and 4/20 that it stores as replicas 
of data owned by other nodes. Now it is 10 days since the node went down, most 
of the nodes are not being repaired, and it is decision time. I am not sure 
how soon the issue will be fixed, maybe in the next 2 days, i.e. 8 days before 
gc grace, so I shouldn't remove the node early and add it back, as that would 
cause significant and unnecessary streaming due to token re-arrangement. At 
the same time, if I don't remove the failed node now, i.e. at 10 days (well 
before gc grace), my entire system's health is in question and it becomes a 
panic situation, as most of the data didn't get repaired in the last 10 days 
and gc grace is approaching. I need sufficient time to repair all nodes.

What looked like a fault tolerant Cassandra cluster that can easily afford 2 
node failures required urgent attention and manual decision making when a 
single node went down. If some replicas are down, we should allow repair to 
proceed with the remaining replicas. If a failed node comes back up before the 
gc grace period ends, we would run repair to fix inconsistencies; otherwise we 
would discard its data and bootstrap. I think that would make a really robust, 
fault tolerant system.



> Run repair with down replicas
> -
>
> Key: CASSANDRA-10446
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10446
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: sankalp kohli
>Priority: Minor
> Fix For: 3.x
>
>
> We should have an option of running repair when replicas are down. We can 
> call it -force.





[jira] [Updated] (CASSANDRA-11030) utf-8 characters incorrectly displayed/inserted on cqlsh on Windows

2016-01-19 Thread Stefania (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefania updated CASSANDRA-11030:
-
Reviewer: Stefania

> utf-8 characters incorrectly displayed/inserted on cqlsh on Windows
> ---
>
> Key: CASSANDRA-11030
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11030
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: cqlsh, windows
>
> {noformat}
> C:\Users\Paulo\Repositories\cassandra [2.2-10948 +6 ~1 -0 !]> .\bin\cqlsh.bat 
> --encoding utf-8
> Connected to test at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4]
> Use HELP for help.
> cqlsh> INSERT INTO bla.test (bla ) VALUES  ('não') ;
> cqlsh> select * from bla.test;
>  bla
> -
>  n?o
> (1 rows)
> {noformat}





[jira] [Commented] (CASSANDRA-11030) utf-8 characters incorrectly displayed/inserted on cqlsh on Windows

2016-01-19 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107797#comment-15107797
 ] 

Stefania commented on CASSANDRA-11030:
--

Looking at the 2.2 patch:

* At [line 
737|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-11030#diff-1cce67f7d76864f07aaf4d986d6fc051R737]
 why use {{sys.stdout.encoding}} rather than {{self.encoding}} or {{encoding}}?

* At [line 
767|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-11030#diff-1cce67f7d76864f07aaf4d986d6fc051R767]
 is the list of encodings complete, and can they ever be specified in upper 
case?

* Can you rebase on trunk to make sure the [pep8 compliance 
failure|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-11030-dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_pep8_compliance/]
 is not related?

Otherwise it LGTM. 

I also only have Windows 7; [~JoshuaMcKenzie], are you running on Windows 10 
by any chance?
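On the upper-case question: Python's codec machinery already normalizes encoding names, so a cqlsh-style check can canonicalize the user-supplied name before comparing it against a whitelist (a sketch of the idea, not the patch's actual code; the whitelist contents are assumed):

```python
# Sketch: canonicalize a user-supplied encoding name before comparison,
# so 'UTF-8', 'UTF8' and 'utf-8' are all treated alike.
# Not the patch's actual code; the whitelist below is an assumption.
import codecs

def canonical_encoding(name):
    """Return the codec's canonical name, e.g. 'UTF8' -> 'utf-8'."""
    return codecs.lookup(name).name

ASSUMED_UNICODE_SAFE = {"utf-8", "utf-16", "utf-16-le", "utf-16-be", "utf-32"}

def is_unicode_safe(name):
    """True if `name` resolves to an encoding in the (assumed) safe list."""
    return canonical_encoding(name) in ASSUMED_UNICODE_SAFE
```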

> utf-8 characters incorrectly displayed/inserted on cqlsh on Windows
> ---
>
> Key: CASSANDRA-11030
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11030
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: cqlsh, windows
>
> {noformat}
> C:\Users\Paulo\Repositories\cassandra [2.2-10948 +6 ~1 -0 !]> .\bin\cqlsh.bat 
> --encoding utf-8
> Connected to test at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4]
> Use HELP for help.
> cqlsh> INSERT INTO bla.test (bla ) VALUES  ('não') ;
> cqlsh> select * from bla.test;
>  bla
> -
>  n?o
> (1 rows)
> {noformat}





[jira] [Created] (CASSANDRA-11039) SegFault in Cassandra

2016-01-19 Thread Nimi Wariboko Jr. (JIRA)
Nimi Wariboko Jr. created CASSANDRA-11039:
-

 Summary: SegFault in Cassandra
 Key: CASSANDRA-11039
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11039
 Project: Cassandra
  Issue Type: Bug
 Environment: Kernel: Linux cass6 3.13.0-44-generic #73~precise1-Ubuntu 
SMP Wed Dec 17 00:39:15 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
OS: Ubuntu 12.04.5 LTS (GNU/Linux 3.13.0-44-generic x86_64)
JVM: 
  java version "1.8.0_66"
  Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
  Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)

Reporter: Nimi Wariboko Jr.
 Fix For: 3.2


This occurred under quite heavy load.

Attached are the dump that Cassandra produced and my cassandra.yaml:

hs_err_1453233896.log:
https://s3-us-west-1.amazonaws.com/channelmeter-misc/hs_err_1453233896.log

cassandra.yaml
https://s3-us-west-1.amazonaws.com/channelmeter-misc/cassandra.yaml

Process Options:
{code}
java -ea -Xms16G -Xmx16G -Xss256k -XX:+UseG1GC 
-XX:G1RSetUpdatingPauseTimePercent=5 -XX:MaxGCPauseMillis=500 
-XX:InitiatingHeapOccupancyPercent=70 -XX:+AlwaysPreTouch -XX:-UseBiasedLocking 
-XX:StringTableSize=103 -XX:+UseTLAB -XX:+ResizeTLAB 
-XX:+PerfDisableSharedMem 
-XX:CompileCommandFile=/etc/cassandra/hotspot_compiler 
-javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar -XX:+UseThreadPriorities 
-XX:ThreadPriorityPolicy=42 -XX:+HeapDumpOnOutOfMemoryError 
-Djava.net.preferIPv4Stack=true -Dcassandra.jmx.local.port=7199 
-XX:+DisableExplicitGC -Djava.library.path=/usr/share/cassandra/lib/sigar-bin 
-Dcassandra.metricsReporterConfigFile=/etc/cassandra-metrics-graphite.yaml 
-Dcassandra.libjemalloc=- -Dlogback.configurationFile=logback.xml 
-Dcassandra.logdir=/var/log/cassandra -Dcassandra.storagedir=/var/lib/cassandra 
-Dcassandra-pidfile=/var/run/cassandra/cassandra.pid -cp 
/etc/cassandra:/usr/share/cassandra/lib/ST4-4.0.8.jar:/usr/share/cassandra/lib/airline-0.6.jar:/usr/share/cassandra/lib/antlr-runtime-3.5.2.jar:/usr/share/cassandra/lib/asm-5.0.4.jar:/usr/share/cassandra/lib/cassandra-driver-core-3.0.0-beta1-bb1bce4-SNAPSHOT-shaded.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/commons-math3-3.2.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.4.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/ecj-4.4.2.jar:/usr/share/cassandra/lib/guava-18.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.0.6.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.3.0.jar:/usr/share/cassandra/lib/javax.inject.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jcl-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/jna-4.0.0.jar:/usr/share/cassandra/lib/joda-time-2.4.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.2.jar:/usr/share/cassandra/lib/log4j-over-slf4j-1.7.7.jar:/usr/share/cassandra/lib/logback-classic-1.1.3.jar:/usr/share/cassandra/lib/logback-core-1.1.3.jar:/usr/share/cassandra/lib/lz4-1.3.0.jar:/usr/share/cassandra/lib/metrics-core-3.1.0.jar:/usr/share/cassandra/lib/metrics-core-3.1.2.jar:/usr/share/cassandra/lib/metrics-graphite-2.2.0.jar:/usr/share/cassandra/lib/metrics-graphite-3.1.2.jar:/usr/share/cassandra/lib/metrics-logback-3.1.0.jar:/usr/share/cassandra/lib/netty-all-4.0.23.Final.jar:/usr/share/cassandra/lib/ohc-core-0.4.2.jar:/usr/share/cassandra/lib/ohc-core-j8-0.4.2.jar:/usr/share/cassandra/lib/reporter-config-base-3.0.0.jar:/usr/share/cassandra/lib/reporter-config3-3.0.0.jar:/usr/share/cassandra/lib/sigar-1.6.4.jar:/usr/share/cassandra/lib/slf4j-api-1.7.7.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.1.1.7.jar:/usr/share/cassandra/lib/stream-2.5.2.jar:/usr/share/cassandra/lib/thrift-server-0.3.7.jar:/usr/share/cassandra/apache-cassandra-3.2.jar:/usr/share/cassandra/apache-cassandra-thrift-3.2.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/stress.jar:
 -XX:HeapDumpPath=/var/lib/cassandra/java_1453248542.hprof 
-XX:ErrorFile=/var/lib/cassandra/hs_err_1453248542.log 
org.apache.cassandra.service.CassandraDaemon
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11038) Is node being restarted treated as node joining?

2016-01-19 Thread cheng ren (JIRA)
cheng ren created CASSANDRA-11038:
-

 Summary: Is node being restarted treated as node joining?
 Key: CASSANDRA-11038
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11038
 Project: Cassandra
  Issue Type: Bug
Reporter: cheng ren


Hi, 
What we found recently is that every time we restart a node, all other nodes in 
the cluster treat the restarted node as a new node joining and issue a 
node-joining notification to clients. We have traced the code path that is hit 
when a peer node detects a restarted node:

src/java/org/apache/cassandra/gms/Gossiper.java
{code}
private void handleMajorStateChange(InetAddress ep, EndpointState epState)
{
    if (!isDeadState(epState))
    {
        if (endpointStateMap.get(ep) != null)
            logger.info("Node {} has restarted, now UP", ep);
        else
            logger.info("Node {} is now part of the cluster", ep);
    }
    if (logger.isTraceEnabled())
        logger.trace("Adding endpoint state for " + ep);
    endpointStateMap.put(ep, epState);

    // the node restarted: it is up to the subscriber to take whatever action is necessary
    for (IEndpointStateChangeSubscriber subscriber : subscribers)
        subscriber.onRestart(ep, epState);

    if (!isDeadState(epState))
        markAlive(ep, epState);
    else
    {
        logger.debug("Not marking " + ep + " alive due to dead state");
        markDead(ep, epState);
    }
    for (IEndpointStateChangeSubscriber subscriber : subscribers)
        subscriber.onJoin(ep, epState);
}

{code}

subscriber.onJoin(ep, epState) ends up calling onJoinCluster in Server.java:

{code}
src/java/org/apache/cassandra/transport/Server.java
public void onJoinCluster(InetAddress endpoint)
{
    server.connectionTracker.send(Event.TopologyChange.newNode(getRpcAddress(endpoint), server.socket.getPort()));
}
{code}

We have a full trace of the code path; some intermediate function calls are 
skipped here for brevity.

Upon receiving the node-joining notification, clients scan the system peers 
table to fetch the latest topology information. Since we have tens of thousands 
of client connections, scans from all of them put an enormous load on our 
cluster.

Although in newer driver versions the client skips fetching the peers table if 
the node already exists in its local metadata, we are still curious why a 
restarted node is handled as a joining node on the server side. Did we hit a 
bug, or is this the intended behavior? Our Java driver version is 1.0.4 and our 
Cassandra version is 2.0.12.

Thanks!






[jira] [Commented] (CASSANDRA-11037) cqlsh bash script cannot be called through symlink

2016-01-19 Thread Benjamin Zarzycki (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107740#comment-15107740
 ] 

Benjamin Zarzycki commented on CASSANDRA-11037:
---

I'm new to JIRA and contributing to Apache Cassandra, so if I did something 
wrong, please let me know.

> cqlsh bash script cannot be called through symlink
> --
>
> Key: CASSANDRA-11037
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11037
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: BASH
>Reporter: Benjamin Zarzycki
>Priority: Trivial
>  Labels: easyfix, newbie
> Fix For: 2.2.0
>
> Attachments: 
> 0001-Allows-bash-script-to-be-executed-through-symlinks.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> cqlsh bash script cannot be called through a symlink





[jira] [Commented] (CASSANDRA-11037) cqlsh bash script cannot be called through symlink

2016-01-19 Thread Benjamin Zarzycki (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107738#comment-15107738
 ] 

Benjamin Zarzycki commented on CASSANDRA-11037:
---

I set up a branch on GitHub with the fix: 
https://github.com/kf6nux/cassandra/tree/11037-2.2

> cqlsh bash script cannot be called through symlink
> --
>
> Key: CASSANDRA-11037
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11037
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: BASH
>Reporter: Benjamin Zarzycki
>Priority: Trivial
> Attachments: 
> 0001-Allows-bash-script-to-be-executed-through-symlinks.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> cqlsh bash script cannot be called through a symlink





[jira] [Updated] (CASSANDRA-11037) cqlsh bash script cannot be called through symlink

2016-01-19 Thread Benjamin Zarzycki (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Zarzycki updated CASSANDRA-11037:
--
Reviewer:   (was: Benjamin Zarzycki)

> cqlsh bash script cannot be called through symlink
> --
>
> Key: CASSANDRA-11037
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11037
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: BASH
>Reporter: Benjamin Zarzycki
>Priority: Trivial
> Attachments: 
> 0001-Allows-bash-script-to-be-executed-through-symlinks.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> cqlsh bash script cannot be called through a symlink





[jira] [Updated] (CASSANDRA-11037) cqlsh bash script cannot be called through symlink

2016-01-19 Thread Benjamin Zarzycki (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Zarzycki updated CASSANDRA-11037:
--
Reviewer: Benjamin Zarzycki

> cqlsh bash script cannot be called through symlink
> --
>
> Key: CASSANDRA-11037
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11037
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: BASH
>Reporter: Benjamin Zarzycki
>Priority: Trivial
> Attachments: 
> 0001-Allows-bash-script-to-be-executed-through-symlinks.patch
>
>   Original Estimate: 0h
>  Remaining Estimate: 0h
>
> cqlsh bash script cannot be called through a symlink





[jira] [Created] (CASSANDRA-11037) cqlsh bash script cannot be called through symlink

2016-01-19 Thread Benjamin Zarzycki (JIRA)
Benjamin Zarzycki created CASSANDRA-11037:
-

 Summary: cqlsh bash script cannot be called through symlink
 Key: CASSANDRA-11037
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11037
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: BASH
Reporter: Benjamin Zarzycki
Priority: Trivial
 Attachments: 
0001-Allows-bash-script-to-be-executed-through-symlinks.patch

cqlsh bash script cannot be called through a symlink





[jira] [Commented] (CASSANDRA-9949) maxPurgeableTimestamp needs to check memtables too

2016-01-19 Thread Stefania (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107722#comment-15107722
 ] 

Stefania commented on CASSANDRA-9949:
-

Thanks, starting with the 3.0+ patch then.

> maxPurgeableTimestamp needs to check memtables too
> --
>
> Key: CASSANDRA-9949
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9949
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Ellis
>Assignee: Stefania
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> overlapIterator/maxPurgeableTimestamp don't include the memtables, so a 
> very-out-of-order write could be ignored





[jira] [Commented] (CASSANDRA-11036) Failing to format MAP type where the key is UDT and the value is another MAP

2016-01-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107675#comment-15107675
 ] 

Cédric Hernalsteens commented on CASSANDRA-11036:
-

And now I can't reproduce. Could someone confirm that this should be supported?

> Failing to format MAP type where the key is UDT and the value is another MAP
> 
>
> Key: CASSANDRA-11036
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11036
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
> Environment: [cqlsh 5.0.1 | Cassandra 2.2.3 | CQL spec 3.3.1 | Native 
> protocol v4]
>Reporter: Cédric Hernalsteens
>Priority: Minor
> Fix For: 2.2.3
>
>
> A column defined as
> MAP<FROZEN<cycle>, FROZEN<MAP<TEXT, TEXT>>> STATIC
> with the UDT 'cycle' being
> CREATE TYPE kepler.cycle (
>   machine   TEXT,
>   injection INT,
>   cyclestampTIMESTAMP
> );
> generates the following error in cqlsh:
> Failed to format value OrderedMapSerializedKey([(kepler_cycle(machine=u'PS', 
> injection=1), OrderedMapSerializedKey([(u'selector', u'CPS.USER.MD8'), 
> (u'seqnumber', u'21')]))]) : "kepler_cycle(machine=u'PS', injection=1)"
> (I left my actual data in there; I doubt that's sensitive).
> The row shows
> OrderedMapSerializedKey([(kepler_cycle(machine=u'PS', injection=1), 
> OrderedMapSerializedKey([(u'selector', u'CPS.USER.MD8'), (u'seqnumber', 
> u'22')]))])





[jira] [Created] (CASSANDRA-11036) Failing to format MAP type where the key is UDT and the value is another MAP

2016-01-19 Thread JIRA
Cédric Hernalsteens created CASSANDRA-11036:
---

 Summary: Failing to format MAP type where the key is UDT and the 
value is another MAP
 Key: CASSANDRA-11036
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11036
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
 Environment: [cqlsh 5.0.1 | Cassandra 2.2.3 | CQL spec 3.3.1 | Native 
protocol v4]
Reporter: Cédric Hernalsteens
Priority: Minor
 Fix For: 2.2.3


A column defined as

MAP<FROZEN<cycle>, FROZEN<MAP<TEXT, TEXT>>> STATIC

with the UDT 'cycle' being

CREATE TYPE kepler.cycle (
machine   TEXT,
injection INT,
cyclestampTIMESTAMP
);

generates the following error in cqlsh:


Failed to format value OrderedMapSerializedKey([(kepler_cycle(machine=u'PS', 
injection=1), OrderedMapSerializedKey([(u'selector', u'CPS.USER.MD8'), 
(u'seqnumber', u'21')]))]) : "kepler_cycle(machine=u'PS', injection=1)"

(I left my actual data in there; I doubt that's sensitive).

The row shows

OrderedMapSerializedKey([(kepler_cycle(machine=u'PS', injection=1), 
OrderedMapSerializedKey([(u'selector', u'CPS.USER.MD8'), (u'seqnumber', 
u'22')]))])





[jira] [Commented] (CASSANDRA-10707) Add support for Group By to Select statement

2016-01-19 Thread Brian Hess (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107583#comment-15107583
 ] 

 Brian Hess commented on CASSANDRA-10707:
-

I think that supporting grouping by clustering column (or perhaps even a 
regular column) with a partition key predicate is a good idea.  

I think that supporting grouping by partition key (either in part or in toto) 
is a bad idea. In that query, all the data in the cluster would stream to the 
coordinator, which would then be responsible for doing a *lot* of processing. 
In other distributed systems that do GROUP BY queries, the groups end up being 
split among the nodes in the system, and each node is responsible for rolling 
up the data for the groups it was assigned. This is a common way to get all the 
nodes in the system to help with a pretty significant computation, with the 
data streamed out (potentially via a single node in the system) to the client. 
In this approach, however, all the data streams to a single node and that node 
does all the work for all the groups. This feels like either a ton of work to 
orchestrate the computation (which would start to mimic other systems, e.g. 
Spark) or a lot of work that risks being very inefficient and slow. I am also 
concerned about what this would do in the face of QueryTimeoutException - would 
we really be able to do a GROUP BY partitionKey aggregate under the QTE limit?


> Add support for Group By to Select statement
> 
>
> Key: CASSANDRA-10707
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10707
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>
> Now that Cassandra support aggregate functions, it makes sense to support 
> {{GROUP BY}} on the {{SELECT}} statements.
> It should be possible to group either at the partition level or at the 
> clustering column level.
> {code}
> SELECT partitionKey, max(value) FROM myTable GROUP BY partitionKey;
> SELECT partitionKey, clustering0, clustering1, max(value) FROM myTable GROUP 
> BY partitionKey, clustering0, clustering1; 
> {code}





[jira] [Commented] (CASSANDRA-10070) Automatic repair scheduling

2016-01-19 Thread Jonathan Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107570#comment-15107570
 ] 

Jonathan Ellis commented on CASSANDRA-10070:


Marcus, I think Russell has made some very valuable suggestions as to the kind 
of complications we need to be thinking about here.

Before jumping back to another patch, I think it would be useful to put 
together a high level design document that thinks through these questions and 
proposes approaches to deal with them.  Then we can get feedback to you faster 
than at the level of actual code.

> Automatic repair scheduling
> ---
>
> Key: CASSANDRA-10070
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10070
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Olsson
>Assignee: Marcus Olsson
>Priority: Minor
> Fix For: 3.x
>
>
> Scheduling and running repairs in a Cassandra cluster is most often a 
> required task, but this can both be hard for new users and it also requires a 
> bit of manual configuration. There are good tools out there that can be used 
> to simplify things, but wouldn't this be a good feature to have inside of 
> Cassandra? To automatically schedule and run repairs, so that when you start 
> up your cluster it basically maintains itself in terms of normal 
> anti-entropy, with the possibility for manual configuration.





[jira] [Created] (CASSANDRA-11035) Use cardinality estimation to pick better compaction candidates for STCS (SizeTieredCompactionStrategy)

2016-01-19 Thread Wei Deng (JIRA)
Wei Deng created CASSANDRA-11035:


 Summary: Use cardinality estimation to pick better compaction 
candidates for STCS (SizeTieredCompactionStrategy)
 Key: CASSANDRA-11035
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11035
 Project: Cassandra
  Issue Type: Improvement
  Components: Compaction
Reporter: Wei Deng


This was initially mentioned in this blog post 
http://www.datastax.com/dev/blog/improving-compaction-in-cassandra-with-cardinality-estimation
 but I couldn't find any existing JIRA for it. As stated by [~jbellis], 
"Potentially even more useful would be using cardinality estimation to pick 
better compaction candidates. Instead of blindly merging sstables of a similar 
size a la SizeTieredCompactionStrategy." The L0 STCS in LCS should benefit as 
well.
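As a rough illustration of the idea (not Cassandra code; the class and method names are invented for this sketch), compaction candidates could be ranked by how far the estimated cardinality of their union falls short of the sum of the per-sstable estimates: a large shortfall means heavy partition overlap, and therefore a merge that reclaims more space.

```java
import java.util.List;

// Illustrative overlap score for a candidate set of sstables.
// cardinalities: per-sstable estimated unique-partition counts;
// mergedEstimate: estimated cardinality of their union (e.g. from
// merged HyperLogLog sketches). Result is 0 for disjoint sstables
// and approaches 1 as the sstables overlap more completely.
class OverlapScore {
    static double score(List<Long> cardinalities, long mergedEstimate) {
        long sum = 0;
        for (long c : cardinalities)
            sum += c;
        return 1.0 - (double) mergedEstimate / sum;
    }
}
```

Under this scoring, two sstables that cover exactly the same partitions score 0.5 (merging halves the row count), while two disjoint sstables score 0 (merging only concatenates them).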






[jira] [Commented] (CASSANDRA-11033) Prevent logging in sandboxed state

2016-01-19 Thread DOAN DuyHai (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107501#comment-15107501
 ] 

DOAN DuyHai commented on CASSANDRA-11033:
-

Ok, good to know and to document. Thank you [~snazy] for the good catch; it's 
pretty tricky.

> Prevent logging in sandboxed state
> --
>
> Key: CASSANDRA-11033
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11033
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.0.x
>
>
> logback will re-read its configuration file regularly. So it is possible that 
> logback tries to reload the configuration while we log from a sandboxed UDF, 
> which will fail due to the restricted access privileges for UDFs. UDAs are 
> also affected as these use UDFs.
> /cc [~doanduyhai]





[jira] [Comment Edited] (CASSANDRA-10948) CQLSH error when trying to insert non-ascii statement

2016-01-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107467#comment-15107467
 ] 

Paulo Motta edited comment on CASSANDRA-10948 at 1/19/16 9:26 PM:
--

Tests look good, marking as ready to commit. Thanks!

Committer: 2.2 patch merges cleanly upwards.


was (Author: pauloricardomg):
Tests look good, marking as ready to commit. Thanks!

> CQLSH error when trying to insert non-ascii statement
> -
>
> Key: CASSANDRA-10948
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10948
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Matthieu Nantern
>Assignee: Matthieu Nantern
>Priority: Minor
>  Labels: lhf
> Attachments: patch_CASSANDRA-10948
>
>
> We recently upgraded Cassandra to v2.2.4 with CQLSH 5.0.1 and we are now 
> unable to import some CQL file (with French character like 'ê'). It was 
> working on v2.0.12.
> The issue:
> {noformat}
> Using CQL driver: <module 'cassandra' from '/OPT/cassandra/dsc-cassandra-2.2.4/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/__init__.py'>
> Using connect timeout: 5 seconds
> Traceback (most recent call last):
>   File "/OPT/cassandra/dsc-cassandra/bin/cqlsh.py", line 1110, in onecmd
> self.handle_statement(st, statementtext)
>   File "/OPT/cassandra/dsc-cassandra/bin/cqlsh.py", line 1135, in 
> handle_statement
> readline.add_history(new_hist)
> UnicodeEncodeError: 'ascii' codec can't encode character u'\xea' in position 
> 7192: ordinal not in range(128)
> {noformat}
> The issue was corrected by changing line 1135 of cqlsh.py (but I don't know 
> if it's the correct way to do it):
> readline.add_history(new_hist)  -> 
> readline.add_history(new_hist.encode('utf8'))





[jira] [Commented] (CASSANDRA-10948) CQLSH error when trying to insert non-ascii statement

2016-01-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107467#comment-15107467
 ] 

Paulo Motta commented on CASSANDRA-10948:
-

Tests look good, marking as ready to commit. Thanks!

> CQLSH error when trying to insert non-ascii statement
> -
>
> Key: CASSANDRA-10948
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10948
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Matthieu Nantern
>Assignee: Matthieu Nantern
>Priority: Minor
>  Labels: lhf
> Attachments: patch_CASSANDRA-10948
>
>
> We recently upgraded Cassandra to v2.2.4 with CQLSH 5.0.1 and we are now 
> unable to import some CQL file (with French character like 'ê'). It was 
> working on v2.0.12.
> The issue:
> {noformat}
> Using CQL driver: <module 'cassandra' from '/OPT/cassandra/dsc-cassandra-2.2.4/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/__init__.py'>
> Using connect timeout: 5 seconds
> Traceback (most recent call last):
>   File "/OPT/cassandra/dsc-cassandra/bin/cqlsh.py", line 1110, in onecmd
> self.handle_statement(st, statementtext)
>   File "/OPT/cassandra/dsc-cassandra/bin/cqlsh.py", line 1135, in 
> handle_statement
> readline.add_history(new_hist)
> UnicodeEncodeError: 'ascii' codec can't encode character u'\xea' in position 
> 7192: ordinal not in range(128)
> {noformat}
> The issue was corrected by changing line 1135 of cqlsh.py (but I don't know 
> if it's the correct way to do it):
> readline.add_history(new_hist)  -> 
> readline.add_history(new_hist.encode('utf8'))





[jira] [Commented] (CASSANDRA-11033) Prevent logging in sandboxed state

2016-01-19 Thread Robert Stupp (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107455#comment-15107455
 ] 

Robert Stupp commented on CASSANDRA-11033:
--

As a workaround it should work to set {{scan="false"}} in {{logback.xml}}: 
{{<configuration scan="false">}}

> Prevent logging in sandboxed state
> --
>
> Key: CASSANDRA-11033
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11033
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Robert Stupp
>Assignee: Robert Stupp
>Priority: Minor
> Fix For: 3.0.x
>
>
> logback will re-read its configuration file regularly. So it is possible that 
> logback tries to reload the configuration while we log from a sandboxed UDF, 
> which will fail due to the restricted access privileges for UDFs. UDAs are 
> also affected as these use UDFs.
> /cc [~doanduyhai]





[jira] [Created] (CASSANDRA-11034) consistent_reads_after_move_test is failing on trunk

2016-01-19 Thread Philip Thompson (JIRA)
Philip Thompson created CASSANDRA-11034:
---

 Summary: consistent_reads_after_move_test is failing on trunk
 Key: CASSANDRA-11034
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11034
 Project: Cassandra
  Issue Type: Bug
Reporter: Philip Thompson
 Fix For: 3.x
 Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, 
node3.log, node3_debug.log

The novnode dtest 
{{consistent_bootstrap_test.TestBootstrapConsistency.consistent_reads_after_move_test}}
 is failing on trunk. See an example failure 
[here|http://cassci.datastax.com/job/trunk_novnode_dtest/274/testReport/consistent_bootstrap_test/TestBootstrapConsistency/consistent_reads_after_move_test/].

On trunk I am getting an OOM of one of my C* nodes [node3], which is what 
causes the nodetool move to fail. Logs are attached.





[jira] [Commented] (CASSANDRA-9778) CQL support for time series aggregation

2016-01-19 Thread Brian Hess (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107407#comment-15107407
 ] 

 Brian Hess commented on CASSANDRA-9778:


[~blerer] - that really isn't an example of window functions (A/K/A window 
aggregates, window aggregate functions, etc).  That's really an example of a 
grouped aggregate with time functions (Floor, Minute, Hour, etc.). The 
cardinality of the output of that query is the number of groups, whereas with 
window functions the number of output rows equals the number of input rows.

Let me simplify your trades example to daily stock prices with a schema of 
(symbol TEXT, transDate DATE, closePrice DOUBLE).  For each stock you'd like 
the sliding 3-day average of the stock closing prices.  You would do that with 
the following SQL-99 syntax:
{code}
SELECT symbol, transDate, closePrice,
       Avg(closePrice) OVER (PARTITION BY symbol ORDER BY transDate
                             ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS threeDayAverage
FROM stocks
WHERE symbol = 'XYZ';
{code}

Here, each day will have a "window" of rows that stretches from 2 rows before 
(if they exist) to the current row, and the value is the average of the three 
closePrice values.  Thus, there is an output for every row of input.  For 
Thursday's threeDayAverage for stock XYZ we will use the closePrice from 
Tuesday, Wednesday, and Thursday.  For Friday's threeDayAverage for stock XYZ 
we will use the closePrice from Wednesday, Thursday, and Friday.  And so on.

For what it's worth, this is not hard to do if there is a partition key 
predicate.  We are simply doing a pass over the rows to return to the client 
and rolling things up.  It is possible we need to sort the data depending on 
the ORDER BY clause, but otherwise the aggregation is a simple rollup.  It 
should be noted that SQL allows for very flexible window specifications that 
can cause trouble, such as
"OVER (PARTITION BY symbol ORDER BY transDate ROWS BETWEEN CURRENT ROW AND 
UNBOUNDED FOLLOWING)"
which would go from the current row to the end of the partition.  That can be a 
tricky case.  SQL99 also supports RANGE window specifications in addition to 
ROW specifications.  That can also be tricky.

That said, window functions would be a nice addition (especially with a 
partition key predicate).

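The single-pass rollup described above, for one partition's worth of rows, can be sketched as follows (illustrative only; the class and method names are invented for the example):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a ROWS BETWEEN 2 PRECEDING AND CURRENT ROW window over one
// partition: one output value per input row, averaging up to three of
// the most recent closing prices (fewer at the start of the partition).
class SlidingAverage {
    static List<Double> threeDayAverage(List<Double> closePrices) {
        List<Double> out = new ArrayList<>();
        for (int i = 0; i < closePrices.size(); i++) {
            int start = Math.max(0, i - 2); // at most 2 preceding rows
            double sum = 0;
            for (int j = start; j <= i; j++)
                sum += closePrices.get(j);
            out.add(sum / (i - start + 1));
        }
        return out;
    }
}
```

Note that this assumes the rows arrive sorted by transDate within the partition, which is exactly why a partition key predicate makes the feature tractable: the clustering order gives the sort for free.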

> CQL support for time series aggregation
> ---
>
> Key: CASSANDRA-9778
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9778
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL
>Reporter: Jonathan Ellis
>Assignee: Benjamin Lerer
> Fix For: 3.x
>
>
> Along with MV (CASSANDRA-6477), time series aggregation or "rollups" are a 
> common design pattern in cassandra applications.  I'd like to add CQL support 
> for this along these lines:
> {code}
> CREATE MATERIALIZED VIEW stocks_by_hour AS
> SELECT exchange, day, day_time(1h) AS hour, symbol, avg(price), sum(volume)
> FROM stocks
> GROUP BY exchange, day, symbol, hour
> PRIMARY KEY  ((exchange, day), hour, symbol);
> {code}





[jira] [Updated] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2016-01-19 Thread Anubhav Kale (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anubhav Kale updated CASSANDRA-10907:
-
Attachment: 0001-Skip-Flush-option-for-Snapshot.patch

I initially went down the route of a boolean option (I didn't quite like it 
myself, but it felt less weird than inspecting array elements). I have changed 
that now.

Addressed other comments and made the tests robust.

> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
> Attachments: 0001-Skip-Flush-for-snapshots.patch, 
> 0001-Skip-Flush-option-for-Snapshot.patch, 
> 0001-Skip-Flush-option-for-Snapshot.patch, 0001-flush.patch
>
>
> For some practical scenarios, it doesn't matter if the data is flushed to 
> disk before taking a snapshot. However, it's better to save some flushing 
> time to make snapshot process quick.
> As such, it will be a good idea to provide this option to snapshot command. 
> The wiring from nodetool to MBean to VerbHandler should be easy. 
> I can provide a patch if this makes sense.





[jira] [Created] (CASSANDRA-11033) Prevent logging in sandboxed state

2016-01-19 Thread Robert Stupp (JIRA)
Robert Stupp created CASSANDRA-11033:


 Summary: Prevent logging in sandboxed state
 Key: CASSANDRA-11033
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11033
 Project: Cassandra
  Issue Type: Bug
Reporter: Robert Stupp
Assignee: Robert Stupp
Priority: Minor
 Fix For: 3.0.x


logback will re-read its configuration file regularly. So it is possible that 
logback tries to reload the configuration while we log from a sandboxed UDF, 
which will fail due to the restricted access privileges for UDFs. UDAs are also 
affected as these use UDFs.

/cc [~doanduyhai]





[jira] [Commented] (CASSANDRA-10907) Nodetool snapshot should provide an option to skip flushing

2016-01-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107351#comment-15107351
 ] 

Paulo Motta commented on CASSANDRA-10907:
-

Looking better. A few more nits:
* Rename {{skipflush}} option to {{skipFlush}} (camelCase)
* remove skipFlush from takeMultipleTableSnapshot javadoc
* add @Deprecated annotation to old methods (in addition to @deprecated javadoc)
* in javadoc {{@link #takeSnapshot..}} replace {{Map<String, String>}} with 
{{Map}} (generics are not supported in javadoc links)
* Add options to message: {{Requested creating snapshot(s) for 
\[keyspace1.standard1,keyspace1.counter1\] with snapshot name \[1453233210025\] 
and options \{skipFlush=false\}.}}
* Fix broken test 
{{org.apache.cassandra.service.StorageServiceServerTest.testTableSnapshot}}
* Improve nodetool option description from {{Skip blocking flush of the 
memtable}} to {{Do not flush memtables before snapshotting (snapshot will not 
contain unflushed data)}}

bq. I did add a Boolean to detect if KS / CF was passed to the proposed 
signature to make things easy. 

I still find it a bit redundant, since it's possible to replace the keyspaces 
boolean with {{entities\[0\].contains(".")}}, and in the future we can simplify 
the snapshot command to accept an arbitrary list of mixed keyspaces and/or 
tables, so I'd prefer not to have this boolean.

||trunk||
|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-10907]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-10907-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-10907-dtest/lastCompletedBuild/testReport/]|
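The suggested dot check can be sketched like this (hypothetical helper, not part of the attached patch):

```java
// An entity of the form "keyspace.table" contains a dot; a bare keyspace
// name does not. This mirrors the suggested entities[0].contains(".")
// test for deciding how to interpret the snapshot argument list without
// a separate boolean flag.
class SnapshotEntityUtil {
    static boolean isTableList(String[] entities) {
        return entities.length > 0 && entities[0].contains(".");
    }
}
```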


> Nodetool snapshot should provide an option to skip flushing
> ---
>
> Key: CASSANDRA-10907
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10907
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
> Environment: PROD
>Reporter: Anubhav Kale
>Priority: Minor
>  Labels: lhf
> Attachments: 0001-Skip-Flush-for-snapshots.patch, 
> 0001-Skip-Flush-option-for-Snapshot.patch, 0001-flush.patch
>
>
> For some practical scenarios, it doesn't matter if the data is flushed to 
> disk before taking a snapshot. However, it's better to save some flushing 
> time to make snapshot process quick.
> As such, it will be a good idea to provide this option to snapshot command. 
> The wiring from nodetool to MBean to VerbHandler should be easy. 
> I can provide a patch if this makes sense.





[jira] [Commented] (CASSANDRA-9752) incremental repair dtest flaps on 2.2

2016-01-19 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107227#comment-15107227
 ] 

Jim Witschey commented on CASSANDRA-9752:
-

I'm not sure how it's set in {{ccm}} if you don't set it explicitly. 
[~philipthompson]? On CassCI they're set as such:

{code}
export CCM_MAX_HEAP_SIZE=1024M
export CCM_HEAP_NEWSIZE=100M
{code}

> incremental repair dtest flaps on 2.2 
> --
>
> Key: CASSANDRA-9752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9752
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>
> {{incremental_repair_test.py:TestIncRepair.multiple_subsequent_repair_test}} 
> flaps on 2.2. It's hard to tell what failures are repair-specific, but there 
> are a few distinct failures I've seen recently:
> - [an NPE in 
> StorageService|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/143/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/]
> - [an NPE in 
> SSTableRewriter|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/135/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/].
>  I believe this is related to CASSANDRA-9730, but someone should confirm this.
> - [an on-disk data size that is too 
> large|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/133/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/]
> You can find the test itself [here on 
> GitHub|https://github.com/riptano/cassandra-dtest/blob/master/incremental_repair_test.py#L206]
>  and run it with the command
> {code}
> CASSANDRA_VERSION=git:trunk nosetests 
> incremental_repair_test.py:TestIncRepair.multiple_subsequent_repair_test
> {code}
> Assigning [~yukim], since you're the repair person, but feel free to reassign 
> to whoever's appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


svn commit: r1725593 - in /cassandra/site: publish/download/index.html src/content/download/index.html

2016-01-19 Thread jbellis
Author: jbellis
Date: Tue Jan 19 19:03:40 2016
New Revision: 1725593

URL: http://svn.apache.org/viewvc?rev=1725593&view=rev
Log:
spell out DDC correctly

Modified:
cassandra/site/publish/download/index.html
cassandra/site/src/content/download/index.html

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1725593&r1=1725592&r2=1725593&view=diff
==============================================================================
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Tue Jan 19 19:03:40 2016
@@ -132,7 +132,7 @@
   Third Party Distributions (not endorsed by Apache)
 
   
-http://www.planetcassandra.org/cassandra/";>DataStax 
Distribution of Cassandra is available in Linux rpm, deb, and tar packages, 
a Windows MSI installer, and a Mac OS X binary.
+http://www.planetcassandra.org/cassandra/";>DataStax 
Distribution of Apache Cassandra is available in Linux rpm, deb, and tar 
packages, a Windows MSI installer, and a Mac OS X binary.
   
 
   CQL

Modified: cassandra/site/src/content/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/content/download/index.html?rev=1725593&r1=1725592&r2=1725593&view=diff
==============================================================================
--- cassandra/site/src/content/download/index.html (original)
+++ cassandra/site/src/content/download/index.html Tue Jan 19 19:03:40 2016
@@ -89,7 +89,7 @@
   Third Party Distributions (not endorsed by Apache)
 
   
-http://www.planetcassandra.org/cassandra/";>DataStax 
Distribution of Cassandra is available in Linux rpm, deb, and tar packages, 
a Windows MSI installer, and a Mac OS X binary.
+http://www.planetcassandra.org/cassandra/";>DataStax 
Distribution of Apache Cassandra is available in Linux rpm, deb, and tar 
packages, a Windows MSI installer, and a Mac OS X binary.
   
 
   CQL




[jira] [Updated] (CASSANDRA-10948) CQLSH error when trying to insert non-ascii statement

2016-01-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-10948:

Assignee: Matthieu Nantern

> CQLSH error when trying to insert non-ascii statement
> -
>
> Key: CASSANDRA-10948
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10948
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Matthieu Nantern
>Assignee: Matthieu Nantern
>Priority: Minor
>  Labels: lhf
> Attachments: patch_CASSANDRA-10948
>
>
> We recently upgraded Cassandra to v2.2.4 with CQLSH 5.0.1 and we are now 
> unable to import some CQL files (with French characters like 'ê'). It was 
> working on v2.0.12.
> The issue:
> {noformat}
> Using CQL driver: <module 'cassandra' from '/OPT/cassandra/dsc-cassandra-2.2.4/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/__init__.py'>
> Using connect timeout: 5 seconds
> Traceback (most recent call last):
>   File "/OPT/cassandra/dsc-cassandra/bin/cqlsh.py", line 1110, in onecmd
> self.handle_statement(st, statementtext)
>   File "/OPT/cassandra/dsc-cassandra/bin/cqlsh.py", line 1135, in 
> handle_statement
> readline.add_history(new_hist)
> UnicodeEncodeError: 'ascii' codec can't encode character u'\xea' in position 
> 7192: ordinal not in range(128)
> {noformat}
> The issue was corrected by changing line 1135 of cqlsh.py (but I don't know 
> if it's the correct way to do it):
> readline.add_history(new_hist)  -> 
> readline.add_history(new_hist.encode('utf8'))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


svn commit: r1725592 - in /cassandra/site: publish/download/index.html src/content/download/index.html

2016-01-19 Thread jbellis
Author: jbellis
Date: Tue Jan 19 19:02:14 2016
New Revision: 1725592

URL: http://svn.apache.org/viewvc?rev=1725592&view=rev
Log:
update DataStax distribution link

Modified:
cassandra/site/publish/download/index.html
cassandra/site/src/content/download/index.html

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1725592&r1=1725591&r2=1725592&view=diff
==============================================================================
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Tue Jan 19 19:02:14 2016
@@ -132,7 +132,7 @@
   Third Party Distributions (not endorsed by Apache)
 
   
-http://www.datastax.com/products/community";>DataStax 
Community is available in Linux rpm, deb, and tar packages, a Windows MSI 
installer, and a Mac OS X binary.
+http://www.planetcassandra.org/cassandra/";>DataStax 
Distribution of Cassandra is available in Linux rpm, deb, and tar packages, 
a Windows MSI installer, and a Mac OS X binary.
   
 
   CQL

Modified: cassandra/site/src/content/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/content/download/index.html?rev=1725592&r1=1725591&r2=1725592&view=diff
==============================================================================
--- cassandra/site/src/content/download/index.html (original)
+++ cassandra/site/src/content/download/index.html Tue Jan 19 19:02:14 2016
@@ -89,7 +89,7 @@
   Third Party Distributions (not endorsed by Apache)
 
   
-http://www.datastax.com/products/community";>DataStax 
Community is available in Linux rpm, deb, and tar packages, a Windows MSI 
installer, and a Mac OS X binary.
+http://www.planetcassandra.org/cassandra/";>DataStax 
Distribution of Cassandra is available in Linux rpm, deb, and tar packages, 
a Windows MSI installer, and a Mac OS X binary.
   
 
   CQL




[jira] [Resolved] (CASSANDRA-10552) Pluggable IResources

2016-01-19 Thread Aleksey Yeschenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko resolved CASSANDRA-10552.
---
   Resolution: Not A Problem
Fix Version/s: (was: 3.x)

Resolving as {{Not A Problem}} for now, as I've been told it's not necessary 
anymore.

> Pluggable IResources
> 
>
> Key: CASSANDRA-10552
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10552
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Anthony Cozzie
>Assignee: Anthony Cozzie
> Attachments: cassandra-3.0.0-10552.txt
>
>
> It is impossible to add new IResources because of the static method 
> Resources.fromName(), which creates IResources from the text values in the 
> authentication tables.  This patch replaces the static list of checks with a 
> hash table that can be extended.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10948) CQLSH error when trying to insert non-ascii statement

2016-01-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107195#comment-15107195
 ] 

Paulo Motta commented on CASSANDRA-10948:
-

+1, just minor change to use {{self.encoding}} instead of hard-coding 
{{utf-8}}, to avoid problems when using other encodings.
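The reviewed change can be sketched in isolation as follows. The {{Shell}} class and its fields here are hypothetical stand-ins, not the actual cqlsh code; only the {{encode()}} call mirrors the patch. In Python 2, {{readline.add_history}} expects a byte string, so the unicode statement must be encoded first, and the point of the review comment is to use the shell's configured encoding rather than a hard-coded {{'utf8'}}.

```python
# -*- coding: utf-8 -*-
# Illustrative stand-in for the patched cqlsh behavior. `Shell` and its
# `history` list are hypothetical; only the encode() call mirrors the patch.

class Shell(object):
    def __init__(self, encoding='utf-8'):
        self.encoding = encoding   # cqlsh sets this from the --encoding option
        self.history = []          # stands in for the readline history

    def add_history(self, new_hist):
        # was: readline.add_history(new_hist.encode('utf8'))
        # now: encode with the configured encoding instead of hard-coding it
        self.history.append(new_hist.encode(self.encoding))

shell = Shell()
shell.add_history(u"INSERT INTO t (v) VALUES ('r\u00eave');")
```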

Will mark as ready to commit once tests look good:
||2.2||3.0||3.3||trunk||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-10948]|[branch|https://github.com/apache/cassandra/compare/cassandra-3.0...pauloricardomg:3.0-10948]|[branch|https://github.com/apache/cassandra/compare/cassandra-3.3...pauloricardomg:3.3-10948]|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-10948]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-10948-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-10948-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.3-10948-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-10948-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-10948-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-10948-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.3-10948-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-10948-dtest/lastCompletedBuild/testReport/]|

Thanks!


> CQLSH error when trying to insert non-ascii statement
> -
>
> Key: CASSANDRA-10948
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10948
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Matthieu Nantern
>Priority: Minor
>  Labels: lhf
> Attachments: patch_CASSANDRA-10948
>
>
> We recently upgraded Cassandra to v2.2.4 with CQLSH 5.0.1 and we are now 
> unable to import some CQL files (with French characters like 'ê'). It was 
> working on v2.0.12.
> The issue:
> {noformat}
> Using CQL driver: <module 'cassandra' from '/OPT/cassandra/dsc-cassandra-2.2.4/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/__init__.py'>
> Using connect timeout: 5 seconds
> Traceback (most recent call last):
>   File "/OPT/cassandra/dsc-cassandra/bin/cqlsh.py", line 1110, in onecmd
> self.handle_statement(st, statementtext)
>   File "/OPT/cassandra/dsc-cassandra/bin/cqlsh.py", line 1135, in 
> handle_statement
> readline.add_history(new_hist)
> UnicodeEncodeError: 'ascii' codec can't encode character u'\xea' in position 
> 7192: ordinal not in range(128)
> {noformat}
> The issue was corrected by changing line 1135 of cqlsh.py (but I don't know 
> if it's the correct way to do it):
> readline.add_history(new_hist)  -> 
> readline.add_history(new_hist.encode('utf8'))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9752) incremental repair dtest flaps on 2.2

2016-01-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107189#comment-15107189
 ] 

Marcus Eriksson commented on CASSANDRA-9752:


it OOMs on my machine, how do we set heap size in ccm/dtest? Just based on 
machine RAM?

> incremental repair dtest flaps on 2.2 
> --
>
> Key: CASSANDRA-9752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9752
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>
> {{incremental_repair_test.py:TestIncRepair.multiple_subsequent_repair_test}} 
> flaps on 2.2. It's hard to tell what failures are repair-specific, but there 
> are a few distinct failures I've seen recently:
> - [an NPE in 
> StorageService|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/143/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/]
> - [an NPE in 
> SSTableRewriter|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/135/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/].
>  I believe this is related to CASSANDRA-9730, but someone should confirm this.
> - [an on-disk data size that is too 
> large|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/133/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/]
> You can find the test itself [here on 
> GitHub|https://github.com/riptano/cassandra-dtest/blob/master/incremental_repair_test.py#L206]
>  and run it with the command
> {code}
> CASSANDRA_VERSION=git:trunk nosetests 
> incremental_repair_test.py:TestIncRepair.multiple_subsequent_repair_test
> {code}
> Assigning [~yukim], since you're the repair person, but feel free to reassign 
> to whoever's appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9752) incremental repair dtest flaps on 2.2

2016-01-19 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107170#comment-15107170
 ] 

Jim Witschey commented on CASSANDRA-9752:
-

[~krummas] didn't occur to me to ask earlier -- what environment does it OOM in?

> incremental repair dtest flaps on 2.2 
> --
>
> Key: CASSANDRA-9752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9752
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>
> {{incremental_repair_test.py:TestIncRepair.multiple_subsequent_repair_test}} 
> flaps on 2.2. It's hard to tell what failures are repair-specific, but there 
> are a few distinct failures I've seen recently:
> - [an NPE in 
> StorageService|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/143/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/]
> - [an NPE in 
> SSTableRewriter|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/135/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/].
>  I believe this is related to CASSANDRA-9730, but someone should confirm this.
> - [an on-disk data size that is too 
> large|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/133/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/]
> You can find the test itself [here on 
> GitHub|https://github.com/riptano/cassandra-dtest/blob/master/incremental_repair_test.py#L206]
>  and run it with the command
> {code}
> CASSANDRA_VERSION=git:trunk nosetests 
> incremental_repair_test.py:TestIncRepair.multiple_subsequent_repair_test
> {code}
> Assigning [~yukim], since you're the repair person, but feel free to reassign 
> to whoever's appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CASSANDRA-11032) Full trace returned on ReadFailure

2016-01-19 Thread Chris Splinter (JIRA)
Chris Splinter created CASSANDRA-11032:
--

 Summary: Full trace returned on ReadFailure
 Key: CASSANDRA-11032
 URL: https://issues.apache.org/jira/browse/CASSANDRA-11032
 Project: Cassandra
  Issue Type: Bug
  Components: Tools
Reporter: Chris Splinter
Priority: Minor


I noticed that the full traceback is returned on a read failure where I 
expected this to be a one line exception with the ReadFailure message. It is 
minor, but would it be better to only return the ReadFailure details?

{code}
cqlsh> SELECT * FROM test_encryption_ks.test_bad_table;
Traceback (most recent call last):
  File "/usr/local/lib/dse/bin/../resources/cassandra/bin/cqlsh.py", line 1276, 
in perform_simple_statement
result = future.result()
  File 
"/usr/local/lib/dse/resources/cassandra/bin/../lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
 line 3122, in result
raise self._final_exception
ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation 
failed - received 0 responses and 1 failures" info={'failures': 1, 
'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-11030) utf-8 characters incorrectly displayed/inserted on cqlsh on Windows

2016-01-19 Thread Paulo Motta (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paulo Motta updated CASSANDRA-11030:

Summary: utf-8 characters incorrectly displayed/inserted on cqlsh on 
Windows  (was: non-ascii characters incorrectly displayed/inserted on cqlsh on 
Windows)

> utf-8 characters incorrectly displayed/inserted on cqlsh on Windows
> ---
>
> Key: CASSANDRA-11030
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11030
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Paulo Motta
>Assignee: Paulo Motta
>Priority: Minor
>  Labels: cqlsh, windows
>
> {noformat}
> C:\Users\Paulo\Repositories\cassandra [2.2-10948 +6 ~1 -0 !]> .\bin\cqlsh.bat 
> --encoding utf-8
> Connected to test at 127.0.0.1:9042.
> [cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4]
> Use HELP for help.
> cqlsh> INSERT INTO bla.test (bla ) VALUES  ('não') ;
> cqlsh> select * from bla.test;
>  bla
> -
>  n?o
> (1 rows)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11030) non-ascii characters incorrectly displayed/inserted on cqlsh on Windows

2016-01-19 Thread Paulo Motta (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15107129#comment-15107129
 ] 

Paulo Motta commented on CASSANDRA-11030:
-

There are two issues at play here. The first is that the default Windows 
terminal encoding is not {{utf-8}}, so in order to display/input {{utf-8}} 
characters you must set the terminal encoding (code page in Windows 
nomenclature) to {{cp65001}}, by issuing the command {{chcp 65001}} before 
starting cqlsh. The second issue is that there is no codec for {{cp65001}} in 
python < 3.3 (this was fixed in issue 
[13216|https://bugs.python.org/issue13216] in Python 
[3.3+|https://docs.python.org/dev/whatsnew/3.3.html#codecs]). A known 
workaround is to register a copy of the {{utf-8}} codec to encode/decode 
{{cp65001}}.

So, if the platform is native Windows (the issue doesn't happen on cygwin), and 
the encoding is set to {{utf-8}} but the terminal encoding is not {{cp65001}}, 
a warning is printed telling the user to change their codepage to {{cp65001}} 
to support {{utf-8}} encoding. Furthermore, if {{cp65001}} is the default 
encoding and the Python version is less than 3.3, the {{utf-8}} codec is 
registered as {{cp65001}}.
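The known workaround mentioned above can be sketched as a codec search function. This is the general recipe for registering a {{utf-8}} alias under the {{cp65001}} name on Python < 3.3, not necessarily the exact code in the patch:

```python
import codecs

def cp65001_search(name):
    # Python < 3.3 ships no 'cp65001' codec; answer lookups for that name
    # with the utf-8 codec, which is what Windows code page 65001 is.
    if name.lower() == 'cp65001':
        return codecs.lookup('utf-8')
    return None

codecs.register(cp65001_search)

# After registration, cp65001 round-trips like utf-8:
data = u'n\xe3o'.encode('cp65001')
```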

||2.2||3.0||3.3||trunk||
|[branch|https://github.com/apache/cassandra/compare/cassandra-2.2...pauloricardomg:2.2-11030]|[branch|https://github.com/apache/cassandra/compare/cassandra-3.0...pauloricardomg:3.0-11030]|[branch|https://github.com/apache/cassandra/compare/cassandra-3.3...pauloricardomg:3.3-11030]|[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-11030]|
|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-11030-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-11030-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.3-11030-testall/lastCompletedBuild/testReport/]|[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-11030-testall/lastCompletedBuild/testReport/]|
|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-2.2-11030-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.0-11030-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-3.3-11030-dtest/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-11030-dtest/lastCompletedBuild/testReport/]|

Below is a sample execution with different encoding variations (default vs 
utf-8/cp65001):

{noformat}
C:\Users\Paulo\Repositories\cassandra [cassandra-2.2 +8 ~1 -0 !]> bin\cqlsh.bat
Connected to test at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
cqlsh> select * from bla.test;

 bla
--
 joπo ßlcides
  bla
 nπoτ

(3 rows)
cqlsh> select * from bla.test where bla = 'nãoç';

 bla
-

(0 rows)
cqlsh> exit;
C:\Users\Paulo\Repositories\cassandra [cassandra-2.2 +8 ~1 -0 !]> bin\cqlsh.bat 
--encoding utf-8

WARNING: console codepage must be set to cp65001 to support utf-8 encoding on 
Windows platforms.
If you experience encoding problems, change your console codepage with 'chcp 
65001' before starting cqlsh.

Connected to test at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
cqlsh> select * from bla.test;

 bla
--
 joão álcides
  bla
 nãoç

(3 rows)
cqlsh> select * from bla.test where bla = 'nãoç';
Traceback (most recent call last):
  File "C:\Users\Paulo\Repositories\cassandra\bin\\cqlsh.py", line 1044, in 
get_input_line
self.lastcmd = raw_input(prompt).decode(self.encoding)
  File "C:\tools\python2\lib\encodings\utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0x87 in position 39: invalid 
start byte

WARNING: console codepage must be set to cp65001 to support utf-8 encoding on 
Windows platforms.
If you experience encoding problems, change your console codepage with 'chcp 
65001' before starting cqlsh.

cqlsh> exit;
C:\Users\Paulo\Repositories\cassandra [cassandra-2.2 +8 ~1 -0 !]> chcp 65001
Active code page: 65001
C:\Users\Paulo\Repositories\cassandra [cassandra-2.2 +8 ~1 -0 !]> bin\cqlsh.bat 
--encoding utf-8
Connected to test at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 2.2.4-SNAPSHOT | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
cqlsh> select * from bla.test;

 bla
--
 joão álcides
  bla
 nãoç

(3 rows)
cqlsh> select * from bla.test where bla = 'nãoç';

 bla
--
 nãoç

(1 rows)
cqlsh> insert into 

[jira] [Resolved] (CASSANDRA-11025) Too many compactions on certain node when too many empty tables are created

2016-01-19 Thread Carl Yeksigian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Yeksigian resolved CASSANDRA-11025.

   Resolution: Duplicate
Fix Version/s: 2.0.17

This appears to be a dupe of CASSANDRA-9662, which was fixed in 2.0.17.

Note that 2.0 is EOL, so no new patches or releases will be made. If you try 
with 2.1 and find the same issue, please reopen.

> Too many compactions on certain node when too many empty tables are created
> ---
>
> Key: CASSANDRA-11025
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11025
> Project: Cassandra
>  Issue Type: Bug
>  Components: Compaction
> Environment: 4 nodes with 24 core cpu and 32G ram, CentOS 6.5. Each 
> node is configured with 8G heap size.
>Reporter: Shuo Chen
> Fix For: 2.0.17
>
>
> I have configured a 4-node Cassandra cluster, version 2.0.16. Each node has 
> about 10G of load. One of the nodes accumulates too many pending compactions 
> shortly after it restarts, followed by too many full GCs.
> Here is part of gc histogram:
>  num #instances #bytes  class name
> --
>1:  67758530 2168272960  java.util.concurrent.FutureTask
>2:  67759745 1626233880  
> java.util.concurrent.Executors$RunnableAdapter
>3:  67758576 1626205824  
> java.util.concurrent.LinkedBlockingQueue$Node
>4:  67758529 1626204696  
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask
>5: 16935   72995576  [B
>6:240534   11545632  java.nio.HeapByteBuffer
>7: 374165969800  [C
>8: 414475624856  
>9: 414475315504  
>   10:1048505032800  
> edu.stanford.ppl.concurrent.SnapTreeMap$Node
>   11:  41104564144  
>   12:1047813352992  org.apache.cassandra.db.Column
>   13:  41102824016  
> Here is the nodetool stats:
> [cassandra@whaty181 apache-cassandra-2.0.16]$ bin/nodetool compactionstats
> pending tasks: 64642341
> Active compaction remaining time :n/a
> However, system.log does not contain much compaction logging. I used 
> inotify to monitor data directory events; there are few events while 
> pending tasks accumulate.
> I used jmap to dump the heap and analyzed the 
> java.util.concurrent.FutureTask instances. They contain many 
> CompactionExecutor tasks. I checked the column families those tasks target; 
> most of them have never had any data inserted since creation.
> I have created 6 keyspaces, 5 of which never had any data inserted. Of these 
> 5 keyspaces, 2 contain 42 cfs each and the other 3 contain 6 cfs in total, 
> so there are about 90 empty cfs.
> All of these cfs are super column families created using cassandra-cli.
> After I dropped these 5 keyspaces and restarted this node, its compaction 
> status returned to normal. So I suspect there is some bug concerning 
> compaction on empty column families.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10563) Integrate new upgrade test into dtest upgrade suite

2016-01-19 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106955#comment-15106955
 ] 

Jim Witschey commented on CASSANDRA-10563:
--

[~slebresne] Would it be possible for you to rebase your new commits onto my 
changes and push them to the {{riptano/pcmanus_review-8099_upgrade_tests}} 
branch? I pushed it as a {{riptano}} branch so we could work together on it.

And, sorry I only mentioned this on GitHub last week and not on Jira, but could 
you also document the new tests with docstrings and {{@jira_ticket}} 
annotations where appropriate?

> Integrate new upgrade test into dtest upgrade suite
> ---
>
> Key: CASSANDRA-10563
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10563
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Testing
>Reporter: Jim Witschey
>Assignee: Jim Witschey
>Priority: Critical
> Fix For: 3.0.x
>
>
> This is a follow-up ticket for CASSANDRA-10360, specifically [~slebresne]'s 
> comment here:
> https://issues.apache.org/jira/browse/CASSANDRA-10360?focusedCommentId=14966539&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14966539
> These tests should be incorporated into the [{{upgrade_tests}} in 
> dtest|https://github.com/riptano/cassandra-dtest/tree/master/upgrade_tests]. 
> I'll take this on; [~nutbunnies] is also a good person for it, but I'll 
> likely get to it first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9752) incremental repair dtest flaps on 2.2

2016-01-19 Thread Philip Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106942#comment-15106942
 ] 

Philip Thompson commented on CASSANDRA-9752:


I would be comfortable, but only if we're sure that the OOM is acceptable. Do 
we have an accurate accounting of how much RAM these need with vnodes, and are 
we okay with that amount?

> incremental repair dtest flaps on 2.2 
> --
>
> Key: CASSANDRA-9752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9752
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>
> {{incremental_repair_test.py:TestIncRepair.multiple_subsequent_repair_test}} 
> flaps on 2.2. It's hard to tell what failures are repair-specific, but there 
> are a few distinct failures I've seen recently:
> - [an NPE in 
> StorageService|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/143/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/]
> - [an NPE in 
> SSTableRewriter|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/135/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/].
>  I believe this is related to CASSANDRA-9730, but someone should confirm this.
> - [an on-disk data size that is too 
> large|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/133/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/]
> You can find the test itself [here on 
> GitHub|https://github.com/riptano/cassandra-dtest/blob/master/incremental_repair_test.py#L206]
>  and run it with the command
> {code}
> CASSANDRA_VERSION=git:trunk nosetests 
> incremental_repair_test.py:TestIncRepair.multiple_subsequent_repair_test
> {code}
> Assigning [~yukim], since you're the repair person, but feel free to reassign 
> to whoever's appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9752) incremental repair dtest flaps on 2.2

2016-01-19 Thread Jim Witschey (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106918#comment-15106918
 ] 

Jim Witschey commented on CASSANDRA-9752:
-

I'm guessing that we don't want to give the nodes more memory for a single 
test; [~mshuler] can you confirm?

[~philipthompson] are you comfortable with running this with {{no_vnodes}} 
only? I'm a little nervous about it since the two test modes tend to expose 
errors in unexpected ways.

> incremental repair dtest flaps on 2.2 
> --
>
> Key: CASSANDRA-9752
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9752
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Jim Witschey
>
> {{incremental_repair_test.py:TestIncRepair.multiple_subsequent_repair_test}} 
> flaps on 2.2. It's hard to tell what failures are repair-specific, but there 
> are a few distinct failures I've seen recently:
> - [an NPE in 
> StorageService|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/143/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/]
> - [an NPE in 
> SSTableRewriter|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/135/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/].
>  I believe this is related to CASSANDRA-9730, but someone should confirm this.
> - [an on-disk data size that is too 
> large|http://cassci.datastax.com/view/cassandra-2.2/job/cassandra-2.2_dtest/133/testReport/junit/incremental_repair_test/TestIncRepair/multiple_subsequent_repair_test/]
> You can find the test itself [here on 
> GitHub|https://github.com/riptano/cassandra-dtest/blob/master/incremental_repair_test.py#L206]
>  and run it with the command
> {code}
> CASSANDRA_VERSION=git:trunk nosetests 
> incremental_repair_test.py:TestIncRepair.multiple_subsequent_repair_test
> {code}
> Assigning [~yukim], since you're the repair person, but feel free to reassign 
> to whoever's appropriate.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-7409) Allow multiple overlapping sstables in L1

2016-01-19 Thread Carl Yeksigian (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106866#comment-15106866
 ] 

Carl Yeksigian commented on CASSANDRA-7409:
---

I've been working on this on and off; here is the latest 
[branch|https://github.com/carlyeks/cassandra/commits/ticket/7409]. I think 
some of the recent changes will help with the same issues here: with 
CASSANDRA-6696 we can have more simultaneous compactions (the limit is the 
number of disks), and CASSANDRA-10540 will improve that further (the limit 
becomes the number of ranges).

I still think this has merit, but in order to instrument this, I've focused on 
adding the additional logging support in CASSANDRA-10805, which has been useful 
in figuring out what exactly is going on with these compactions. I still 
haven't been able to find the cause of the poor performance in the L0 selection 
when MOLO = 0.

> Allow multiple overlapping sstables in L1
> -
>
> Key: CASSANDRA-7409
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7409
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Compaction
>Reporter: Carl Yeksigian
>Assignee: Carl Yeksigian
>  Labels: compaction
> Fix For: 3.x
>
>
> Currently, when a normal L0 compaction takes place (not STCS), we take up to 
> MAX_COMPACTING_L0 L0 sstables and all of the overlapping L1 sstables and 
> compact them together. If we didn't have to deal with the overlapping L1 
> tables, we could compact a higher number of L0 sstables together into a set 
> of non-overlapping L1 sstables.
> This could be done by delaying the invariant that L1 has no overlapping 
> sstables. Going from L1 to L2, we would be compacting fewer sstables together 
> which overlap.
> When reading, we will not have the same one sstable per level (except L0) 
> guarantee, but this can be bounded (once we have too many sets of sstables, 
> either compact them back into the same level, or compact them up to the next 
> level).
> This could be generalized to allow any level to be the maximum for this 
> overlapping strategy.
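To make the L0 -> L1 mechanics concrete, here is a small Python model of the overlap check the description relies on (a sketch of the general idea only, not Cassandra's actual Java implementation; sstables are modeled as inclusive {{(first_token, last_token)}} pairs):

```python
def overlapping_l1(l0_sstables, l1_sstables):
    """Return the L1 sstables whose token span intersects the span
    covered by the chosen L0 sstables.

    Each sstable is modeled as an inclusive (first_token, last_token)
    pair; real LCS compares sstable first/last tokens similarly.
    """
    lo = min(first for first, _ in l0_sstables)
    hi = max(last for _, last in l0_sstables)
    return [s for s in l1_sstables if s[0] <= hi and s[1] >= lo]

# The more L0 sstables we pull in, the wider the combined span and the
# more L1 sstables get dragged into the compaction:
l0 = [(10, 20), (40, 50)]
l1 = [(0, 5), (15, 45), (60, 70)]
print(overlapping_l1(l0, l1))  # -> [(15, 45)]
```

Allowing overlap in L1, as the ticket proposes, means a large L0 batch no longer has to pay for every intersecting L1 sstable up front.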





[jira] [Commented] (CASSANDRA-10991) Cleanup OpsCenter keyspace fails - node thinks that didn't joined the ring yet

2016-01-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106873#comment-15106873
 ] 

Marcus Eriksson commented on CASSANDRA-10991:
-

[~mlowicki] ok, then the patch fixes your issue - but it is not really a 
problem at the moment either, just a somewhat ugly error message.

> Cleanup OpsCenter keyspace fails - node thinks that didn't joined the ring yet
> --
>
> Key: CASSANDRA-10991
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10991
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.12, Debian Wheezy
>Reporter: mlowicki
>Assignee: Marcus Eriksson
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> I've a C* cluster spread across 3 DCs. Running {{cleanup}} on all nodes in one 
> DC always fails:
> {code}
> root@db1:~# nt cleanup system
> root@db1:~# nt cleanup sync
> root@db1:~# nt cleanup OpsCenter
> Aborted cleaning up atleast one column family in keyspace OpsCenter, check 
> server logs for more information.
> error: nodetool failed, check server logs
> -- StackTrace --
> java.lang.RuntimeException: nodetool failed, check server logs
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:292)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:204)
> root@db1:~# 
> {code}
> Checked two other DCs and running cleanup there works fine (it didn't fail 
> immediately).
> Output from {{nodetool status}} from one node in problematic DC:
> {code}
> root@db1:~# nt status
> Datacenter: Amsterdam
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  10.210.3.162   518.54 GB  256 ?   
> 50e606f5-e893-4a3b-86d3-1e5986dceea9  RAC1
> UN  10.210.3.230   532.63 GB  256 ?   
> 7b8fc988-8a6a-4d94-ae84-ab9da9ab01e8  RAC1
> UN  10.210.3.161   538.82 GB  256 ?   
> d44b0f6d-7933-4a7c-ba7b-f8648e038f85  RAC1
> UN  10.210.3.160   497.6 GB   256 ?   
> e7332179-a47e-471d-bcd4-08c638ab9ea4  RAC1
> UN  10.210.3.224   334.25 GB  256 ?   
> 92b0bd8c-0a5a-446a-83ea-2feea4988fe3  RAC1
> UN  10.210.3.118   518.34 GB  256 ?   
> ebddeaf3-1433-4372-a4ca-9c7ba3d4a26b  RAC1
> UN  10.210.3.221   516.57 GB  256 ?   
> 44d67a49-5310-4ab5-b448-a44be350abf5  RAC1
> UN  10.210.3.117   493.83 GB  256 ?   
> aae92956-82d6-421e-8f3f-22393ac7e5f7  RAC1
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  10.210.59.124  392.83 GB  320 ?   
> f770a8cc-b7bf-44ac-8cc0-214d9228dfcd  RAC1
> UN  10.210.59.151  411.9 GB   320 ?   
> 3cc87422-0e43-4cd1-91bf-484f121be072  RAC1
> UN  10.210.58.132  309.8 GB   256 ?   
> 84d94d13-28d3-4b49-a3d9-557ab47e79b9  RAC1
> UN  10.210.58.133  281.82 GB  256 ?   
> 02bd2d02-41c5-4193-81b0-dee434adb0da  RAC1
> UN  10.210.59.86   285.84 GB  256 ?   
> bc6422ea-22e9-431a-ac16-c4c040f0c4e5  RAC1
> UN  10.210.59.84   331.06 GB  256 ?   
> a798e6b0-3a84-4ec2-82bb-8474086cb315  RAC1
> UN  10.210.59.85   366.26 GB  256 ?   
> 52699077-56cf-4c1e-b308-bf79a1644b7e  RAC1
> Datacenter: Ashburn
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  10.195.15.176  534.51 GB  256 ?   
> c6ac22df-c43a-4b25-b3b5-5e12ce9c69da  RAC1
> UN  10.195.15.177  313.73 GB  256 ?   
> eafa2a72-84a2-4cdc-a634-3c660acc6af8  RAC1
> UN  10.195.15.163  470.92 GB  256 ?   
> bcd2a534-94c4-4406-8d16-c1fc26b41844  RAC1
> UN  10.195.15.162  539.82 GB  256 ?   
> bb649cef-21de-4077-a35f-994319011a06  RAC1
> UN  10.195.15.182  499.64 GB  256 ?   
> 6ce2d14d-9fb8-4494-8e97-3add05bd35de  RAC1
> UN  10.195.15.167  508.48 GB  256 ?   
> 6f359675-852a-4842-9ff2-bdc69e6b04a2  RAC1
> UN  10.195.15.166  490.28 GB  256 ?   
> 1ec5d0c5-e8bd-4973-96d9-523de91d08c5  RAC1
> UN  10.195.15.183  447.78 GB  256 ?   
> 824165b0-1f1b-40e8-9695-e2f596cb8611  RAC1
> Note: Non-system keyspaces don't have the same replication settings, 
> effective ownership information is meaningless
> {code}
> Logs from one of the nodes where {{cleanup}} fails:
> {code}
> INFO  [RMI TCP Connection(158004)-10.210.59.86] 2016-01-09 15:58:33,942 
> CompactionManager.java:388 - Cleanup cannot run before a node has joined the 
> ring
> INFO  [RMI TCP Connection(158004)-10.210.59.86] 2016-01-09 15:58:33,970 
> CompactionManager.java:388 - Cleanup cannot run before a node has jo

[jira] [Comment Edited] (CASSANDRA-11024) Unexpected exception during request; java.lang.StackOverflowError: null

2016-01-19 Thread Kai Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106870#comment-15106870
 ] 

Kai Wang edited comment on CASSANDRA-11024 at 1/19/16 3:28 PM:
---

Yeah, I double checked. This is what the log looks like.

{noformat}
ERROR [SharedPool-Worker-2] 2016-01-15 20:49:07,999 Message.java:611 - 
Unexpected exception during request; channel = [id: 0x727ba949, 
/192.168.0.3:50333 => /192.168.0.12:9042]
java.lang.StackOverflowError: null
at 
com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.AbstractIndexedListIterator.(AbstractIndexedListIterator.java:69)
 ~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$11.(Iterators.java:1048) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
 ~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
~[guava-16.0.jar:na]
...
... (repeat hasNext line for ~1000 times)
...
at 
com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
WARN  [SharedPool-Worker-3] 2016-01-15 20:49:08,001 SliceQueryFilter.java:307 - 
Read 55 live and 68967 tombstone cells in ks.cf for key: pk. (see 
tombstone_warn_threshold). 197 columns were requested, slices=[-]
...
{noformat}


was (Author: depend):
Yes, the log is like this:

{noformat}
ERROR [SharedPool-Worker-2] 2016-01-15 20:49:07,999 Message.java:611 - 
Unexpected exception during request; channel = [id: 0x727ba949, 
/192.168.0.3:50333 => /192.168.0.12:9042]
java.lang.StackOverflowError: null
at 
com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.AbstractIndexedListIterator.(AbstractIndexedListIterator.java:69)
 ~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$11.(Iterators.java:1048) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
 ~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
~[guava-16.0.jar:na]
...
... (repeat hasNext line for ~1000 times)
...
at 
com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
WARN  [SharedPool-Worker-3] 2016-01-15 20:49:08,001 SliceQueryFilter.java:307 - 
Read 55 live and 68967 tombstone cells in ks.cf for key: pk. (see 
tombstone_warn_threshold). 197 columns were requested, slices=[-]
...
{noformat}

> Unexpected exception during request; java.lang.StackOverflowError: null
> ---
>
> Key: CASSANDRA-

[jira] [Commented] (CASSANDRA-10991) Cleanup OpsCenter keyspace fails - node thinks that didn't joined the ring yet

2016-01-19 Thread mlowicki (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106872#comment-15106872
 ] 

mlowicki commented on CASSANDRA-10991:
--

{code}
cqlsh> desc keyspace "OpsCenter";

CREATE KEYSPACE "OpsCenter" WITH replication = {'class': 
'NetworkTopologyStrategy', 'Amsterdam': '1', 'Ashburn': '1'}  AND 
durable_writes = true;

CREATE TABLE "OpsCenter".events_timeline (
key text,
column1 bigint,
value blob,
PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (column1 ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = '{"info": "OpsCenter management data.", "version": [5, 2, 1]}'
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.25
AND speculative_retry = 'NONE';

CREATE TABLE "OpsCenter".settings (
key blob,
column1 blob,
value blob,
PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE
AND CLUSTERING ORDER BY (column1 ASC)
AND bloom_filter_fp_chance = 0.01
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = '{"info": "OpsCenter management data.", "version": [5, 2, 1]}'
AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy'}
AND compression = {'sstable_compression': 
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.0
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 1.0
AND speculative_retry = 'NONE';

...
{code}

Ah, I see that "Analytics" is missing from {{replication}}.
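If the intent is for the Analytics DC to hold OpsCenter replicas (an assumption on my part; the thread does not say so), the keyspace would be adjusted along these lines, followed by a repair to stream the data:

```cql
ALTER KEYSPACE "OpsCenter" WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'Amsterdam': '1', 'Ashburn': '1', 'Analytics': '1'
};
```

Otherwise the Analytics nodes own no ranges for the keyspace, which appears to be why cleanup there trips the confusing "Cleanup cannot run before a node has joined the ring" message.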

> Cleanup OpsCenter keyspace fails - node thinks that didn't joined the ring yet
> --
>
> Key: CASSANDRA-10991
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10991
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.12, Debian Wheezy
>Reporter: mlowicki
>Assignee: Marcus Eriksson
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> I've a C* cluster spread across 3 DCs. Running {{cleanup}} on all nodes in one 
> DC always fails:
> {code}
> root@db1:~# nt cleanup system
> root@db1:~# nt cleanup sync
> root@db1:~# nt cleanup OpsCenter
> Aborted cleaning up atleast one column family in keyspace OpsCenter, check 
> server logs for more information.
> error: nodetool failed, check server logs
> -- StackTrace --
> java.lang.RuntimeException: nodetool failed, check server logs
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:292)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:204)
> root@db1:~# 
> {code}
> Checked two other DCs and running cleanup there works fine (it didn't fail 
> immediately).
> Output from {{nodetool status}} from one node in problematic DC:
> {code}
> root@db1:~# nt status
> Datacenter: Amsterdam
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  10.210.3.162   518.54 GB  256 ?   
> 50e606f5-e893-4a3b-86d3-1e5986dceea9  RAC1
> UN  10.210.3.230   532.63 GB  256 ?   
> 7b8fc988-8a6a-4d94-ae84-ab9da9ab01e8  RAC1
> UN  10.210.3.161   538.82 GB  256 ?   
> d44b0f6d-7933-4a7c-ba7b-f8648e038f85  RAC1
> UN  10.210.3.160   497.6 GB   256 ?   
> e7332179-a47e-471d-bcd4-08c638ab9ea4  RAC1
> UN  10.210.3.224   334.25 GB  256 ?   
> 92b0bd8c-0a5a-446a-83ea-2feea4988fe3  RAC1
> UN  10.210.3.118   518.34 GB  256 ?   
> ebddeaf3-1433-4372-a4ca-9c7ba3d4a26b  RAC1
> UN  10.210.3.221   516.57 GB  256 ?   
> 44d67a49-5310-4ab5-b448-a44be350abf5  RAC1
> UN  10.210.3.117   493.83 GB  256 ?   
> aae92956-82d6-421e-8f3f-22393ac7e5f7  RAC1
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  10.210.59.124  392.83 GB  320 ?   
> f770a8cc-b7bf-44ac-8cc0-214d9228dfcd  RAC1
> UN  10.210.59.151  411.9 GB   320 ?   
> 3cc87422-0e43-4cd1-91bf-484f121be072  RAC1
> UN  10.210.58.132  309.8 GB   256 ?   
> 84d94d13-28d3-4b49-a3d9-557ab47e79b9  RAC1
> UN  10.21

[jira] [Commented] (CASSANDRA-11004) LWT results '[applied]' column name collision

2016-01-19 Thread Adam Holmberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106871#comment-15106871
 ] 

Adam Holmberg commented on CASSANDRA-11004:
---

I see your point. Nobody ever said the names would be unique. It's a deficiency 
in the Python driver that will not be addressed in the current row factories 
that return {{dict}} or {{namedtuple}} for rows. cqlsh can be updated to use a 
different row factory to keep the names from colliding.
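A sketch of what such a row factory could look like (this assumes the cassandra-driver convention that a row factory receives {{(colnames, rows)}}; the deduplication scheme below is illustrative, not something the driver ships):

```python
def dedup_names(colnames):
    """Make duplicate column names unique by appending a counter, so the
    LWT '[applied]' column cannot shadow a user column of the same name."""
    seen = {}
    out = []
    for name in colnames:
        n = seen.get(name, 0)
        seen[name] = n + 1
        out.append(name if n == 0 else "%s_%d" % (name, n))
    return out

def collision_safe_row_factory(colnames, rows):
    """Row factory returning dicts keyed by deduplicated column names."""
    names = dedup_names(colnames)
    return [dict(zip(names, row)) for row in rows]

# Two '[applied]' columns in the result set: one server flag, one user column.
print(collision_safe_row_factory(["[applied]", "[applied]", "k"],
                                 [(True, 3, 2)]))
# -> [{'[applied]': True, '[applied]_1': 3, 'k': 2}]
```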

> LWT results '[applied]' column name collision
> -
>
> Key: CASSANDRA-11004
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11004
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Adam Holmberg
>Priority: Minor
> Fix For: 4.x
>
>
> LWT requests return a not-well-documented single row result with a boolean 
> {{\[applied]}} column and optional column states.
> If the table happens to have a column named {{\[applied]}}, this causes a 
> name collision. There is no error, but the {{\[applied]}} flag is not 
> available.
> {code}
> cassandra@cqlsh:test> CREATE TABLE test (k int PRIMARY KEY , "[applied]" int);
> cassandra@cqlsh:test> INSERT INTO test (k, "[applied]") VALUES (2, 3) IF NOT 
> EXISTS ;
>  [applied]
> ---
>   True
> cassandra@cqlsh:test> INSERT INTO test (k, "[applied]") VALUES (2, 3) IF NOT 
> EXISTS ;
>  [applied] | k
> ---+---
>  3 | 2
> {code}
> I doubt this comes up much (at all) in practice, but thought I'd mention it. 
> One alternative approach might be to add a LWT result type 
> ([flag|https://github.com/apache/cassandra/blob/cassandra-3.0/doc/native_protocol_v4.spec#L518-L522])
>  that segregates the "applied" flag information from optional row results.





[jira] [Commented] (CASSANDRA-11024) Unexpected exception during request; java.lang.StackOverflowError: null

2016-01-19 Thread Kai Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106870#comment-15106870
 ] 

Kai Wang commented on CASSANDRA-11024:
--

Yes, the log is like this:

{noformat}
ERROR [SharedPool-Worker-2] 2016-01-15 20:49:07,999 Message.java:611 - 
Unexpected exception during request; channel = [id: 0x727ba949, 
/192.168.0.3:50333 => /192.168.0.12:9042]
java.lang.StackOverflowError: null
at 
com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.AbstractIndexedListIterator.(AbstractIndexedListIterator.java:69)
 ~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$11.(Iterators.java:1048) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
 ~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
~[guava-16.0.jar:na]
at 
com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48) 
~[guava-16.0.jar:na]
at com.google.common.collect.Iterators$5.hasNext(Iterators.java:543) 
~[guava-16.0.jar:na]
...
... (repeat hasNext line for ~1000 times)
...
at 
com.google.common.collect.Iterators$5.hasNext(Iterators.java:542) 
~[guava-16.0.jar:na]
WARN  [SharedPool-Worker-3] 2016-01-15 20:49:08,001 SliceQueryFilter.java:307 - 
Read 55 live and 68967 tombstone cells in ks.cf for key: pk. (see 
tombstone_warn_threshold). 197 columns were requested, slices=[-]
...
{noformat}
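The ~1000 repeated {{hasNext}} frames point at deeply nested, lazily concatenated iterators: each wrapper's {{hasNext}} delegates to the one beneath it, so stack depth grows with nesting depth. A minimal Python analogue of the failure mode (illustrative only; it models the delegation pattern, not Cassandra's code path):

```python
class Leaf:
    """Innermost iterator: nothing to return."""
    def has_next(self):
        return False

class Concat:
    """Mimics a lazily concatenated iterator wrapper: has_next() simply
    delegates to the inner iterator, costing one stack frame per layer."""
    def __init__(self, inner):
        self.inner = inner
    def has_next(self):
        return self.inner.has_next()

# Wrap far past CPython's default recursion limit (1000):
it = Leaf()
for _ in range(10_000):
    it = Concat(it)

overflowed = False
try:
    it.has_next()
except RecursionError:
    overflowed = True
print(overflowed)  # -> True
```

On the JVM the analogous outcome is the {{StackOverflowError}} above; a row wide enough that each column or filtered cell contributes a wrapper is a plausible way to exceed a default thread stack.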

> Unexpected exception during request; java.lang.StackOverflowError: null
> ---
>
> Key: CASSANDRA-11024
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11024
> Project: Cassandra
>  Issue Type: Bug
> Environment: CentOS 7, Java x64 1.8.0_65
>Reporter: Kai Wang
>Priority: Minor
>
> This happened when I ran a "SELECT *" query on a very wide table. The table 
> has over 1000 columns and a lot of nulls. If I run "SELECT * ... LIMIT 10" or 
> "SELECT a,b,c FROM ...", then it's fine. The data is being actively inserted 
> when I run the query. Will try later when compaction (LCS) catches up.
> {noformat}
> ERROR [SharedPool-Worker-5] 2016-01-15 20:49:08,212 Message.java:611 - 
> Unexpected exception during request; channel = [id: 0x8e11d570, 
> /192.168.0.3:50332 => /192.168.0.11:9042]
> java.lang.StackOverflowError: null
>   at 
> com.google.common.base.Preconditions.checkPositionIndex(Preconditions.java:339)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.AbstractIndexedListIterator.(AbstractIndexedListIterator.java:69)
>  ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators$11.(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterators.forArray(Iterators.java:1048) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.RegularImmutableList.listIterator(RegularImmutableList.java:106)
>  ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.listIterator(ImmutableList.java:344) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:340) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.ImmutableList.iterator(ImmutableList.java:61) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.iterators(Iterables.java:504) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables.access$100(Iterables.java:60) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$2.iterator(Iterables.java:494) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:508) 
> ~[guava-16.0.jar:na]
>   at com.google.common.collect.Iterables$3.transform(Iterables.java:505) 
> ~[guava-16.0.jar:na]
>   at 
> com.google.common.collect.TransformedIterator.next(TransformedIterator.java:48)
>  ~[guava

[jira] [Updated] (CASSANDRA-6018) Add option to encrypt commitlog

2016-01-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-6018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-6018:

Fix Version/s: (was: 3.x)
   3.4

> Add option to encrypt commitlog 
> 
>
> Key: CASSANDRA-6018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6018
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: commit_log, encryption, security
> Fix For: 3.4
>
>
> We are going to start using cassandra for a billing system, and while I can 
> encrypt sstables at rest (via Datastax Enterprise), commit logs are more or 
> less plain text. Thus, an attacker would be able to easily read, for example, 
> credit card numbers in the clear text commit log (if the calling app does not 
> encrypt the data itself before sending it to cassandra).
> I want to allow the option of encrypting the commit logs, most likely 
> controlled by a property in the yaml.





[jira] [Commented] (CASSANDRA-6018) Add option to encrypt commitlog

2016-01-19 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106847#comment-15106847
 ] 

Jason Brown commented on CASSANDRA-6018:


Thanks for the thorough review :)

Committed to trunk; sha is 7374e9b5ab08c1f1e612bf72293ea14c959b0c3c

> Add option to encrypt commitlog 
> 
>
> Key: CASSANDRA-6018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6018
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: commit_log, encryption, security
> Fix For: 3.x
>
>
> We are going to start using cassandra for a billing system, and while I can 
> encrypt sstables at rest (via Datastax Enterprise), commit logs are more or 
> less plain text. Thus, an attacker would be able to easily read, for example, 
> credit card numbers in the clear text commit log (if the calling app does not 
> encrypt the data itself before sending it to cassandra).
> I want to allow the option of encrypting the commit logs, most likely 
> controlled by a property in the yaml.





[1/2] cassandra git commit: Encrypted commit logs

2016-01-19 Thread jasobrown
Repository: cassandra
Updated Branches:
  refs/heads/trunk 7226ac9e6 -> 7374e9b5a


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7374e9b5/src/java/org/apache/cassandra/utils/ByteBufferUtil.java
--
diff --git a/src/java/org/apache/cassandra/utils/ByteBufferUtil.java 
b/src/java/org/apache/cassandra/utils/ByteBufferUtil.java
index 6bcec96..4712dff 100644
--- a/src/java/org/apache/cassandra/utils/ByteBufferUtil.java
+++ b/src/java/org/apache/cassandra/utils/ByteBufferUtil.java
@@ -35,8 +35,8 @@ import java.util.UUID;
 import net.nicoulaj.compilecommand.annotations.Inline;
 import org.apache.cassandra.db.TypeSizes;
 import org.apache.cassandra.io.util.DataInputPlus;
+import org.apache.cassandra.io.compress.BufferType;
 import org.apache.cassandra.io.util.DataOutputPlus;
-import org.apache.cassandra.io.util.FileDataInput;
 import org.apache.cassandra.io.util.FileUtils;
 
 /**
@@ -626,4 +626,47 @@ public class ByteBufferUtil
 return readBytes(bb, length);
 }
 
+/**
+ * Ensure {@code buf} is large enough for {@code outputLength}. If not, it 
is cleaned up and a new buffer is allocated;
+ * else the buffer has its position/limit set appropriately.
+ *
+ * @param buf buffer to test the size of; may be null, in which case, a 
new buffer is allocated.
+ * @param outputLength the minimum target size of the buffer
+ * @param allowBufferResize true if resizing (reallocating) the buffer is 
allowed
+ * @return {@code buf} if it was large enough, else a newly allocated 
buffer.
+ */
+public static ByteBuffer ensureCapacity(ByteBuffer buf, int outputLength, 
boolean allowBufferResize)
+{
+BufferType bufferType = buf != null ? BufferType.typeOf(buf) : 
BufferType.ON_HEAP;
+return ensureCapacity(buf, outputLength, allowBufferResize, 
bufferType);
+}
+
+/**
+ * Ensure {@code buf} is large enough for {@code outputLength}. If not, it 
is cleaned up and a new buffer is allocated;
+ * else the buffer has its position/limit set appropriately.
+ *
+ * @param buf buffer to test the size of; may be null, in which case, a 
new buffer is allocated.
+ * @param outputLength the minimum target size of the buffer
+ * @param allowBufferResize true if resizing (reallocating) the buffer is 
allowed
+ * @param bufferType on- or off- heap byte buffer
+ * @return {@code buf} if it was large enough, else a newly allocated 
buffer.
+ */
+public static ByteBuffer ensureCapacity(ByteBuffer buf, int outputLength, 
boolean allowBufferResize, BufferType bufferType)
+{
+if (0 > outputLength)
+throw new IllegalArgumentException("invalid size for output 
buffer: " + outputLength);
+if (buf == null || buf.capacity() < outputLength)
+{
+if (!allowBufferResize)
+throw new IllegalStateException(String.format("output buffer 
is not large enough for data: current capacity %d, required %d", 
buf.capacity(), outputLength));
+FileUtils.clean(buf);
+buf = bufferType.allocate(outputLength);
+}
+else
+{
+buf.position(0).limit(outputLength);
+}
+return buf;
+}
+
 }
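The contract of {{ensureCapacity}} above - reuse the caller's buffer when it is already big enough, otherwise release it and allocate a fresh one - can be summarized in a short Python sketch (a model of the logic, not a port; a bytearray stands in for the ByteBuffer, so there is no position/limit bookkeeping):

```python
def ensure_capacity(buf, output_length, allow_resize=True):
    """Return a buffer of at least output_length bytes.

    Mirrors the Java logic: reject negative sizes, reuse buf when it is
    large enough, otherwise allocate a replacement (or fail when
    resizing is disallowed).
    """
    if output_length < 0:
        raise ValueError("invalid size for output buffer: %d" % output_length)
    if buf is None or len(buf) < output_length:
        if not allow_resize:
            raise RuntimeError("output buffer is not large enough for data")
        return bytearray(output_length)
    return buf

buf = ensure_capacity(None, 16)   # fresh 16-byte buffer
same = ensure_capacity(buf, 8)    # already big enough: reused as-is
print(same is buf)  # -> True
```

(One difference worth noting: the Java error path formats {{buf.capacity()}} into its message, which would itself NPE when {{buf}} is null and resizing is disallowed; the sketch fails before dereferencing.)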

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7374e9b5/test/data/legacy-commitlog/3.4-encrypted/CommitLog-6-1452918948163.log
--
diff --git 
a/test/data/legacy-commitlog/3.4-encrypted/CommitLog-6-1452918948163.log 
b/test/data/legacy-commitlog/3.4-encrypted/CommitLog-6-1452918948163.log
new file mode 100644
index 000..3be1fcf
Binary files /dev/null and 
b/test/data/legacy-commitlog/3.4-encrypted/CommitLog-6-1452918948163.log differ

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7374e9b5/test/data/legacy-commitlog/3.4-encrypted/hash.txt
--
diff --git a/test/data/legacy-commitlog/3.4-encrypted/hash.txt 
b/test/data/legacy-commitlog/3.4-encrypted/hash.txt
new file mode 100644
index 000..d4cca55
--- /dev/null
+++ b/test/data/legacy-commitlog/3.4-encrypted/hash.txt
@@ -0,0 +1,5 @@
+#CommitLog upgrade test, version 3.4-SNAPSHOT
+#Fri Jan 15 20:35:53 PST 2016
+cells=8777
+hash=-542543236
+cfid=9debf690-bc0a-11e5-9ac3-9fafc76bc377

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7374e9b5/test/long/org/apache/cassandra/db/commitlog/CommitLogStressTest.java
--
diff --git 
a/test/long/org/apache/cassandra/db/commitlog/CommitLogStressTest.java 
b/test/long/org/apache/cassandra/db/commitlog/CommitLogStressTest.java
index be3abb4..e6f9499 100644
--- a/test/long/org/apache/cassandra/db/commitlog/CommitLogStressTest.java
+++ b/test/long/org/apache/cassandra/db/commitlog/CommitLogStressTest.java
@@ -37,10 +37,9 @@ import java.util.concurrent.ThreadLocalRandom

[2/2] cassandra git commit: Encrypted commit logs

2016-01-19 Thread jasobrown
Encrypted commit logs

patch by jasobrown; reviewed by blambov for (CASSANDRA-6018)


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7374e9b5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7374e9b5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7374e9b5

Branch: refs/heads/trunk
Commit: 7374e9b5ab08c1f1e612bf72293ea14c959b0c3c
Parents: 7226ac9
Author: Jason Brown 
Authored: Tue Sep 1 09:24:50 2015 -0700
Committer: Jason Brown 
Committed: Tue Jan 19 07:00:32 2016 -0800

--
 conf/cassandra.yaml |  31 ++
 .../cassandra/db/commitlog/CommitLog.java   |   3 +
 .../db/commitlog/CommitLogArchiver.java |   2 +-
 .../db/commitlog/CommitLogDescriptor.java   |  64 +++-
 .../db/commitlog/CommitLogReplayer.java | 171 +++--
 .../db/commitlog/CommitLogSegment.java  |  49 ++-
 .../db/commitlog/CommitLogSegmentManager.java   |   2 +-
 .../db/commitlog/CompressedSegment.java |  72 +---
 .../EncryptedFileSegmentInputStream.java|  73 
 .../db/commitlog/EncryptedSegment.java  | 161 +
 .../db/commitlog/FileDirectSegment.java | 102 ++
 .../db/commitlog/MemoryMappedSegment.java   |   1 -
 .../cassandra/db/commitlog/SegmentReader.java   | 355 +++
 .../org/apache/cassandra/io/util/FileUtils.java |   2 +
 .../cassandra/security/EncryptionContext.java   |  62 +++-
 .../cassandra/security/EncryptionUtils.java | 277 +++
 .../apache/cassandra/utils/ByteBufferUtil.java  |  45 ++-
 .../3.4-encrypted/CommitLog-6-1452918948163.log | Bin 0 -> 872373 bytes
 .../legacy-commitlog/3.4-encrypted/hash.txt |   5 +
 .../db/commitlog/CommitLogStressTest.java   | 113 +++---
 .../db/commitlog/CommitLogDescriptorTest.java   | 311 
 .../cassandra/db/commitlog/CommitLogTest.java   | 342 +-
 .../db/commitlog/CommitLogUpgradeTest.java  |  15 +-
 .../db/commitlog/CommitLogUpgradeTestMaker.java |   6 +-
 .../db/commitlog/SegmentReaderTest.java | 147 
 .../security/EncryptionContextGenerator.java|   7 +-
 .../cassandra/security/EncryptionUtilsTest.java | 116 ++
 27 files changed, 2169 insertions(+), 365 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7374e9b5/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 779575c..e29a6d3 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -939,3 +939,34 @@ enable_scripted_user_defined_functions: false
 # below their system default. The sysinternals 'clockres' tool can confirm your system's default
 # setting.
 windows_timer_interval: 1
+
+
+# Enables encrypting data at-rest (on disk). Currently, AES/CBC/PKCS5Padding is the only supported
+# encryption algorithm. Different key providers can be plugged in, but the default reads from
+# a JCE-style keystore. A single keystore can hold multiple keys, but the one referenced by
+# the "key_alias" is the only key that will be used for encrypt operations; previously used keys
+# can still (and should!) be in the keystore and will be used on decrypt operations
+# (to handle the case of key rotation).
+#
+# In order to make use of transparent data encryption, you must download and install the
+# Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files
+# for your version of the JDK.
+# (current link: http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html)
+#
+# Currently, only the following file types are supported for transparent data encryption, although
+# more are coming in future cassandra releases: commitlog
+transparent_data_encryption_options:
+    enabled: false
+    chunk_length_kb: 64
+    cipher: AES/CBC/PKCS5Padding
+    key_alias: testing:1
+    # CBC requires iv length to be 16 bytes
+    # iv_length: 16
+    key_provider: 
+      - class_name: org.apache.cassandra.security.JKSKeyProvider
+        parameters: 
+          - keystore: test/conf/cassandra.keystore
+            keystore_password: cassandra
+            store_type: JCEKS
+            key_password: cassandra
+
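The cipher, key_alias, and keystore settings above map directly onto plain JCE calls. The following is an illustrative roundtrip sketch, not Cassandra's actual JKSKeyProvider code: it builds an in-memory JCEKS keystore (standing in for the file named by the "keystore" parameter), looks the key up by alias, and encrypts/decrypts with AES/CBC/PKCS5Padding using the 16-byte IV that CBC requires.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.KeyStore;
import java.security.SecureRandom;

public class KeystoreCipherSketch {
    public static void main(String[] args) throws Exception {
        char[] password = "cassandra".toCharArray();

        // Create an in-memory JCEKS keystore holding one AES key under the
        // alias from the yaml above (a real deployment loads it from disk).
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128); // 128-bit keys avoid needing the unlimited-strength policy files
        SecretKey key = gen.generateKey();

        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, password);
        ks.setEntry("testing:1",
                    new KeyStore.SecretKeyEntry(key),
                    new KeyStore.PasswordProtection(password));

        // Look the key up by its alias, as a JKS-style key provider would.
        SecretKey loaded = (SecretKey) ks.getKey("testing:1", password);

        // CBC requires a 16-byte IV, matching the commented iv_length setting.
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, loaded, new IvParameterSpec(iv));
        byte[] ciphertext = enc.doFinal("commitlog segment bytes".getBytes(StandardCharsets.UTF_8));

        Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, loaded, new IvParameterSpec(iv));
        System.out.println(new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```

Key rotation falls out of this layout: encrypt always uses the key_alias entry, while decrypt can fetch any older alias still present in the keystore.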

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7374e9b5/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLog.java 
b/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
index 64e22e0..0c6a6cb 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLog.java
@@ -44,6 +44,7 @@ import org.apache.cassandra.io.util.BufferedDataOutputStreamPlus;

[jira] [Updated] (CASSANDRA-11022) Use SHA hashing to store password in the credentials cache

2016-01-19 Thread Mike Adamson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Adamson updated CASSANDRA-11022:
-
Fix Version/s: 3.4

> Use SHA hashing to store password in the credentials cache
> --
>
> Key: CASSANDRA-11022
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11022
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Mike Adamson
> Fix For: 3.4
>
>
> In CASSANDRA-7715 a credentials cache has been added to the 
> {{PasswordAuthenticator}} to improve performance when multiple 
> authentications occur for the same user. 
> Unfortunately, the bcrypt hash is being cached which is one of the major 
> performance overheads in password authentication. 
> I propose that the cache is changed to use a SHA hash to store the user 
> password. As long as the cache is cleared for the user on an unsuccessful 
> authentication this won't significantly increase the ability of an attacker 
> to use a brute force attack because every other attempt will use bcrypt.
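The proposed cache behavior can be sketched as follows. This is an illustrative standalone cache with hypothetical names, not the PasswordAuthenticator code; SHA-256 is assumed as the concrete SHA variant. The key points are the cheap digest comparison on a hit and the eviction on mismatch, which forces the next attempt back through the expensive bcrypt path.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CredentialsCacheSketch {
    private static final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    static byte[] sha256(String password) throws Exception {
        return MessageDigest.getInstance("SHA-256")
                            .digest(password.getBytes(StandardCharsets.UTF_8));
    }

    // On a cache hit, compare SHA-256 digests (cheap, constant-time compare);
    // on mismatch, evict so the next attempt falls back to full bcrypt auth.
    static boolean checkCached(String user, String password) throws Exception {
        byte[] cached = cache.get(user);
        if (cached == null)
            return false; // caller performs full bcrypt authentication
        if (MessageDigest.isEqual(cached, sha256(password)))
            return true;
        cache.remove(user);
        return false;
    }

    public static void main(String[] args) throws Exception {
        cache.put("alice", sha256("s3cret")); // populated after a successful bcrypt check
        System.out.println(checkCached("alice", "s3cret")); // cheap hash hit
        System.out.println(checkCached("alice", "wrong"));  // miss, entry evicted
        System.out.println(cache.containsKey("alice"));     // next attempt hits bcrypt
    }
}
```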



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11014) Repair fails with "not enough bytes"

2016-01-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106797#comment-15106797
 ] 

Marcus Eriksson commented on CASSANDRA-11014:
-

Is it possible for you to identify the sstable that is broken? I think there 
should be a 'Scrubbing ' log message before the error in scrub-output.txt. 
Could you attach it if so? 
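Finding the offending sstable amounts to scanning scrub-output.txt for the last 'Scrubbing' line before the error. A minimal sketch of that scan, using a hypothetical log excerpt (the path and sstable generation are invented for illustration, not taken from the reporter's attachment):

```java
import java.io.BufferedReader;
import java.io.StringReader;

public class ScrubLogScan {
    public static void main(String[] args) throws Exception {
        // Hypothetical scrub-output.txt excerpt; in practice, read the real file.
        String log =
            "Scrubbing BigTableReader(path='/var/lib/cassandra/data/adsquare/device_lookup/la-42-big-Data.db')\n" +
            "java.lang.IllegalArgumentException: Not enough bytes\n";

        String lastScrubbed = null;
        try (BufferedReader r = new BufferedReader(new StringReader(log))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("Scrubbing "))
                    lastScrubbed = line;              // remember the most recent sstable
                else if (line.contains("Not enough bytes"))
                    System.out.println(lastScrubbed); // sstable being read when the error hit
            }
        }
    }
}
```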

> Repair fails with "not enough bytes"
> 
>
> Key: CASSANDRA-11014
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11014
> Project: Cassandra
>  Issue Type: Bug
> Environment: 3 node cluster, debian jessie, cassandra 2.2.4
>Reporter: Christian Schjolberg
>Priority: Blocker
> Attachments: scrub-output.txt
>
>
> After upgrading to 2.2.4, nodetool repair fails every time with the error 
> message "Not enough bytes". It appears no data is being repaired at all. 
> Here's some output:
> -@cas01:~$ nodetool repair
> [2016-01-14 12:00:16,590] Starting repair command #1, repairing keyspace 
> adsquare with repair options (parallelism: parallel, primary range: false, 
> incremental: true, job threads: 1, ColumnFamilies: [], dataCenters: [], 
> hosts: [], # of ranges: 768)
> [2016-01-14 12:00:21,935] Repair session 61174f80-bab6-11e5-9fa9-11175757c857 
> for range (-3942884673882176939,-3929110923969659376] failed with error 
> [repair #61174f80-bab6-11e5-9fa9-11175757c857 on adsquare/device_lookup, 
> (-3942884673882176939,-3929110923969659376]] Validation failed in 
> /10.10.100.61 (progress: 0%)
> The system.log on the host in question shows 
> ERROR [ValidationExecutor:2] 2016-01-14 09:58:19,935 CassandraDaemon.java:185 
> - Exception in thread Thread[ValidationExecutor:2,1,main]
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:169)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.db.compaction.LazilyCompactedRow.update(LazilyCompactedRow.java:172)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.repair.Validator.rowHash(Validator.java:194) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.repair.Validator.add(Validator.java:143) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1118)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.compaction.CompactionManager.access$700(CompactionManager.java:73)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$10.call(CompactionManager.java:671)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) 

[jira] [Comment Edited] (CASSANDRA-11014) Repair fails with "not enough bytes"

2016-01-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106797#comment-15106797
 ] 

Marcus Eriksson edited comment on CASSANDRA-11014 at 1/19/16 2:36 PM:
--

Is it possible for you to identify the sstable that is broken? I think there 
should be a 'Scrubbing ' log message before the error in scrub-output.txt. 
Could you attach it if so? We would also need the schema.


was (Author: krummas):
Is it possible for you to identify the sstable that is broken? I think there 
should be a 'Scrubbing ' log message before the error in scrub-output.txt. 
Could you attach it if so? 

> Repair fails with "not enough bytes"
> 
>
> Key: CASSANDRA-11014
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11014
> Project: Cassandra
>  Issue Type: Bug
> Environment: 3 node cluster, debian jessie, cassandra 2.2.4
>Reporter: Christian Schjolberg
>Priority: Blocker
> Attachments: scrub-output.txt
>
>
> After upgrading to 2.2.4, nodetool repair fails every time with the error 
> message "Not enough bytes". It appears no data is being repaired at all. 
> Here's some output:
> -@cas01:~$ nodetool repair
> [2016-01-14 12:00:16,590] Starting repair command #1, repairing keyspace 
> adsquare with repair options (parallelism: parallel, primary range: false, 
> incremental: true, job threads: 1, ColumnFamilies: [], dataCenters: [], 
> hosts: [], # of ranges: 768)
> [2016-01-14 12:00:21,935] Repair session 61174f80-bab6-11e5-9fa9-11175757c857 
> for range (-3942884673882176939,-3929110923969659376] failed with error 
> [repair #61174f80-bab6-11e5-9fa9-11175757c857 on adsquare/device_lookup, 
> (-3942884673882176939,-3929110923969659376]] Validation failed in 
> /10.10.100.61 (progress: 0%)
> The system.log on the host in question shows 
> ERROR [ValidationExecutor:2] 2016-01-14 09:58:19,935 CassandraDaemon.java:185 
> - Exception in thread Thread[ValidationExecutor:2,1,main]
> java.lang.IllegalArgumentException: Not enough bytes
> at 
> org.apache.cassandra.db.composites.AbstractCType.checkRemaining(AbstractCType.java:362)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCompoundCellNameType.fromByteBuffer(AbstractCompoundCellNameType.java:98)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:381)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(AbstractCType.java:365)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:75)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:52) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.AbstractCell$1.computeNext(AbstractCell.java:46) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.hasNext(SSTableIdentityIterator.java:169)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:202)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:645) 
> ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>  ~[guava-16.0.jar:na]
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138) 
> ~[guava-16.0.jar:na]
> at 
> org.apache.cassandra.db.compaction.LazilyCompactedRow.update(LazilyCompactedRow.java:172)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.repair.Validator.rowHash(Validator.java:194) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at org.apache.cassandra.repair.Validator.add(Validator.java:143) 
> ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.compaction.CompactionManager.doValidationCompaction(CompactionManager.java:1118)
>  ~[apache-cassandra-2.2.4.jar:2.2.4]
> at 
> org.apache.cassandra.db.compaction.CompactionManager.acc

svn commit: r1725527 - /cassandra/site/publish/download/index.html

2016-01-19 Thread jake
Author: jake
Date: Tue Jan 19 14:23:49 2016
New Revision: 1725527

URL: http://svn.apache.org/viewvc?rev=1725527&view=rev
Log:
rm extra )

Modified:
cassandra/site/publish/download/index.html

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1725527&r1=1725526&r2=1725527&view=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Tue Jan 19 14:23:49 2016
@@ -68,7 +68,7 @@

   Apache Cassandra 3.0.x is supported until May 2017.
   The latest release is 3.0.2, 
-  released on 2015-12-21).
+  released on 2015-12-21.
 
 





[jira] [Commented] (CASSANDRA-10991) Cleanup OpsCenter keyspace fails - node thinks that didn't joined the ring yet

2016-01-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106787#comment-15106787
 ] 

Marcus Eriksson commented on CASSANDRA-10991:
-

[~mlowicki] could you post 'describe OpsCenter'? Is it possible you have RF=0 
in the DC where it fails like this?

> Cleanup OpsCenter keyspace fails - node thinks that didn't joined the ring yet
> --
>
> Key: CASSANDRA-10991
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10991
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.12, Debian Wheezy
>Reporter: mlowicki
> Fix For: 2.1.12
>
>
> I've C* cluster spread across 3 DCs. Running {{cleanup}} on all nodes in one 
> DC always fails:
> {code}
> root@db1:~# nt cleanup system
> root@db1:~# nt cleanup sync
> root@db1:~# nt cleanup OpsCenter
> Aborted cleaning up atleast one column family in keyspace OpsCenter, check 
> server logs for more information.
> error: nodetool failed, check server logs
> -- StackTrace --
> java.lang.RuntimeException: nodetool failed, check server logs
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:292)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:204)
> root@db1:~# 
> {code}
> Checked two other DCs and running cleanup there works fine (it didn't fail 
> immediately).
> Output from {{nodetool status}} from one node in problematic DC:
> {code}
> root@db1:~# nt status
> Datacenter: Amsterdam
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  10.210.3.162   518.54 GB  256 ?   
> 50e606f5-e893-4a3b-86d3-1e5986dceea9  RAC1
> UN  10.210.3.230   532.63 GB  256 ?   
> 7b8fc988-8a6a-4d94-ae84-ab9da9ab01e8  RAC1
> UN  10.210.3.161   538.82 GB  256 ?   
> d44b0f6d-7933-4a7c-ba7b-f8648e038f85  RAC1
> UN  10.210.3.160   497.6 GB   256 ?   
> e7332179-a47e-471d-bcd4-08c638ab9ea4  RAC1
> UN  10.210.3.224   334.25 GB  256 ?   
> 92b0bd8c-0a5a-446a-83ea-2feea4988fe3  RAC1
> UN  10.210.3.118   518.34 GB  256 ?   
> ebddeaf3-1433-4372-a4ca-9c7ba3d4a26b  RAC1
> UN  10.210.3.221   516.57 GB  256 ?   
> 44d67a49-5310-4ab5-b448-a44be350abf5  RAC1
> UN  10.210.3.117   493.83 GB  256 ?   
> aae92956-82d6-421e-8f3f-22393ac7e5f7  RAC1
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  10.210.59.124  392.83 GB  320 ?   
> f770a8cc-b7bf-44ac-8cc0-214d9228dfcd  RAC1
> UN  10.210.59.151  411.9 GB   320 ?   
> 3cc87422-0e43-4cd1-91bf-484f121be072  RAC1
> UN  10.210.58.132  309.8 GB   256 ?   
> 84d94d13-28d3-4b49-a3d9-557ab47e79b9  RAC1
> UN  10.210.58.133  281.82 GB  256 ?   
> 02bd2d02-41c5-4193-81b0-dee434adb0da  RAC1
> UN  10.210.59.86   285.84 GB  256 ?   
> bc6422ea-22e9-431a-ac16-c4c040f0c4e5  RAC1
> UN  10.210.59.84   331.06 GB  256 ?   
> a798e6b0-3a84-4ec2-82bb-8474086cb315  RAC1
> UN  10.210.59.85   366.26 GB  256 ?   
> 52699077-56cf-4c1e-b308-bf79a1644b7e  RAC1
> Datacenter: Ashburn
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  10.195.15.176  534.51 GB  256 ?   
> c6ac22df-c43a-4b25-b3b5-5e12ce9c69da  RAC1
> UN  10.195.15.177  313.73 GB  256 ?   
> eafa2a72-84a2-4cdc-a634-3c660acc6af8  RAC1
> UN  10.195.15.163  470.92 GB  256 ?   
> bcd2a534-94c4-4406-8d16-c1fc26b41844  RAC1
> UN  10.195.15.162  539.82 GB  256 ?   
> bb649cef-21de-4077-a35f-994319011a06  RAC1
> UN  10.195.15.182  499.64 GB  256 ?   
> 6ce2d14d-9fb8-4494-8e97-3add05bd35de  RAC1
> UN  10.195.15.167  508.48 GB  256 ?   
> 6f359675-852a-4842-9ff2-bdc69e6b04a2  RAC1
> UN  10.195.15.166  490.28 GB  256 ?   
> 1ec5d0c5-e8bd-4973-96d9-523de91d08c5  RAC1
> UN  10.195.15.183  447.78 GB  256 ?   
> 824165b0-1f1b-40e8-9695-e2f596cb8611  RAC1
> Note: Non-system keyspaces don't have the same replication settings, 
> effective ownership information is meaningless
> {code}
> Logs from one of the nodes where {{cleanup}} fails:
> {code}
> INFO  [RMI TCP Connection(158004)-10.210.59.86] 2016-01-09 15:58:33,942 
> CompactionManager.java:388 - Cleanup cannot run before a node has joined the 
> ring
> INFO  [RMI TCP Connection(158004)-10.210.59.86] 2016-01-09 15:58:33,970 
> CompactionManager.java:388 - Cleanup cannot run before a node has joined the 
> ring
> INFO  [RMI TCP Connection(158004)-10.210.59.86] 2016-0

[jira] [Assigned] (CASSANDRA-10991) Cleanup OpsCenter keyspace fails - node thinks that didn't joined the ring yet

2016-01-19 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson reassigned CASSANDRA-10991:
---

Assignee: Marcus Eriksson

> Cleanup OpsCenter keyspace fails - node thinks that didn't joined the ring yet
> --
>
> Key: CASSANDRA-10991
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10991
> Project: Cassandra
>  Issue Type: Bug
> Environment: C* 2.1.12, Debian Wheezy
>Reporter: mlowicki
>Assignee: Marcus Eriksson
> Fix For: 2.1.12
>
>
> I've C* cluster spread across 3 DCs. Running {{cleanup}} on all nodes in one 
> DC always fails:
> {code}
> root@db1:~# nt cleanup system
> root@db1:~# nt cleanup sync
> root@db1:~# nt cleanup OpsCenter
> Aborted cleaning up atleast one column family in keyspace OpsCenter, check 
> server logs for more information.
> error: nodetool failed, check server logs
> -- StackTrace --
> java.lang.RuntimeException: nodetool failed, check server logs
> at 
> org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:292)
> at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:204)
> root@db1:~# 
> {code}
> Checked two other DCs and running cleanup there works fine (it didn't fail 
> immediately).
> Output from {{nodetool status}} from one node in problematic DC:
> {code}
> root@db1:~# nt status
> Datacenter: Amsterdam
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  10.210.3.162   518.54 GB  256 ?   
> 50e606f5-e893-4a3b-86d3-1e5986dceea9  RAC1
> UN  10.210.3.230   532.63 GB  256 ?   
> 7b8fc988-8a6a-4d94-ae84-ab9da9ab01e8  RAC1
> UN  10.210.3.161   538.82 GB  256 ?   
> d44b0f6d-7933-4a7c-ba7b-f8648e038f85  RAC1
> UN  10.210.3.160   497.6 GB   256 ?   
> e7332179-a47e-471d-bcd4-08c638ab9ea4  RAC1
> UN  10.210.3.224   334.25 GB  256 ?   
> 92b0bd8c-0a5a-446a-83ea-2feea4988fe3  RAC1
> UN  10.210.3.118   518.34 GB  256 ?   
> ebddeaf3-1433-4372-a4ca-9c7ba3d4a26b  RAC1
> UN  10.210.3.221   516.57 GB  256 ?   
> 44d67a49-5310-4ab5-b448-a44be350abf5  RAC1
> UN  10.210.3.117   493.83 GB  256 ?   
> aae92956-82d6-421e-8f3f-22393ac7e5f7  RAC1
> Datacenter: Analytics
> =
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  10.210.59.124  392.83 GB  320 ?   
> f770a8cc-b7bf-44ac-8cc0-214d9228dfcd  RAC1
> UN  10.210.59.151  411.9 GB   320 ?   
> 3cc87422-0e43-4cd1-91bf-484f121be072  RAC1
> UN  10.210.58.132  309.8 GB   256 ?   
> 84d94d13-28d3-4b49-a3d9-557ab47e79b9  RAC1
> UN  10.210.58.133  281.82 GB  256 ?   
> 02bd2d02-41c5-4193-81b0-dee434adb0da  RAC1
> UN  10.210.59.86   285.84 GB  256 ?   
> bc6422ea-22e9-431a-ac16-c4c040f0c4e5  RAC1
> UN  10.210.59.84   331.06 GB  256 ?   
> a798e6b0-3a84-4ec2-82bb-8474086cb315  RAC1
> UN  10.210.59.85   366.26 GB  256 ?   
> 52699077-56cf-4c1e-b308-bf79a1644b7e  RAC1
> Datacenter: Ashburn
> ===
> Status=Up/Down
> |/ State=Normal/Leaving/Joining/Moving
> --  AddressLoad   Tokens  OwnsHost ID 
>   Rack
> UN  10.195.15.176  534.51 GB  256 ?   
> c6ac22df-c43a-4b25-b3b5-5e12ce9c69da  RAC1
> UN  10.195.15.177  313.73 GB  256 ?   
> eafa2a72-84a2-4cdc-a634-3c660acc6af8  RAC1
> UN  10.195.15.163  470.92 GB  256 ?   
> bcd2a534-94c4-4406-8d16-c1fc26b41844  RAC1
> UN  10.195.15.162  539.82 GB  256 ?   
> bb649cef-21de-4077-a35f-994319011a06  RAC1
> UN  10.195.15.182  499.64 GB  256 ?   
> 6ce2d14d-9fb8-4494-8e97-3add05bd35de  RAC1
> UN  10.195.15.167  508.48 GB  256 ?   
> 6f359675-852a-4842-9ff2-bdc69e6b04a2  RAC1
> UN  10.195.15.166  490.28 GB  256 ?   
> 1ec5d0c5-e8bd-4973-96d9-523de91d08c5  RAC1
> UN  10.195.15.183  447.78 GB  256 ?   
> 824165b0-1f1b-40e8-9695-e2f596cb8611  RAC1
> Note: Non-system keyspaces don't have the same replication settings, 
> effective ownership information is meaningless
> {code}
> Logs from one of the nodes where {{cleanup}} fails:
> {code}
> INFO  [RMI TCP Connection(158004)-10.210.59.86] 2016-01-09 15:58:33,942 
> CompactionManager.java:388 - Cleanup cannot run before a node has joined the 
> ring
> INFO  [RMI TCP Connection(158004)-10.210.59.86] 2016-01-09 15:58:33,970 
> CompactionManager.java:388 - Cleanup cannot run before a node has joined the 
> ring
> INFO  [RMI TCP Connection(158004)-10.210.59.86] 2016-01-09 15:58:34,000 
> CompactionManager.java:388 - Cleanup cannot run before a node has joined the

svn commit: r1725526 - /cassandra/site/src/content/download/index.html

2016-01-19 Thread jake
Author: jake
Date: Tue Jan 19 14:23:27 2016
New Revision: 1725526

URL: http://svn.apache.org/viewvc?rev=1725526&view=rev
Log:
rm extra )

Modified:
cassandra/site/src/content/download/index.html

Modified: cassandra/site/src/content/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/content/download/index.html?rev=1725526&r1=1725525&r2=1725526&view=diff
==
--- cassandra/site/src/content/download/index.html (original)
+++ cassandra/site/src/content/download/index.html Tue Jan 19 14:23:27 2016
@@ -25,7 +25,7 @@

   Apache Cassandra 3.0.x is supported until May 2017.
   The latest release is {{ cassandra_stable }}, 
-  released on {{ cassandra_stable_release_date }}).
+  released on {{ cassandra_stable_release_date }}.
 
 





svn commit: r1725525 - in /cassandra/site: publish/download/index.html src/content/download/index.html

2016-01-19 Thread jake
Author: jake
Date: Tue Jan 19 14:21:45 2016
New Revision: 1725525

URL: http://svn.apache.org/viewvc?rev=1725525&view=rev
Log:
examples

Modified:
cassandra/site/publish/download/index.html
cassandra/site/src/content/download/index.html

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1725525&r1=1725524&r2=1725525&view=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Tue Jan 19 14:21:45 2016
@@ -47,7 +47,7 @@
 
 Tick-Tock Cassandra Server Releases
 
-Cassandra is moving to a monthly release process called Tick-Tock.  
Even-numbered releases contain new features; odd-numbered ones contain bug 
fixes only.  If a critical bug is found, a patch will be released against the 
most recent bug fix release.  http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/";>Read 
more about tick-tock here.
+Cassandra is moving to a monthly release process called Tick-Tock.  
Even-numbered releases (e.g. 3.2) contain new features; odd-numbered ones (e.g. 
3.3) contain bug fixes only.  If a critical bug is found, a patch will be 
released against the most recent bug fix release.  http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/";>Read 
more about tick-tock here.
 
 The latest tick-tock release is 3.2.1, released on
 2016-01-18.

Modified: cassandra/site/src/content/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/content/download/index.html?rev=1725525&r1=1725524&r2=1725525&view=diff
==
--- cassandra/site/src/content/download/index.html (original)
+++ cassandra/site/src/content/download/index.html Tue Jan 19 14:21:45 2016
@@ -4,7 +4,7 @@
 
 Tick-Tock Cassandra Server Releases
 
-Cassandra is moving to a monthly release process called Tick-Tock.  
Even-numbered releases contain new features; odd-numbered ones contain bug 
fixes only.  If a critical bug is found, a patch will be released against the 
most recent bug fix release.  http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/";>Read 
more about tick-tock here.
+Cassandra is moving to a monthly release process called Tick-Tock.  
Even-numbered releases (e.g. 3.2) contain new features; odd-numbered ones (e.g. 
3.3) contain bug fixes only.  If a critical bug is found, a patch will be 
released against the most recent bug fix release.  http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/";>Read 
more about tick-tock here.
 
 The latest tick-tock release is {{ cassandra_ticktock }}, released on
 {{ cassandra_ticktock_release_date }}.




svn commit: r1725524 - in /cassandra/site: publish/download/index.html publish/index.html src/settings.py

2016-01-19 Thread jake
Author: jake
Date: Tue Jan 19 14:18:13 2016
New Revision: 1725524

URL: http://svn.apache.org/viewvc?rev=1725524&view=rev
Log:
3.2.1

Modified:
cassandra/site/publish/download/index.html
cassandra/site/publish/index.html
cassandra/site/src/settings.py

Modified: cassandra/site/publish/download/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/download/index.html?rev=1725524&r1=1725523&r2=1725524&view=diff
==
--- cassandra/site/publish/download/index.html (original)
+++ cassandra/site/publish/download/index.html Tue Jan 19 14:18:13 2016
@@ -49,16 +49,16 @@
 
 Cassandra is moving to a monthly release process called Tick-Tock.  
Even-numbered releases contain new features; odd-numbered ones contain bug 
fixes only.  If a critical bug is found, a patch will be released against the 
most recent bug fix release.  http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/";>Read 
more about tick-tock here.
 
-The latest tick-tock release is 3.2, released on
-2016-01-11.
+The latest tick-tock release is 3.2.1, released on
+2016-01-18.
 
 
 
   
-  http://www.apache.org/dyn/closer.lua/cassandra/3.2/apache-cassandra-3.2-bin.tar.gz";>apache-cassandra-3.2-bin.tar.gz
-  [http://www.apache.org/dist/cassandra/3.2/apache-cassandra-3.2-bin.tar.gz.asc";>PGP]
-  [http://www.apache.org/dist/cassandra/3.2/apache-cassandra-3.2-bin.tar.gz.md5";>MD5]
-  [http://www.apache.org/dist/cassandra/3.2/apache-cassandra-3.2-bin.tar.gz.sha1";>SHA1]
+  http://www.apache.org/dyn/closer.lua/cassandra/3.2.1/apache-cassandra-3.2.1-bin.tar.gz";>apache-cassandra-3.2.1-bin.tar.gz
+  [http://www.apache.org/dist/cassandra/3.2.1/apache-cassandra-3.2.1-bin.tar.gz.asc";>PGP]
+  [http://www.apache.org/dist/cassandra/3.2.1/apache-cassandra-3.2.1-bin.tar.gz.md5";>MD5]
+  [http://www.apache.org/dist/cassandra/3.2.1/apache-cassandra-3.2.1-bin.tar.gz.sha1";>SHA1]
   
 
 

Modified: cassandra/site/publish/index.html
URL: 
http://svn.apache.org/viewvc/cassandra/site/publish/index.html?rev=1725524&r1=1725523&r2=1725524&view=diff
==
--- cassandra/site/publish/index.html (original)
+++ cassandra/site/publish/index.html Tue Jan 19 14:18:13 2016
@@ -77,7 +77,7 @@
   
   
   
-  http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/";>Tick-Tock
 release 3.2 (http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.2";>Changes)
+  http://www.planetcassandra.org/blog/cassandra-2-2-3-0-and-beyond/";>Tick-Tock
 release 3.2.1 (http://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/cassandra-3.2.1";>Changes)
   
   
 

Modified: cassandra/site/src/settings.py
URL: 
http://svn.apache.org/viewvc/cassandra/site/src/settings.py?rev=1725524&r1=1725523&r2=1725524&view=diff
==
--- cassandra/site/src/settings.py (original)
+++ cassandra/site/src/settings.py Tue Jan 19 14:18:13 2016
@@ -92,8 +92,8 @@ SITE_POST_PROCESSORS = {
 }
 
 class CassandraDef(object):
-ticktock_version = '3.2'
-ticktock_version_date = '2016-01-11'
+ticktock_version = '3.2.1'
+ticktock_version_date = '2016-01-18'
 stable_version = '3.0.2'
 stable_release_date = '2015-12-21'
 oldstable_version = '2.2.4'




[jira] [Commented] (CASSANDRA-6018) Add option to encrypt commitlog

2016-01-19 Thread Branimir Lambov (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106776#comment-15106776
 ] 

Branimir Lambov commented on CASSANDRA-6018:


LGTM. Thanks for the patience.

> Add option to encrypt commitlog 
> 
>
> Key: CASSANDRA-6018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6018
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: commit_log, encryption, security
> Fix For: 3.x
>
>
> We are going to start using cassandra for a billing system, and while I can 
> encrypt sstables at rest (via Datastax Enterprise), commit logs are more or 
> less plain text. Thus, an attacker would be able to easily read, for example, 
> credit card numbers in the clear text commit log (if the calling app does not 
> encrypt the data itself before sending it to cassandra).
> I want to allow the option of encrypting the commit logs, most likely 
> controlled by a property in the yaml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-6018) Add option to encrypt commitlog

2016-01-19 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-6018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106764#comment-15106764
 ] 

Jason Brown commented on CASSANDRA-6018:


Addressed latest comments 
[here|https://github.com/jasobrown/cassandra/commit/7022957df2ff7470dc9c48ca1331c705bfed36e9]

- added new field to SegmentReader.SyncSection, named toleratesErrorsInSection, 
which should resolve the {{toleratesTruncation}} concern.
- fixed OR-clause when catching SegmentReaderException
- renamed {{SegmentReader.toleratesErrors}} to {{toleratesTruncation}} to 
better reflect its derivation from the variable in 
{{CommitLogReplayer.recover}}


> Add option to encrypt commitlog 
> 
>
> Key: CASSANDRA-6018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-6018
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: Jason Brown
>Assignee: Jason Brown
>  Labels: commit_log, encryption, security
> Fix For: 3.x
>
>
> We are going to start using cassandra for a billing system, and while I can 
> encrypt sstables at rest (via Datastax Enterprise), commit logs are more or 
> less plain text. Thus, an attacker would be able to easily read, for example, 
> credit card numbers in the clear text commit log (if the calling app does not 
> encrypt the data itself before sending it to cassandra).
> I want to allow the option of encrypting the commit logs, most likely 
> controlled by a property in the yaml.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (CASSANDRA-10842) compaction_throughput_tests are failing

2016-01-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne reassigned CASSANDRA-10842:


Assignee: Sylvain Lebresne

> compaction_throughput_tests are failing
> ---
>
> Key: CASSANDRA-10842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10842
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Compaction, Testing
>Reporter: Philip Thompson
>Assignee: Sylvain Lebresne
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> {{compaction_test.TestCompaction_with_DateTieredCompactionStrategy.compaction_throughput_test}}
>  and 
> {{compaction_test.TestCompaction_with_LeveledCompactionStrategy.compaction_throughput_test}}
>  are failing on 3.0-head, 2.1-head, and 2.2-head for the last two builds. 
> See: http://cassci.datastax.com/job/cassandra-3.0_dtest/429/testReport/
> The test sets compaction throughput to 5, via nodetool, but is finding an 
> average throughput greater than 5 in the logs. I cannot reproduce this 
> locally on OSX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10842) compaction_throughput_tests are failing

2016-01-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106750#comment-15106750
 ] 

Sylvain Lebresne commented on CASSANDRA-10842:
--

Looking at the last few failures, it seems the throughput we get is higher than, 
but pretty close to, the expected one (for instance {{5.092345}} instead of 
{{5.0}}). I don't think this is really a problem, especially since, as far as I 
can tell, the throughput during compaction is computed on the input files, while 
the value computed at the end of compaction (which the test uses) is based on 
the output files, so we can't expect the two to match exactly. I've pushed a 
trivial pull request at 
https://github.com/riptano/cassandra-dtest/pull/752 to round the avg throughput 
computed post-compaction. It's probably not perfect, but as far as I can tell 
from the failure history, it should be enough to make the test pass reliably.
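A minimal sketch of that rounding approach (hypothetical code, not the actual dtest change; the function name and threshold are assumptions):

```python
def throughput_ok(measured_mb_s, configured_mb_s):
    """Compare the post-compaction average against the configured cap after
    rounding, since the output-based average can slightly exceed the
    input-side throughput limit."""
    return round(measured_mb_s) <= configured_mb_s

# 5.092345 was the kind of value seen in the failure history; rounded it is 5,
# which no longer exceeds a configured limit of 5 MB/sec.
```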

> compaction_throughput_tests are failing
> ---
>
> Key: CASSANDRA-10842
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10842
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Compaction, Testing
>Reporter: Philip Thompson
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
>
> {{compaction_test.TestCompaction_with_DateTieredCompactionStrategy.compaction_throughput_test}}
>  and 
> {{compaction_test.TestCompaction_with_LeveledCompactionStrategy.compaction_throughput_test}}
>  are failing on 3.0-head, 2.1-head, and 2.2-head for the last two builds. 
> See: http://cassci.datastax.com/job/cassandra-3.0_dtest/429/testReport/
> The test sets compaction throughput to 5, via nodetool, but is finding an 
> average throughput greater than 5 in the logs. I cannot reproduce this 
> locally on OSX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10383) Disable auto snapshot on selected tables.

2016-01-19 Thread Robert Stupp (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Stupp updated CASSANDRA-10383:
-
Fix Version/s: 4.x

> Disable auto snapshot on selected tables.
> -
>
> Key: CASSANDRA-10383
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10383
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Tommy Stendahl
>Assignee: Tommy Stendahl
>  Labels: doc-impacting, messaging-service-bump-required
> Fix For: 4.x
>
> Attachments: 10383.txt
>
>
> I have a use case where I would like to turn off auto snapshot for selected 
> tables; I don't want to turn it off completely since it's a good feature. 
> Looking at the code I think it would be relatively easy to fix.
> My plan is to create a new table property named something like 
> "disable_auto_snapshot". If set to false it will prevent auto snapshot on the 
> table; if set to true, auto snapshot will be controlled by the "auto_snapshot" 
> property in cassandra.yaml. Default would be true.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (CASSANDRA-10392) Allow Cassandra to trace to custom tracing implementations

2016-01-19 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106669#comment-15106669
 ] 

mck edited comment on CASSANDRA-10392 at 1/19/16 12:45 PM:
---

Added a barebones dtest; working on "verifying the message passing makes it 
across multiple nodes".


was (Author: michaelsembwever):
added barebones dtest

> Allow Cassandra to trace to custom tracing implementations 
> ---
>
> Key: CASSANDRA-10392
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10392
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: mck
>Assignee: mck
> Fix For: 3.x
>
> Attachments: 10392-trunk.txt, cassandra-dtest_master-10392.txt
>
>
> It should be possible to use an external tracing solution in Cassandra by 
> abstracting out the writing of tracing to system_traces tables in the tracing 
> package to separate implementation classes and leaving abstract classes in 
> place that define the interface and behaviour otherwise of C* tracing.
> Then via a system property "cassandra.custom_tracing_class" the Tracing class 
> implementation could be swapped out with something third party.
> An example of this is adding Zipkin tracing into Cassandra in the Summit 
> [presentation|http://thelastpickle.com/files/2015-09-24-using-zipkin-for-full-stack-tracing-including-cassandra/presentation/tlp-reveal.js/tlp-cassandra-zipkin.html].
>  Code for the implemented Zipkin plugin can be found at 
> https://github.com/thelastpickle/cassandra-zipkin-tracing/
> In addition this patch passes the custom payload through into the tracing 
> session allowing a third party tracing solution like Zipkin to do full-stack 
> tracing from clients through and into Cassandra.
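The pluggable-class mechanism described above - a fully-qualified class name supplied via a system property, falling back to the default implementation - can be illustrated with a small sketch (this mirrors the pattern only, not Cassandra's actual loader; names are assumptions):

```python
import importlib

def load_pluggable_class(class_path, default_cls):
    """Resolve an implementation class from a fully-qualified name, falling
    back to the default when the property is unset - the same pattern as
    the cassandra.custom_tracing_class system property."""
    if not class_path:
        return default_cls
    module_name, _, cls_name = class_path.rpartition(".")
    return getattr(importlib.import_module(module_name), cls_name)
```

A Zipkin-style plugin would then simply be a subclass shipped on the classpath and named in the property.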



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10392) Allow Cassandra to trace to custom tracing implementations

2016-01-19 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-10392:

Attachment: cassandra-dtest_master-10392.txt

Added a barebones dtest.

> Allow Cassandra to trace to custom tracing implementations 
> ---
>
> Key: CASSANDRA-10392
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10392
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: mck
>Assignee: mck
> Fix For: 3.x
>
> Attachments: 10392-trunk.txt, cassandra-dtest_master-10392.txt
>
>
> It should be possible to use an external tracing solution in Cassandra by 
> abstracting out the writing of tracing to system_traces tables in the tracing 
> package to separate implementation classes and leaving abstract classes in 
> place that define the interface and behaviour otherwise of C* tracing.
> Then via a system property "cassandra.custom_tracing_class" the Tracing class 
> implementation could be swapped out with something third party.
> An example of this is adding Zipkin tracing into Cassandra in the Summit 
> [presentation|http://thelastpickle.com/files/2015-09-24-using-zipkin-for-full-stack-tracing-including-cassandra/presentation/tlp-reveal.js/tlp-cassandra-zipkin.html].
>  Code for the implemented Zipkin plugin can be found at 
> https://github.com/thelastpickle/cassandra-zipkin-tracing/
> In addition this patch passes the custom payload through into the tracing 
> session allowing a third party tracing solution like Zipkin to do full-stack 
> tracing from clients through and into Cassandra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9646) Duplicated schema change event for table creation

2016-01-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106619#comment-15106619
 ] 

Sylvain Lebresne commented on CASSANDRA-9646:
-

[~Stefania] could you write a test equivalent to the 
{{pushed_notifications_test.py:TestPushedNotifications.schema_changes_test}} 
test you added in the context of CASSANDRA-9961, but one that would run on 2.2 
(as that one uses MVs), so we can check whether this is still an issue in 2.2? 
I'll note that CASSANDRA-10932 suggests this isn't a problem, at least in 3.0, 
so maybe this is now fixed, or maybe the schema changes in 3.0 happened to fix 
it.

> Duplicated schema change event for table creation
> -
>
> Key: CASSANDRA-9646
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9646
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: OSX 10.9
>Reporter: Jorge Bay
>Assignee: Stefania
>Priority: Minor
>  Labels: client-impacting
> Fix For: 2.2.x
>
>
> When I create a table (or a function), I'm getting notifications for 2 
> changes:
> - Target:"KEYSPACE" and type: "UPDATED"
> - Target: "TABLE" AND type "CREATED".
> I think the first one should not be there. This only occurs with C* 2.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-9646) Duplicated schema change event for table creation

2016-01-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-9646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-9646:

Assignee: Stefania

> Duplicated schema change event for table creation
> -
>
> Key: CASSANDRA-9646
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9646
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: OSX 10.9
>Reporter: Jorge Bay
>Assignee: Stefania
>Priority: Minor
>  Labels: client-impacting
> Fix For: 2.2.x
>
>
> When I create a table (or a function), I'm getting notifications for 2 
> changes:
> - Target:"KEYSPACE" and type: "UPDATED"
> - Target: "TABLE" AND type "CREATED".
> I think the first one should not be there. This only occurs with C* 2.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (CASSANDRA-10932) pushed_notifications_test.py schema_changes_test is failing

2016-01-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne resolved CASSANDRA-10932.
--
   Resolution: Fixed
 Assignee: Sylvain Lebresne
Fix Version/s: (was: 3.0.x)
   (was: 3.x)
   3.3
   3.0.3

The test is clearly bogus, and we do want the 8 notifications that are sent, so 
I modified the test and committed (207a1e495a745f525b923c8276fa049e59bf1777 in 
the dtests).

As to why the test was expecting 14 notifications, I'm not sure, but this could 
be related to CASSANDRA-9646. It appears that on some versions we did send 
duplicate notifications, and the test might have been written when that was the 
case. I also didn't find a ticket related to fixing this, but it's apparently 
fixed at least on 3.0 (that particular test doesn't run on 2.2). The CI history 
also seems to suggest the test has never run successfully there, so it's hard 
to pinpoint a particular fix.

Anyway, the test is fixed so closing and we can verify whether those duplicate 
notifications still happen in 2.2 in CASSANDRA-9646.
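For reference, the dtest's subset-style assertions (assertDictContainsSubset) amount to the following check; a quick illustrative sketch (the notification dict is taken from the debug output quoted below):

```python
def contains_subset(expected, actual):
    """True when every key/value pair of `expected` is present in `actual`,
    mirroring unittest's assertDictContainsSubset."""
    return all(actual.get(k) == v for k, v in expected.items())

# One of the eight notifications actually received by the driver:
notification = {'keyspace': u'ks', 'change_type': u'CREATED',
                'target_type': u'KEYSPACE'}
```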

> pushed_notifications_test.py schema_changes_test is failing
> ---
>
> Key: CASSANDRA-10932
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10932
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Philip Thompson
>Assignee: Sylvain Lebresne
> Fix For: 3.0.3, 3.3
>
>
> {{pushed_notifications_test.py:TestPushedNotifications.schema_changes_test}} 
> is failing on HEAD of cassandra-3.0. It may be simply a problem with the test 
> assertions, so someone just needs to double check if the schema change 
> notifications pushed to the driver are correct.
> In actuality, the driver gets 8 notifications, listed in the debug output of 
> the test failure:
> {code}
> ==
> FAIL: schema_changes_test (pushed_notifications_test.TestPushedNotifications)
> --
> Traceback (most recent call last):
>   File "/Users/philipthompson/cstar/cassandra-dtest/tools.py", line 253, in 
> wrapped
> f(obj)
>   File 
> "/Users/philipthompson/cstar/cassandra-dtest/pushed_notifications_test.py", 
> line 244, in schema_changes_test
> self.assertEquals(14, len(notifications))
> AssertionError: 14 != 8
>  >> begin captured logging << 
> dtest: DEBUG: cluster ccm directory: 
> /var/folders/v3/z4wf_34n1q506_xjdy49gb78gn/T/dtest-93xMe2
> dtest: DEBUG: Source 127.0.0.2 sent {'keyspace': u'ks', 'change_type': 
> u'CREATED', 'target_type': u'KEYSPACE'}
> dtest: DEBUG: Source 127.0.0.2 sent {'keyspace': u'ks', 'change_type': 
> u'CREATED', 'target_type': u'TABLE', u'table': u't'}
> dtest: DEBUG: Source 127.0.0.2 sent {'keyspace': u'ks', 'change_type': 
> u'UPDATED', 'target_type': u'TABLE', u'table': u't'}
> dtest: DEBUG: Source 127.0.0.2 sent {'keyspace': u'ks', 'change_type': 
> u'CREATED', 'target_type': u'TABLE', u'table': u'mv'}
> dtest: DEBUG: Source 127.0.0.2 sent {'keyspace': u'ks', 'change_type': 
> u'UPDATED', 'target_type': u'TABLE', u'table': u'mv'}
> dtest: DEBUG: Source 127.0.0.2 sent {'keyspace': u'ks', 'change_type': 
> u'DROPPED', 'target_type': u'TABLE', u'table': u'mv'}
> dtest: DEBUG: Source 127.0.0.2 sent {'keyspace': u'ks', 'change_type': 
> u'DROPPED', 'target_type': u'TABLE', u'table': u't'}
> dtest: DEBUG: Source 127.0.0.2 sent {'keyspace': u'ks', 'change_type': 
> u'DROPPED', 'target_type': u'KEYSPACE'}
> dtest: DEBUG: Waiting for notifications from 127.0.0.2
> - >> end captured logging << -
> {code}
> The test has been expecting the following 14, though:
> {code}
> self.assertDictContainsSubset({'change_type': u'CREATED', 'target_type': 
> u'KEYSPACE'}, notifications[0])
> self.assertDictContainsSubset({'change_type': u'UPDATED', 
> 'target_type': u'KEYSPACE'}, notifications[1])
> self.assertDictContainsSubset({'change_type': u'CREATED', 
> 'target_type': u'TABLE', u'table': u't'}, notifications[2])
> self.assertDictContainsSubset({'change_type': u'UPDATED', 
> 'target_type': u'KEYSPACE'}, notifications[3])
> self.assertDictContainsSubset({'change_type': u'UPDATED', 
> 'target_type': u'TABLE', u'table': u't'}, notifications[4])
> self.assertDictContainsSubset({'change_type': u'UPDATED', 
> 'target_type': u'KEYSPACE'}, notifications[5])
> self.assertDictContainsSubset({'change_type': u'CREATED', 
> 'target_type': u'TABLE', u'table': u'mv'}, notifications[6])
> self.assertDictContainsSubset({'change_type': u'UPDATED', 
> 'target_type': u'KEYSPACE'}, notifications[7])
>  

[jira] [Commented] (CASSANDRA-10919) sstableutil_test.py:SSTableUtilTest.abortedcompaction_test flapping on 3.0

2016-01-19 Thread Sylvain Lebresne (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106571#comment-15106571
 ] 

Sylvain Lebresne commented on CASSANDRA-10919:
--

Btw, the test is not flapping, it's hard failing with the following error:
{noformat}
==
FAIL: abortedcompaction_test (sstableutil_test.SSTableUtilTest)
--
Traceback (most recent call last):
  File "/home/pcmanus/Git/cassandra-dtest/sstableutil_test.py", line 81, in 
abortedcompaction_test
finalfiles, tmpfiles = self._check_files(node, KeyspaceName, TableName, 
finalfiles)
  File "/home/pcmanus/Git/cassandra-dtest/sstableutil_test.py", line 140, in 
_check_files
self.assertEqual(expected_oplogs, oplogs)
AssertionError: Lists differ: [] != ['/tmp/dtest-dkELlT/test/node1...

Second list contains 3 additional elements.
First extra element 0:
/tmp/dtest-dkELlT/test/node1/data0/keyspace1/standard1-35624540be9711e59abf0dd672a44a0c/ma_txn_compaction_5cb96060-be97-11e5-9abf-0dd672a44a0c.log

- []
+ 
['/tmp/dtest-dkELlT/test/node1/data0/keyspace1/standard1-35624540be9711e59abf0dd672a44a0c/ma_txn_compaction_5cb96060-be97-11e5-9abf-0dd672a44a0c.log',
+  
'/tmp/dtest-dkELlT/test/node1/data2/keyspace1/standard1-35624540be9711e59abf0dd672a44a0c/ma_txn_compaction_5cb9ae81-be97-11e5-9abf-0dd672a44a0c.log',
+  
'/tmp/dtest-dkELlT/test/node1/data1/keyspace1/standard1-35624540be9711e59abf0dd672a44a0c/ma_txn_compaction_5cb9ae80-be97-11e5-9abf-0dd672a44a0c.log']
 >> begin captured logging << 
{noformat}
In other words, it finds transaction log files when the test wasn't expecting 
any. I doubt this is terribly hard to fix, but I'm not that familiar with the 
new transaction log files, so I'll let [~Stefania] have a look.
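The leftover files follow the transaction log naming pattern visible in the failure output; a rough sketch of how a test could spot them (the pattern is inferred from the paths above, not taken from the dtest code):

```python
import re

# Matches names like ma_txn_compaction_5cb96060-be97-11e5-9abf-0dd672a44a0c.log,
# i.e. <format-version>_txn_<operation>_<uuid>.log
TXN_LOG_RE = re.compile(
    r"^[a-z]+_txn_[a-z]+_"
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\.log$"
)

def is_txn_log(filename):
    """Return True if the file name looks like a compaction transaction log."""
    return TXN_LOG_RE.match(filename) is not None
```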

> sstableutil_test.py:SSTableUtilTest.abortedcompaction_test flapping on 3.0
> --
>
> Key: CASSANDRA-10919
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10919
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Stefania
>  Labels: dtest
> Fix For: 3.0.x
>
>
> {{sstableutil_test.py:SSTableUtilTest.abortedcompaction_test}} flaps on 3.0:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/438/testReport/junit/sstableutil_test/SSTableUtilTest/abortedcompaction_test/
> It also flaps on the CassCI job running without vnodes:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/110/testReport/junit/sstableutil_test/SSTableUtilTest/abortedcompaction_test/history/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10919) sstableutil_test.py:SSTableUtilTest.abortedcompaction_test flapping on 3.0

2016-01-19 Thread Sylvain Lebresne (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-10919:
-
Assignee: Stefania

> sstableutil_test.py:SSTableUtilTest.abortedcompaction_test flapping on 3.0
> --
>
> Key: CASSANDRA-10919
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10919
> Project: Cassandra
>  Issue Type: Sub-task
>Reporter: Jim Witschey
>Assignee: Stefania
>  Labels: dtest
> Fix For: 3.0.x
>
>
> {{sstableutil_test.py:SSTableUtilTest.abortedcompaction_test}} flaps on 3.0:
> http://cassci.datastax.com/job/cassandra-3.0_dtest/438/testReport/junit/sstableutil_test/SSTableUtilTest/abortedcompaction_test/
> It also flaps on the CassCI job running without vnodes:
> http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/110/testReport/junit/sstableutil_test/SSTableUtilTest/abortedcompaction_test/history/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CASSANDRA-10937) OOM on multiple nodes on write load (v. 3.0.0), problem also present on DSE-4.8.3, but there it survives more time

2016-01-19 Thread Peter Kovgan (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Kovgan updated CASSANDRA-10937:
-
Attachment: cassandra-to-jack-krupansky.docx

Attached answers to Jack Krupansky (docx)

> OOM on multiple nodes on write load (v. 3.0.0), problem also present on 
> DSE-4.8.3, but there it survives more time
> --
>
> Key: CASSANDRA-10937
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10937
> Project: Cassandra
>  Issue Type: Bug
> Environment: Cassandra : 3.0.0
> Installed as open archive, no connection to any OS specific installer.
> Java:
> Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
> OS :
> Linux version 2.6.32-431.el6.x86_64 
> (mockbu...@x86-023.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red 
> Hat 4.4.7-4) (GCC) ) #1 SMP Sun Nov 10 22:19:54 EST 2013
> We have:
> 8 guests ( Linux OS as above) on 2 (VMWare managed) physical hosts. Each 
> physical host keeps 4 guests.
> Physical host parameters(shared by all 4 guests):
> Model: HP ProLiant DL380 Gen9
> Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz
> 46 logical processors.
> Hyperthreading - enabled
> Each guest assigned to have:
> 1 disk 300 Gb for seq. log (NOT SSD)
> 1 disk 4T for data (NOT SSD)
> 11 CPU cores
> Disks are local, not shared.
> Memory on each host -  24 Gb total.
> 8 (or 6, tested both) Gb - cassandra heap
> (lshw and cpuinfo attached in file test2.rar)
>Reporter: Peter Kovgan
>Priority: Critical
> Attachments: cassandra-to-jack-krupansky.docx, gc-stat.txt, 
> more-logs.rar, some-heap-stats.rar, test2.rar, test3.rar, test4.rar, 
> test5.rar, test_2.1.rar, test_2.1_logs_older.rar, 
> test_2.1_restart_attempt_log.rar
>
>
> 8 cassandra nodes.
> Load test started with 4 clients(different and not equal machines), each 
> running 1000 threads.
> Each thread assigned in round-robin way to run one of 4 different inserts. 
> Consistency->ONE.
> I attach the full CQL schema of tables and the query of insert.
> Replication factor - 2:
> create keyspace OBLREPOSITORY_NY with replication = 
> {'class':'NetworkTopologyStrategy','NY':2};
> Initial throughput is:
> 215,000 inserts/sec
> or
> 54 MB/sec, considering a single insert size a bit larger than 256 bytes.
> Data:
> all fields(5-6) are short strings, except one is BLOB of 256 bytes.
> After about 2-3 hours of work, I was forced to increase the timeout from 2000 
> to 5000ms, as some requests were failing due to the short timeout.
> Later on (after approx. 12 hours of work) OOM happens on multiple nodes.
> (all failed nodes logs attached)
> I also attach the java load client and instructions on how to set it up and 
> use it (test2.rar).
> Update:
> Later on, the test was repeated with a lesser load (10 mes/sec) and more 
> relaxed CPU (idle 25%), with only 2 test clients, but the test failed anyway.
> Update:
> DSE-4.8.3 also failed with OOM (3 nodes out of 8), but there it survived 48 
> hours, not 10-12.
> Attachments:
> test2.rar -contains most of material
> more-logs.rar - contains additional nodes logs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-9949) maxPurgeableTimestamp needs to check memtables too

2016-01-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106541#comment-15106541
 ] 

Marcus Eriksson commented on CASSANDRA-9949:


I doubt it matters much which timestamp we pick there - this should happen very 
rarely, and when it does, there is not much harm (we keep some tombstones around 
for a bit longer), so let's keep it consistent with the apply(Cell insert) 
functionality.

The 2.2 patch LGTM

> maxPurgeableTimestamp needs to check memtables too
> --
>
> Key: CASSANDRA-9949
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9949
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Jonathan Ellis
>Assignee: Stefania
> Fix For: 2.2.x, 3.0.x, 3.x
>
>
> overlapIterator/maxPurgeableTimestamp don't include the memtables, so a 
> very-out-of-order write could be ignored



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10392) Allow Cassandra to trace to custom tracing implementations

2016-01-19 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106538#comment-15106538
 ] 

mck commented on CASSANDRA-10392:
-

In the patch provided there's nothing in the design preventing this as a simple 
addition.

> Allow Cassandra to trace to custom tracing implementations 
> ---
>
> Key: CASSANDRA-10392
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10392
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: mck
>Assignee: mck
> Fix For: 3.x
>
> Attachments: 10392-trunk.txt
>
>
> It should be possible to use an external tracing solution in Cassandra by 
> abstracting out the writing of tracing to system_traces tables in the tracing 
> package to separate implementation classes and leaving abstract classes in 
> place that define the interface and behaviour otherwise of C* tracing.
> Then via a system property "cassandra.custom_tracing_class" the Tracing class 
> implementation could be swapped out with something third party.
> An example of this is adding Zipkin tracing into Cassandra in the Summit 
> [presentation|http://thelastpickle.com/files/2015-09-24-using-zipkin-for-full-stack-tracing-including-cassandra/presentation/tlp-reveal.js/tlp-cassandra-zipkin.html].
>  Code for the implemented Zipkin plugin can be found at 
> https://github.com/thelastpickle/cassandra-zipkin-tracing/
> In addition this patch passes the custom payload through into the tracing 
> session allowing a third party tracing solution like Zipkin to do full-stack 
> tracing from clients through and into Cassandra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-01-19 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ef32d629
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ef32d629
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ef32d629

Branch: refs/heads/cassandra-3.3
Commit: ef32d629d5f549394bab164ed04a44b76db1b4c2
Parents: 456581e c1a113a
Author: Sylvain Lebresne 
Authored: Tue Jan 19 11:02:22 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 19 11:02:22 2016 +0100

--
 CHANGES.txt| 2 ++
 src/java/org/apache/cassandra/db/LegacyLayout.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef32d629/CHANGES.txt
--
diff --cc CHANGES.txt
index 4965920,04d0354..b70464f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,6 -1,6 +1,8 @@@
 -3.0.3
 +3.3
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Check the column name, not cell name, for dropped columns when reading
+legacy sstables (CASSANDRA-11018)
   * Don't attempt to index clustering values of static rows (CASSANDRA-11021)
   * Remove checksum files after replaying hints (CASSANDRA-10947)
   * Support passing base table metadata to custom 2i validation 
(CASSANDRA-10924)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef32d629/src/java/org/apache/cassandra/db/LegacyLayout.java
--



[1/6] cassandra git commit: Check the column name, not cell name, for dropped columns

2016-01-19 Thread slebresne
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 c7cbde218 -> c1a113a9e
  refs/heads/cassandra-3.3 456581e54 -> ef32d629d
  refs/heads/trunk e13ea8db6 -> 7226ac9e6


Check the column name, not cell name, for dropped columns

patch by slebresne; reviewed by krummas for CASSANDRA-11018


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1a113a9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1a113a9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1a113a9

Branch: refs/heads/cassandra-3.0
Commit: c1a113a9e3381d5278ca2254b0d0b062cfa7551b
Parents: c7cbde2
Author: Sylvain Lebresne 
Authored: Mon Jan 18 16:02:06 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 19 11:01:32 2016 +0100

--
 CHANGES.txt| 2 ++
 src/java/org/apache/cassandra/db/LegacyLayout.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1a113a9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9c0ab85..04d0354 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 3.0.3
+ * Check the column name, not cell name, for dropped columns when reading
+   legacy sstables (CASSANDRA-11018)
  * Don't attempt to index clustering values of static rows (CASSANDRA-11021)
  * Remove checksum files after replaying hints (CASSANDRA-10947)
  * Support passing base table metadata to custom 2i validation 
(CASSANDRA-10924)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1a113a9/src/java/org/apache/cassandra/db/LegacyLayout.java
--
diff --git a/src/java/org/apache/cassandra/db/LegacyLayout.java 
b/src/java/org/apache/cassandra/db/LegacyLayout.java
index 91b7755..e9e5169 100644
--- a/src/java/org/apache/cassandra/db/LegacyLayout.java
+++ b/src/java/org/apache/cassandra/db/LegacyLayout.java
@@ -961,7 +961,7 @@ public abstract class LegacyLayout
 // then simply ignore the cell is fine. But also not that we 
ignore if it's the
 // system keyspace because for those table we actually remove 
columns without registering
 // them in the dropped columns
-assert metadata.ksName.equals(SystemKeyspace.NAME) || 
metadata.getDroppedColumnDefinition(cellname) != null : e.getMessage();
+assert metadata.ksName.equals(SystemKeyspace.NAME) || 
metadata.getDroppedColumnDefinition(e.columnName) != null : e.getMessage();
 }
 }
 }



[3/6] cassandra git commit: Check the column name, not cell name, for dropped columns

2016-01-19 Thread slebresne
Check the column name, not cell name, for dropped columns

patch by slebresne; reviewed by krummas for CASSANDRA-11018


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1a113a9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1a113a9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1a113a9

Branch: refs/heads/trunk
Commit: c1a113a9e3381d5278ca2254b0d0b062cfa7551b
Parents: c7cbde2
Author: Sylvain Lebresne 
Authored: Mon Jan 18 16:02:06 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 19 11:01:32 2016 +0100

--
 CHANGES.txt| 2 ++
 src/java/org/apache/cassandra/db/LegacyLayout.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1a113a9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9c0ab85..04d0354 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 3.0.3
+ * Check the column name, not cell name, for dropped columns when reading
+   legacy sstables (CASSANDRA-11018)
  * Don't attempt to index clustering values of static rows (CASSANDRA-11021)
  * Remove checksum files after replaying hints (CASSANDRA-10947)
  * Support passing base table metadata to custom 2i validation 
(CASSANDRA-10924)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1a113a9/src/java/org/apache/cassandra/db/LegacyLayout.java
--
diff --git a/src/java/org/apache/cassandra/db/LegacyLayout.java 
b/src/java/org/apache/cassandra/db/LegacyLayout.java
index 91b7755..e9e5169 100644
--- a/src/java/org/apache/cassandra/db/LegacyLayout.java
+++ b/src/java/org/apache/cassandra/db/LegacyLayout.java
@@ -961,7 +961,7 @@ public abstract class LegacyLayout
 // then simply ignore the cell is fine. But also not that we 
ignore if it's the
 // system keyspace because for those table we actually remove 
columns without registering
 // them in the dropped columns
-assert metadata.ksName.equals(SystemKeyspace.NAME) || 
metadata.getDroppedColumnDefinition(cellname) != null : e.getMessage();
+assert metadata.ksName.equals(SystemKeyspace.NAME) || 
metadata.getDroppedColumnDefinition(e.columnName) != null : e.getMessage();
 }
 }
 }



[6/6] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-01-19 Thread slebresne
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7226ac9e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7226ac9e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7226ac9e

Branch: refs/heads/trunk
Commit: 7226ac9e617900f7d336bffd46f115261282f327
Parents: e13ea8d ef32d62
Author: Sylvain Lebresne 
Authored: Tue Jan 19 11:02:33 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 19 11:02:33 2016 +0100

--
 CHANGES.txt| 2 ++
 src/java/org/apache/cassandra/db/LegacyLayout.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/7226ac9e/CHANGES.txt
--



[2/6] cassandra git commit: Check the column name, not cell name, for dropped columns

2016-01-19 Thread slebresne
Check the column name, not cell name, for dropped columns

patch by slebresne; reviewed by krummas for CASSANDRA-11018


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c1a113a9
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c1a113a9
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c1a113a9

Branch: refs/heads/cassandra-3.3
Commit: c1a113a9e3381d5278ca2254b0d0b062cfa7551b
Parents: c7cbde2
Author: Sylvain Lebresne 
Authored: Mon Jan 18 16:02:06 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 19 11:01:32 2016 +0100

--
 CHANGES.txt| 2 ++
 src/java/org/apache/cassandra/db/LegacyLayout.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1a113a9/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 9c0ab85..04d0354 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,6 @@
 3.0.3
+ * Check the column name, not cell name, for dropped columns when reading
+   legacy sstables (CASSANDRA-11018)
  * Don't attempt to index clustering values of static rows (CASSANDRA-11021)
  * Remove checksum files after replaying hints (CASSANDRA-10947)
  * Support passing base table metadata to custom 2i validation 
(CASSANDRA-10924)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c1a113a9/src/java/org/apache/cassandra/db/LegacyLayout.java
--
diff --git a/src/java/org/apache/cassandra/db/LegacyLayout.java 
b/src/java/org/apache/cassandra/db/LegacyLayout.java
index 91b7755..e9e5169 100644
--- a/src/java/org/apache/cassandra/db/LegacyLayout.java
+++ b/src/java/org/apache/cassandra/db/LegacyLayout.java
@@ -961,7 +961,7 @@ public abstract class LegacyLayout
 // then simply ignore the cell is fine. But also not that we ignore if it's the
 // system keyspace because for those table we actually remove columns without registering
 // them in the dropped columns
-assert metadata.ksName.equals(SystemKeyspace.NAME) || metadata.getDroppedColumnDefinition(cellname) != null : e.getMessage();
+assert metadata.ksName.equals(SystemKeyspace.NAME) || metadata.getDroppedColumnDefinition(e.columnName) != null : e.getMessage();
 }
 }
 }



[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-01-19 Thread slebresne
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ef32d629
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ef32d629
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ef32d629

Branch: refs/heads/trunk
Commit: ef32d629d5f549394bab164ed04a44b76db1b4c2
Parents: 456581e c1a113a
Author: Sylvain Lebresne 
Authored: Tue Jan 19 11:02:22 2016 +0100
Committer: Sylvain Lebresne 
Committed: Tue Jan 19 11:02:22 2016 +0100

--
 CHANGES.txt| 2 ++
 src/java/org/apache/cassandra/db/LegacyLayout.java | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef32d629/CHANGES.txt
--
diff --cc CHANGES.txt
index 4965920,04d0354..b70464f
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,6 -1,6 +1,8 @@@
 -3.0.3
 +3.3
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Check the column name, not cell name, for dropped columns when reading
+legacy sstables (CASSANDRA-11018)
   * Don't attempt to index clustering values of static rows (CASSANDRA-11021)
   * Remove checksum files after replaying hints (CASSANDRA-10947)
   * Support passing base table metadata to custom 2i validation 
(CASSANDRA-10924)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ef32d629/src/java/org/apache/cassandra/db/LegacyLayout.java
--



[jira] [Commented] (CASSANDRA-11021) Inserting static column fails with secondary index on clustering key

2016-01-19 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106505#comment-15106505
 ] 

Cédric Hernalsteens commented on CASSANDRA-11021:
-

Thanks a lot Sam, that's my best (but sole) bug tracking experience on a real 
project!

> Inserting static column fails with secondary index on clustering key
> 
>
> Key: CASSANDRA-11021
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11021
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 3.0.3, 3.3
>
>
> Creating a secondary index on a clustering key fails with an exception in 
> case a static column is involved.
> {code}
> CREATE TABLE test (k int, t int, s text static, v text, PRIMARY KEY (k, t));
> CREATE INDEX ix ON test (t);
> INSERT INTO test(k, t, s, v) VALUES (0, 1, 'abc', 'def');
> {code}
> {code}
> ERROR [SharedPool-Worker-2] 2016-01-15 11:42:27,484 StorageProxy.java:1336 - 
> Failed to apply mutation locally : {}
> java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.cassandra.db.AbstractClusteringPrefix.get(AbstractClusteringPrefix.java:59)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.composites.ClusteringColumnIndex.getIndexedValue(ClusteringColumnIndex.java:58)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.getIndexedValue(CassandraIndex.java:598)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.insert(CassandraIndex.java:490)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.access$100(CassandraIndex.java:53)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex$1.indexPrimaryKey(CassandraIndex.java:437)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex$1.insertRow(CassandraIndex.java:347)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.SecondaryIndexManager$WriteTimeTransaction.onInserted(SecondaryIndexManager.java:795)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:275)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:154)
>  ~[main/:na]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:240) ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1145) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:494) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) 
> ~[main/:na]
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:205) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$$Lambda$166/492512700.run(Unknown 
> Source) ~[na:na]
> at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1330)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2480)
>  [main/:n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (CASSANDRA-11021) Inserting static column fails with secondary index on clustering key

2016-01-19 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cédric Hernalsteens updated CASSANDRA-11021:

Comment: was deleted

(was: I didn't mean to "ready to commit" or "in progress" and I don't know how 
to undo that :()

> Inserting static column fails with secondary index on clustering key
> 
>
> Key: CASSANDRA-11021
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11021
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 3.0.3, 3.3
>
>
> Creating a secondary index on a clustering key fails with an exception in 
> case a static column is involved.
> {code}
> CREATE TABLE test (k int, t int, s text static, v text, PRIMARY KEY (k, t));
> CREATE INDEX ix ON test (t);
> INSERT INTO test(k, t, s, v) VALUES (0, 1, 'abc', 'def');
> {code}
> {code}
> ERROR [SharedPool-Worker-2] 2016-01-15 11:42:27,484 StorageProxy.java:1336 - 
> Failed to apply mutation locally : {}
> java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.cassandra.db.AbstractClusteringPrefix.get(AbstractClusteringPrefix.java:59)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.composites.ClusteringColumnIndex.getIndexedValue(ClusteringColumnIndex.java:58)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.getIndexedValue(CassandraIndex.java:598)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.insert(CassandraIndex.java:490)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.access$100(CassandraIndex.java:53)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex$1.indexPrimaryKey(CassandraIndex.java:437)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex$1.insertRow(CassandraIndex.java:347)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.SecondaryIndexManager$WriteTimeTransaction.onInserted(SecondaryIndexManager.java:795)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:275)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:154)
>  ~[main/:na]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:240) ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1145) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:494) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) 
> ~[main/:na]
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:205) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$$Lambda$166/492512700.run(Unknown 
> Source) ~[na:na]
> at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1330)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2480)
>  [main/:n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-11018) Drop column in results in corrupted table or tables state (reversible)

2016-01-19 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-11018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106461#comment-15106461
 ] 

Marcus Eriksson commented on CASSANDRA-11018:
-

+1

> Drop column in results in corrupted table or tables state (reversible)
> --
>
> Key: CASSANDRA-11018
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11018
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL
> Environment: Debian 3.16.7
>Reporter: Jason Kania
>Assignee: Sylvain Lebresne
>Priority: Minor
> Fix For: 3.0.x, 3.x
>
>
> After dropping a column from a table, that table is no longer accessible from 
> various commands.
> Initial command in cqlsh;
> alter table "sensorUnit" drop "lastCouplingCheckTime";
> no errors were reported:
> Subsequently, the following commands fail as follows:
> > nodetool compact
> root@marble:/var/log/cassandra# nodetool compact
> error: Unknown column lastCouplingCheckTime in table powermon.sensorUnit
> -- StackTrace --
> java.lang.AssertionError: Unknown column lastCouplingCheckTime in table 
> powermon.sensorUnit
> at 
> org.apache.cassandra.db.LegacyLayout.readLegacyAtom(LegacyLayout.java:964)
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$AtomIterator.readAtom(UnfilteredDeserializer.java:520)
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$AtomIterator.hasNext(UnfilteredDeserializer.java:503)
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.readRow(UnfilteredDeserializer.java:446)
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:422)
> at 
> org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:289)
> at 
> org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:134)
> at 
> org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:57)
> at 
> org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:329)
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48)
> at 
> org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:65)
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:109)
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:100)
> at 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:442)
> at 
> org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47)
> at 
> org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150)
> at 
> org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72)
> at 
> org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:572)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Also I get the following from cqlsh commands:
> cqlsh:sensorTrack> select * from "sensorUnit";
> Traceback (most recent call last):
>   File "/usr/bin/cqlsh.py", line 1258, in perform_simple_statement
> result = future.result()
>   File 
> "/usr/share/cassandra/lib/cassandra-driver-internal-only-3.0.0-6af642d.zip/cassandra-driver-3.0.0-6af642d/cassandra/cluster.py",
>  line 3122, in result
> ra

[jira] [Resolved] (CASSANDRA-11021) Inserting static column fails with secondary index on clustering key

2016-01-19 Thread Sam Tunnicliffe (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam Tunnicliffe resolved CASSANDRA-11021.
-
   Resolution: Fixed
Fix Version/s: 3.3
   3.0.3

Ok, CI looks good so I've committed to 3.0 in 
{{c7cbde218de62aa47d9e942957eebd9c7568}} and merged upwards. Thanks!


> Inserting static column fails with secondary index on clustering key
> 
>
> Key: CASSANDRA-11021
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11021
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local Write-Read Paths
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
> Fix For: 3.0.3, 3.3
>
>
> Creating a secondary index on a clustering key fails with an exception in 
> case a static column is involved.
> {code}
> CREATE TABLE test (k int, t int, s text static, v text, PRIMARY KEY (k, t));
> CREATE INDEX ix ON test (t);
> INSERT INTO test(k, t, s, v) VALUES (0, 1, 'abc', 'def');
> {code}
> {code}
> ERROR [SharedPool-Worker-2] 2016-01-15 11:42:27,484 StorageProxy.java:1336 - 
> Failed to apply mutation locally : {}
> java.lang.ArrayIndexOutOfBoundsException: 0
> at 
> org.apache.cassandra.db.AbstractClusteringPrefix.get(AbstractClusteringPrefix.java:59)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.composites.ClusteringColumnIndex.getIndexedValue(ClusteringColumnIndex.java:58)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.getIndexedValue(CassandraIndex.java:598)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.insert(CassandraIndex.java:490)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex.access$100(CassandraIndex.java:53)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex$1.indexPrimaryKey(CassandraIndex.java:437)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.internal.CassandraIndex$1.insertRow(CassandraIndex.java:347)
>  ~[main/:na]
> at 
> org.apache.cassandra.index.SecondaryIndexManager$WriteTimeTransaction.onInserted(SecondaryIndexManager.java:795)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition$RowUpdater.apply(AtomicBTreePartition.java:275)
>  ~[main/:na]
> at 
> org.apache.cassandra.db.partitions.AtomicBTreePartition.addAllWithSizeDelta(AtomicBTreePartition.java:154)
>  ~[main/:na]
> at org.apache.cassandra.db.Memtable.put(Memtable.java:240) ~[main/:na]
> at 
> org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1145) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:494) 
> ~[main/:na]
> at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:384) 
> ~[main/:na]
> at org.apache.cassandra.db.Mutation.apply(Mutation.java:205) 
> ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$$Lambda$166/492512700.run(Unknown 
> Source) ~[na:na]
> at 
> org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1330)
>  ~[main/:na]
> at 
> org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2480)
>  [main/:n
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[6/6] cassandra git commit: Merge branch 'cassandra-3.3' into trunk

2016-01-19 Thread samt
Merge branch 'cassandra-3.3' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e13ea8db
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e13ea8db
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e13ea8db

Branch: refs/heads/trunk
Commit: e13ea8db6e0fca9001c2524716176a46b5894412
Parents: 0e98197 456581e
Author: Sam Tunnicliffe 
Authored: Tue Jan 19 09:02:21 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Jan 19 09:02:21 2016 +

--
 CHANGES.txt |  1 +
 .../index/internal/CassandraIndex.java  |  7 ++
 .../validation/entities/SecondaryIndexTest.java | 24 
 3 files changed, 32 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/e13ea8db/CHANGES.txt
--



[2/6] cassandra git commit: Don't try to index clustering values of static rows

2016-01-19 Thread samt
Don't try to index clustering values of static rows

Patch by Stefan Podkowinski; reviewed by Sam Tunnicliffe for
CASSANDRA-11021


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7cbde21
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7cbde21
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7cbde21

Branch: refs/heads/cassandra-3.3
Commit: c7cbde218de62aa47d9e942957eebd9c7568
Parents: 8517635
Author: Stefan Podkowinski 
Authored: Mon Jan 18 18:04:00 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Jan 19 08:53:19 2016 +

--
 CHANGES.txt |  1 +
 .../index/internal/CassandraIndex.java  |  7 ++
 .../validation/entities/SecondaryIndexTest.java | 24 
 3 files changed, 32 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7cbde21/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 165e5d1..9c0ab85 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Don't attempt to index clustering values of static rows (CASSANDRA-11021)
  * Remove checksum files after replaying hints (CASSANDRA-10947)
  * Support passing base table metadata to custom 2i validation 
(CASSANDRA-10924)
  * Ensure stale index entries are purged during reads (CASSANDRA-11013)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7cbde21/src/java/org/apache/cassandra/index/internal/CassandraIndex.java
--
diff --git a/src/java/org/apache/cassandra/index/internal/CassandraIndex.java 
b/src/java/org/apache/cassandra/index/internal/CassandraIndex.java
index 6223d8a..158b127 100644
--- a/src/java/org/apache/cassandra/index/internal/CassandraIndex.java
+++ b/src/java/org/apache/cassandra/index/internal/CassandraIndex.java
@@ -342,6 +342,9 @@ public abstract class CassandraIndex implements Index
 
 public void insertRow(Row row)
 {
+if (row.isStatic() != indexedColumn.isStatic())
+return;
+
 if (isPrimaryKeyIndex())
 {
 indexPrimaryKey(row.clustering(),
@@ -370,6 +373,10 @@ public abstract class CassandraIndex implements Index
 
 public void updateRow(Row oldRow, Row newRow)
 {
+assert oldRow.isStatic() == newRow.isStatic();
+if (newRow.isStatic() != indexedColumn.isStatic())
+return;
+
 if (isPrimaryKeyIndex())
 indexPrimaryKey(newRow.clustering(),
 newRow.primaryKeyLivenessInfo(),

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7cbde21/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
index 38402d9..06f1987 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
@@ -960,6 +960,30 @@ public class SecondaryIndexTest extends CQLTester
 assertNull(QueryProcessor.instance.getPreparedForThrift(thriftId));
 }
 
+// See CASSANDRA-11021
+@Test
+public void testIndexesOnNonStaticColumnsWhereSchemaIncludesStaticColumns() throws Throwable
+{
+createTable("CREATE TABLE %s (a int, b int, c int static, d int, PRIMARY KEY (a, b))");
+createIndex("CREATE INDEX b_idx on %s(b)");
+createIndex("CREATE INDEX d_idx on %s(d)");
+
+execute("INSERT INTO %s (a, b, c ,d) VALUES (0, 0, 0, 0)");
+execute("INSERT INTO %s (a, b, c, d) VALUES (1, 1, 1, 1)");
+assertRows(execute("SELECT * FROM %s WHERE b = 0"), row(0, 0, 0, 0));
+assertRows(execute("SELECT * FROM %s WHERE d = 1"), row(1, 1, 1, 1));
+
+execute("UPDATE %s SET c = 2 WHERE a = 0");
+execute("UPDATE %s SET c = 3, d = 4 WHERE a = 1 AND b = 1");
+assertRows(execute("SELECT * FROM %s WHERE b = 0"), row(0, 0, 2, 0));
+assertRows(execute("SELECT * FROM %s WHERE d = 4"), row(1, 1, 3, 4));
+
+execute("DELETE FROM %s WHERE a = 0");
+execute("DELETE FROM %s WHERE a = 1 AND b = 1");
+assertEmpty(execute("SELECT * FROM %s WHERE b = 0"));
+assertEmpty(execute("SELECT * FROM %s WHERE d = 3"));
+}
+
 private ResultMessage.Prepared prepareStatement(String cql, boolean forThrift)
 {
 return QueryProcessor.prepare(Stri
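
The guard added to CassandraIndex in the diff above can be read as: a row and the indexed column must agree on staticness before anything is indexed, because a static row carries an empty clustering, so an index on a clustering column that tries clustering.get(0) hits exactly the ArrayIndexOutOfBoundsException from the report. A minimal stand-alone model of that guard; Row, IndexedColumn and this insertRow are illustrative stand-ins, not the real Cassandra classes:

```java
import java.util.ArrayList;
import java.util.List;

public class StaticRowIndexGuard {
    // Minimal stand-ins for Cassandra's Row and indexed-column metadata.
    record Row(boolean isStatic, List<Integer> clustering) {}
    record IndexedColumn(String name, boolean isStatic) {}

    static final List<String> indexed = new ArrayList<>();

    // Mirrors the guard added in CassandraIndex.insertRow: skip the row when
    // its staticness does not match the indexed column's, instead of reading
    // clustering.get(0) off a static row (the old ArrayIndexOutOfBoundsException).
    static void insertRow(IndexedColumn column, Row row) {
        if (row.isStatic() != column.isStatic())
            return; // staticness mismatch: nothing to index for this row
        if (!column.isStatic())
            indexed.add(column.name() + "=" + row.clustering().get(0));
        else
            indexed.add(column.name() + "(static)");
    }

    public static void main(String[] args) {
        IndexedColumn clusteringIdx = new IndexedColumn("t", false);
        insertRow(clusteringIdx, new Row(true, List.of()));   // static row: skipped safely
        insertRow(clusteringIdx, new Row(false, List.of(1))); // regular row: indexed
        System.out.println(indexed); // [t=1]
    }
}
```

The same mismatch check in updateRow keeps a static-only update (e.g. UPDATE ... SET c = 2 WHERE a = 0) from ever reaching a non-static index, which is what the new SecondaryIndexTest cases exercise.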

[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-01-19 Thread samt
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/456581e5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/456581e5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/456581e5

Branch: refs/heads/trunk
Commit: 456581e54b4211216db16ddd203a325f8a309aeb
Parents: 289ad77 c7cbde2
Author: Sam Tunnicliffe 
Authored: Tue Jan 19 08:58:21 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Jan 19 08:58:21 2016 +

--
 CHANGES.txt |  1 +
 .../index/internal/CassandraIndex.java  |  7 ++
 .../validation/entities/SecondaryIndexTest.java | 24 
 3 files changed, 32 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/456581e5/CHANGES.txt
--
diff --cc CHANGES.txt
index 2c19a1b,9c0ab85..4965920
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,6 -1,5 +1,7 @@@
 -3.0.3
 +3.3
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Don't attempt to index clustering values of static rows (CASSANDRA-11021)
   * Remove checksum files after replaying hints (CASSANDRA-10947)
   * Support passing base table metadata to custom 2i validation 
(CASSANDRA-10924)
   * Ensure stale index entries are purged during reads (CASSANDRA-11013)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/456581e5/src/java/org/apache/cassandra/index/internal/CassandraIndex.java
--



[3/6] cassandra git commit: Don't try to index clustering values of static rows

2016-01-19 Thread samt
Don't try to index clustering values of static rows

Patch by Stefan Podkowinski; reviewed by Sam Tunnicliffe for
CASSANDRA-11021


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c7cbde21
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c7cbde21
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c7cbde21

Branch: refs/heads/trunk
Commit: c7cbde218de62aa47d9e942957eebd9c7568
Parents: 8517635
Author: Stefan Podkowinski 
Authored: Mon Jan 18 18:04:00 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Jan 19 08:53:19 2016 +

--
 CHANGES.txt |  1 +
 .../index/internal/CassandraIndex.java  |  7 ++
 .../validation/entities/SecondaryIndexTest.java | 24 
 3 files changed, 32 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7cbde21/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 165e5d1..9c0ab85 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.3
+ * Don't attempt to index clustering values of static rows (CASSANDRA-11021)
  * Remove checksum files after replaying hints (CASSANDRA-10947)
  * Support passing base table metadata to custom 2i validation 
(CASSANDRA-10924)
  * Ensure stale index entries are purged during reads (CASSANDRA-11013)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7cbde21/src/java/org/apache/cassandra/index/internal/CassandraIndex.java
--
diff --git a/src/java/org/apache/cassandra/index/internal/CassandraIndex.java 
b/src/java/org/apache/cassandra/index/internal/CassandraIndex.java
index 6223d8a..158b127 100644
--- a/src/java/org/apache/cassandra/index/internal/CassandraIndex.java
+++ b/src/java/org/apache/cassandra/index/internal/CassandraIndex.java
@@ -342,6 +342,9 @@ public abstract class CassandraIndex implements Index
 
 public void insertRow(Row row)
 {
+if (row.isStatic() != indexedColumn.isStatic())
+return;
+
 if (isPrimaryKeyIndex())
 {
 indexPrimaryKey(row.clustering(),
@@ -370,6 +373,10 @@ public abstract class CassandraIndex implements Index
 
 public void updateRow(Row oldRow, Row newRow)
 {
+assert oldRow.isStatic() == newRow.isStatic();
+if (newRow.isStatic() != indexedColumn.isStatic())
+return;
+
 if (isPrimaryKeyIndex())
 indexPrimaryKey(newRow.clustering(),
 newRow.primaryKeyLivenessInfo(),

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c7cbde21/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
--
diff --git 
a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
index 38402d9..06f1987 100644
--- 
a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
+++ 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java
@@ -960,6 +960,30 @@ public class SecondaryIndexTest extends CQLTester
 assertNull(QueryProcessor.instance.getPreparedForThrift(thriftId));
 }
 
+// See CASSANDRA-11021
+@Test
+public void testIndexesOnNonStaticColumnsWhereSchemaIncludesStaticColumns() throws Throwable
+{
+createTable("CREATE TABLE %s (a int, b int, c int static, d int, PRIMARY KEY (a, b))");
+createIndex("CREATE INDEX b_idx on %s(b)");
+createIndex("CREATE INDEX d_idx on %s(d)");
+
+execute("INSERT INTO %s (a, b, c ,d) VALUES (0, 0, 0, 0)");
+execute("INSERT INTO %s (a, b, c, d) VALUES (1, 1, 1, 1)");
+assertRows(execute("SELECT * FROM %s WHERE b = 0"), row(0, 0, 0, 0));
+assertRows(execute("SELECT * FROM %s WHERE d = 1"), row(1, 1, 1, 1));
+
+execute("UPDATE %s SET c = 2 WHERE a = 0");
+execute("UPDATE %s SET c = 3, d = 4 WHERE a = 1 AND b = 1");
+assertRows(execute("SELECT * FROM %s WHERE b = 0"), row(0, 0, 2, 0));
+assertRows(execute("SELECT * FROM %s WHERE d = 4"), row(1, 1, 3, 4));
+
+execute("DELETE FROM %s WHERE a = 0");
+execute("DELETE FROM %s WHERE a = 1 AND b = 1");
+assertEmpty(execute("SELECT * FROM %s WHERE b = 0"));
+assertEmpty(execute("SELECT * FROM %s WHERE d = 3"));
+}
+
 private ResultMessage.Prepared prepareStatement(String cql, boolean forThrift)
 {
 return QueryProcessor.prepare(String.forma

[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.3

2016-01-19 Thread samt
Merge branch 'cassandra-3.0' into cassandra-3.3


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/456581e5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/456581e5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/456581e5

Branch: refs/heads/cassandra-3.3
Commit: 456581e54b4211216db16ddd203a325f8a309aeb
Parents: 289ad77 c7cbde2
Author: Sam Tunnicliffe 
Authored: Tue Jan 19 08:58:21 2016 +
Committer: Sam Tunnicliffe 
Committed: Tue Jan 19 08:58:21 2016 +

--
 CHANGES.txt |  1 +
 .../index/internal/CassandraIndex.java  |  7 ++
 .../validation/entities/SecondaryIndexTest.java | 24 
 3 files changed, 32 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/456581e5/CHANGES.txt
--
diff --cc CHANGES.txt
index 2c19a1b,9c0ab85..4965920
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,6 -1,5 +1,7 @@@
 -3.0.3
 +3.3
 + * Avoid bootstrap hanging when existing nodes have no data to stream 
(CASSANDRA-11010)
 +Merged from 3.0:
+  * Don't attempt to index clustering values of static rows (CASSANDRA-11021)
   * Remove checksum files after replaying hints (CASSANDRA-10947)
   * Support passing base table metadata to custom 2i validation 
(CASSANDRA-10924)
   * Ensure stale index entries are purged during reads (CASSANDRA-11013)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/456581e5/src/java/org/apache/cassandra/index/internal/CassandraIndex.java
--


