[jira] [Commented] (CASSANDRA-12151) Audit logging for database activity

2018-05-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467079#comment-16467079
 ] 

Per Otterström commented on CASSANDRA-12151:


bq. This is an interesting idea. However, I'm reticent to introduce CQL grammar 
changes this late into this ticket. As a followup, I think this could be worth 
exploring.

Sure, I understand we seek to close this ticket. I'm just a bit concerned with 
the timing. If this ticket is merged as is and we take a cut for 4.0, then I 
assume we will have to stick to this way of configuring audit logs for some time.

bq. While there's some conceptual overlap, I don't think it's a good idea to 
try to force the concepts of 'audit logging' and 'auth' onto one another here.

I believe it depends on the use case for doing audits. In some cases it would 
make perfect sense to have these two concepts in sync. E.g. I want to grant a 
user SELECT permission on a specific table. Then I would grant the same 
user NOAUDIT for SELECT on the very same table. Should the user try to 
read something else, or write to that table, an audit entry would be created. 
In a case like this it is very helpful for the admin to use the same concepts 
for these two "permissions".

To verify accuracy, should we create some dtests for this as well?
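For illustration, the permission-aligned syntax suggested above might look something like this (hypothetical grammar; NOAUDIT is only the name used in this comment, not existing or proposed CQL):

```sql
-- grant read access, and suppress audit entries for exactly that access
GRANT SELECT ON ks.tbl TO alice;
GRANT NOAUDIT FOR SELECT ON ks.tbl TO alice;

-- a SELECT on another table, or a write to ks.tbl, would still be audited
```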

> Audit logging for database activity
> ---
>
> Key: CASSANDRA-12151
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12151
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: stefan setyadi
>Assignee: Vinay Chella
>Priority: Major
> Fix For: 4.x
>
> Attachments: 12151.txt, CASSANDRA_12151-benchmark.html, 
> DesignProposal_AuditingFeature_ApacheCassandra_v1.docx
>
>
> We would like a way to enable Cassandra to log database activity being done 
> on our server.
> It should show username, remote address, timestamp, action type, keyspace, 
> column family, and the query statement.
> It should also be able to log connection attempts and changes to 
> users/roles.
> I was thinking of making a new keyspace and inserting an entry for every 
> activity that occurs.
> Then it would be possible to query for specific activity, or for queries targeting 
> a specific keyspace and column family.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Resolved] (CASSANDRA-13182) test failure in sstableutil_test.SSTableUtilTest.compaction_test

2018-05-08 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves resolved CASSANDRA-13182.
--
Resolution: Fixed

> test failure in sstableutil_test.SSTableUtilTest.compaction_test
> 
>
> Key: CASSANDRA-13182
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13182
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>Assignee: Lerh Chuan Low
>Priority: Major
>  Labels: dtest, test-failure, test-failure-fresh
> Attachments: node1.log, node1_debug.log, node1_gc.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_novnode_dtest/506/testReport/sstableutil_test/SSTableUtilTest/compaction_test
> {noformat}
> Error Message
> Lists differ: ['/tmp/dtest-Rk_3Cs/test/node1... != 
> ['/tmp/dtest-Rk_3Cs/test/node1...
> First differing element 8:
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-CRC.db'
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-CRC.db'
> First list contains 7 additional elements.
> First extra element 24:
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Data.db'
>   
> ['/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Statistics.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-Summary.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data0/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-4-big-TOC.txt',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-CRC.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Digest.crc32',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Filter.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Index.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Statistics.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-Summary.db',
> -  
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-2-big-TOC.txt',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Statistics.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-Summary.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data1/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-6-big-TOC.txt',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-CRC.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Data.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Digest.crc32',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Filter.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Index.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Statistics.db',
>
> '/tmp/dtest-Rk_3Cs/test/node1/data2/keyspace1/standard1-11ee2450e8ab11e6b5a68de39eb517c4/mc-5-big-Summar

[jira] [Commented] (CASSANDRA-14427) Bump jackson version to >= 2.9.5

2018-05-08 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467245#comment-16467245
 ] 

Jason Brown commented on CASSANDRA-14427:
-

Upgrading dependencies on trunk is not a problem, but usually we are hesitant 
to upgrade in existing releases unless, of course, there's a bug or identified 
security problem. The linked CVEs all reference the jackson-databind 
sub-module, which we do not ship. Several of the CVEs have text like "sending 
the maliciously crafted input to the readValue method of the ObjectMapper", but 
always with that input going through the jackson-databind component. The 
jackson-mapper-asl-1.9.13.jar that we ship does have an {{ObjectMapper}} class, 
but so does the jackson-databind jar (I looked at the most recent on Maven Central, 
v2.9.5). I'm inclined to believe the {{ObjectMapper}} referenced in the CVEs 
refers to the {{ObjectMapper}} in jackson-databind, and not anything we 
currently ship.

If there are no known issues with the current jackson jars, I propose we not 
upgrade them on existing releases. wdyt, [~Lerh Low]?
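One quick way to settle which {{ObjectMapper}} a given classpath actually provides is to ask the JVM where it loaded the class from. This is a plain-JDK sketch (the {{WhichJar}} class and its output strings are invented for illustration, not Cassandra code); on a Cassandra classpath one would pass the codehaus and fasterxml class names:

```java
public class WhichJar {
    // Reports where a class was loaded from: the jar/classes URL, or
    // "bootstrap/JDK" for core classes whose code source is null.
    // For the question above one would pass
    //   "org.codehaus.jackson.map.ObjectMapper"      (jackson 1, shipped)
    //   "com.fasterxml.jackson.databind.ObjectMapper" (jackson-databind)
    public static String locate(String className) {
        try {
            java.security.CodeSource src =
                Class.forName(className).getProtectionDomain().getCodeSource();
            return src == null ? "bootstrap/JDK" : src.getLocation().toString();
        } catch (ClassNotFoundException e) {
            return "not on classpath";
        }
    }

    public static void main(String[] args) {
        String name = args.length > 0 ? args[0] : "java.lang.String";
        System.out.println(name + " -> " + locate(name));
    }
}
```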



> Bump jackson version to >= 2.9.5
> 
>
> Key: CASSANDRA-14427
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14427
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Lerh Chuan Low
>Assignee: Lerh Chuan Low
>Priority: Major
> Attachments: 2.1-14427.txt, 2.2-14427.txt, 3.0-14427.txt, 
> 3.X-14427.txt, trunk-14427.txt
>
>
> The Jackson being used by Cassandra is really old (1.9.2, and still 
> references codehaus (Jackson 1) instead of fasterxml (Jackson 2)). 
> There have been a few jackson vulnerabilities recently (mostly around 
> deserialization which allows arbitrary code execution)
> [https://nvd.nist.gov/vuln/detail/CVE-2017-7525]
>  [https://nvd.nist.gov/vuln/detail/CVE-2017-15095]
>  [https://nvd.nist.gov/vuln/detail/CVE-2018-1327]
>  [https://nvd.nist.gov/vuln/detail/CVE-2018-7489]
> Given that Jackson in Cassandra is really old and seems to be used also for 
> reading in values, it looks worthwhile to update Jackson to 2.9.5. 






[jira] [Comment Edited] (CASSANDRA-12151) Audit logging for database activity

2018-05-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467079#comment-16467079
 ] 

Per Otterström edited comment on CASSANDRA-12151 at 5/8/18 11:25 AM:
-

bq. This is an interesting idea. However, I'm reticent to introduce CQL grammar 
changes this late into this ticket. As a followup, I think this could be worth 
exploring.

Sure, I understand we seek to close this ticket. I'm just a bit concerned with 
the timing. If this ticket is merged as is and we take a cut for 4.0, then I 
assume we will have to stick to this way of configuring audit logs for some time.

bq. While there's some conceptual overlap, I don't think it's a good idea to 
try to force the concepts of 'audit logging' and 'auth' onto one another here.

I believe it depends on the use case for doing audits. In some cases it would 
make perfect sense to have these two concepts in sync. E.g. I want to grant a 
user SELECT permission on a specific table. Then I would grant the same 
user NOAUDIT for SELECT on the very same table. Should the user try to 
read something else, or write to that table, an audit entry would be created. 
In a case like this it is very helpful for the admin to use the same concepts 
for these two "permissions".

To verify accuracy, should we create some dtests for this as well?


was (Author: eperott):
bq. This is an intersting idea. However, I'm reticent to introduce CQL grammar 
changes this late into this ticket. As a followup, I think this could be worth 
exploring.

Sure, I understand we seek to close this ticket. I'm just a bit concerned with 
the timing. If this ticket is merged as is and we take a cut for 4.0, then I 
assume we will have to stick to this way of configure audit logs for some time.

bq. While there's some conceptual overlap, I don't think it's a good idea to 
try to force the concepts of 'audit logging' and 'auth' onto one another here.

I believe it depends on the use case for doing audits. In some cases it would 
make perfect sense to have these two concepts in sync. E.g. I want to grant a 
user to have SELECT permission on a specific table. Then I would grant the same 
user to have NOAUDIT for SELECT on the very same table. Shoudl the user try to 
read somethign else, or write to that table, and audit entry would be created. 
In a case like this it is very helpful for the admin to use the same concepts 
for these two "permissions".

To verify accuracy, should we create some dtests for this as well?

> Audit logging for database activity
> ---
>
> Key: CASSANDRA-12151
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12151
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: stefan setyadi
>Assignee: Vinay Chella
>Priority: Major
> Fix For: 4.x
>
> Attachments: 12151.txt, CASSANDRA_12151-benchmark.html, 
> DesignProposal_AuditingFeature_ApacheCassandra_v1.docx
>
>
> We would like a way to enable Cassandra to log database activity being done 
> on our server.
> It should show username, remote address, timestamp, action type, keyspace, 
> column family, and the query statement.
> It should also be able to log connection attempts and changes to 
> users/roles.
> I was thinking of making a new keyspace and inserting an entry for every 
> activity that occurs.
> Then it would be possible to query for specific activity, or for queries targeting 
> a specific keyspace and column family.






[jira] [Commented] (CASSANDRA-14419) Resume compressed hints delivery broken

2018-05-08 Thread Tommy Stendahl (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467267#comment-16467267
 ] 

Tommy Stendahl commented on CASSANDRA-14419:


I have found some conflicts from CASSANDRA-5863 and fixed them. I'm not 
sure I have found everything yet, but at least the unit tests are working 
now.

My latest version of the commit is here: 
[cassandra-14419-30|https://github.com/tommystendahl/cassandra/tree/cassandra-14419-30]

I had to remove {{HintsServiceTest.java}} since it depends on CASSANDRA-12016, 
which is not available on the 3.0 branch, but I have a local branch where I have 
that test working. I have also modified it to test with compression on, and it 
works fine.

> Resume compressed hints delivery broken
> --
>
> Key: CASSANDRA-14419
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14419
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hints
>Reporter: Tommy Stendahl
>Priority: Blocker
>
> We are using Cassandra 3.0.15 with compressed hints, but if hint 
> delivery is interrupted, resuming hint delivery fails.
> {code}
> 2018-04-04T13:27:48.948+0200 ERROR [HintsDispatcher:14] 
> CassandraDaemon.java:207 Exception in thread Thread[HintsDispatcher:14,1,main]
> java.lang.IllegalArgumentException: Unable to seek to position 1789149057 in 
> /var/lib/cassandra/hints/9592c860-1054-4c60-b3b8-faa9adc6d769-1522838912649-1.hints
>  (118259682 bytes) in read-only mode
>     at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:287)
>  ~[apache-cassandra-clientutil-3.0.15.jar:3.0.15]
>     at org.apache.cassandra.hints.HintsReader.seek(HintsReader.java:114) 
> ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatcher.seek(HintsDispatcher.java:83) 
> ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:263)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:248)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:226)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:205)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_152]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_152]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[na:1.8.0_152]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_152]
>     at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  [apache-cassandra-3.0.15.jar:3.0.15]
>     at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_152]
> {code}
>  I think the problem is similar to CASSANDRA-11960.






[jira] [Assigned] (CASSANDRA-14419) Resume compressed hints delivery broken

2018-05-08 Thread Tommy Stendahl (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommy Stendahl reassigned CASSANDRA-14419:
--

Assignee: Tommy Stendahl

> Resume compressed hints delivery broken
> --
>
> Key: CASSANDRA-14419
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14419
> Project: Cassandra
>  Issue Type: Bug
>  Components: Hints
>Reporter: Tommy Stendahl
>Assignee: Tommy Stendahl
>Priority: Blocker
>
> We are using Cassandra 3.0.15 with compressed hints, but if hint 
> delivery is interrupted, resuming hint delivery fails.
> {code}
> 2018-04-04T13:27:48.948+0200 ERROR [HintsDispatcher:14] 
> CassandraDaemon.java:207 Exception in thread Thread[HintsDispatcher:14,1,main]
> java.lang.IllegalArgumentException: Unable to seek to position 1789149057 in 
> /var/lib/cassandra/hints/9592c860-1054-4c60-b3b8-faa9adc6d769-1522838912649-1.hints
>  (118259682 bytes) in read-only mode
>     at 
> org.apache.cassandra.io.util.RandomAccessReader.seek(RandomAccessReader.java:287)
>  ~[apache-cassandra-clientutil-3.0.15.jar:3.0.15]
>     at org.apache.cassandra.hints.HintsReader.seek(HintsReader.java:114) 
> ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatcher.seek(HintsDispatcher.java:83) 
> ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:263)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:248)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:226)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:205)
>  ~[apache-cassandra-3.0.15.jar:3.0.15]
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[na:1.8.0_152]
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[na:1.8.0_152]
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[na:1.8.0_152]
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [na:1.8.0_152]
>     at 
> org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:79)
>  [apache-cassandra-3.0.15.jar:3.0.15]
>     at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_152]
> {code}
>  I think the problem is similar to CASSANDRA-11960.






[jira] [Commented] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-05-08 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467321#comment-16467321
 ] 

Stefan Podkowinski commented on CASSANDRA-14298:


I was giving your patch a try on builds.apache.org against cassandra-3.0:
[https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/545/#showFailuresLink]

Looks like there's still an encoding issue while piping input to the cqlsh 
subprocess. Apart from that, most of the tests seem to pass.

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: CASSANDRA-14298.txt, CASSANDRA-14298_old.txt, 
> cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Created] (CASSANDRA-14439) ClassCastException with mixed 1.2.18 + 2.0.17 environment

2018-05-08 Thread Dariusz Cieslak (JIRA)
Dariusz Cieslak created CASSANDRA-14439:
---

 Summary: ClassCastException with mixed 1.2.18  + 2.0.17 environment
 Key: CASSANDRA-14439
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14439
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Mixed 1.2.18  + 2.0.17 environment, used temporarily 
during incremental migration process.
Reporter: Dariusz Cieslak
 Attachments: cassandra-class-cast-exception-fix.patch

During mixed-version tests I've discovered that a mixed 1.2.18 + 2.0.17 
environment gives the following exception on the 2.0.17 node:

{code}
java.lang.ClassCastException: org.apache.cassandra.db.SliceByNamesReadCommand 
cannot be cast to org.apache.cassandra.db.SliceFromReadCommand
at 
org.apache.cassandra.db.SliceFromReadCommandSerializer.serializedSize(SliceFromReadCommand.java:242)
at 
org.apache.cassandra.db.ReadCommandSerializer.serializedSize(ReadCommand.java:204)
at 
org.apache.cassandra.db.ReadCommandSerializer.serializedSize(ReadCommand.java:134)
at org.apache.cassandra.net.MessageOut.serialize(MessageOut.java:116)
at 
org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:251)
at 
org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:203)
at 
org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:151)
{code}

The exception is caused by inconsistent commandType handling in 
ReadCommandSerializer:

{code}
out.writeByte(newCommand.commandType.serializedValue);
switch (command.commandType) // <--- WHY NOT newCommand.commandType? -- DCI
{
    case GET_BY_NAMES:
        SliceByNamesReadCommand.serializer.serialize(newCommand, superColumn, out, version);
        break;
    case GET_SLICES:
        SliceFromReadCommand.serializer.serialize(newCommand, superColumn, out, version);
        break;
    default:
        throw new AssertionError();
}
{code}

Proposed fix (also attached as a patch):

{code}
diff --git a/src/java/org/apache/cassandra/db/ReadCommand.java b/src/java/org/apache/cassandra/db/ReadCommand.java
index cadcd7d..f2153e8 100644
--- a/src/java/org/apache/cassandra/db/ReadCommand.java
+++ b/src/java/org/apache/cassandra/db/ReadCommand.java
@@ -153,7 +153,7 @@ class ReadCommandSerializer implements IVersionedSerializer
         }
 
         out.writeByte(newCommand.commandType.serializedValue);
-        switch (command.commandType)
+        switch (newCommand.commandType)
         {
             case GET_BY_NAMES:
                 SliceByNamesReadCommand.serializer.serialize(newCommand, superColumn, out, version);
@@ -196,7 +196,7 @@ class ReadCommandSerializer implements IVersionedSerializer
             }
         }
 
-        switch (command.commandType)
+        switch (newCommand.commandType)
         {
             case GET_BY_NAMES:
                 return 1 + SliceByNamesReadCommand.serializer.serializedSize(newCommand, superColumn, version);
{code}
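The failure mode is easier to see in isolation. The toy classes below ({{Cmd}}, {{ByNames}}, {{Slices}}, {{CastBug}} are invented for illustration, not Cassandra types) reproduce the pattern of dispatching on one object's type tag while casting a different object:

```java
// Toy types standing in for the read commands; 'type' plays the role of commandType.
abstract class Cmd { final String type; Cmd(String t) { type = t; } }
class ByNames extends Cmd { ByNames() { super("GET_BY_NAMES"); } }
class Slices extends Cmd { Slices() { super("GET_SLICES"); } }

public class CastBug {
    // Mirrors the buggy serializer: the switch looks at 'command',
    // but the casts are applied to 'newCommand'.
    static String serialize(Cmd command, Cmd newCommand) {
        switch (command.type) {
            case "GET_BY_NAMES": return "by-names:" + ((ByNames) newCommand).type;
            case "GET_SLICES":   return "slices:" + ((Slices) newCommand).type;
            default: throw new AssertionError();
        }
    }

    // True if the mismatch produces a ClassCastException, as in the stack trace above.
    static boolean triggersCce() {
        try {
            // The two objects disagree about their kind, which can happen when a
            // command is rewritten before being sent to a node on another version.
            serialize(new Slices(), new ByNames());
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(triggersCce() ? "ClassCastException triggered" : "no exception");
    }
}
```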






[jira] [Commented] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM

2018-05-08 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467481#comment-16467481
 ] 

Ariel Weisberg commented on CASSANDRA-10789:


A couple of thoughts. This isn't persistent, and it's per node, not for the 
entire cluster. I sort of see the ideal version of this feature being a table 
you can insert into via CQL, with all the nodes picking up changes to the table 
and eventually kicking the clients. Putting them into a table also means that you 
can read back what is currently blocked.

Would it be valuable to be able to provide the capability to ban ranges of IPs?
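If banning ranges were pursued, the matching itself is a small prefix comparison. A rough IPv4-only sketch ({{CidrBan}} is a hypothetical helper, not existing Cassandra code):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class CidrBan {
    // True if the IPv4 address 'addr' falls inside the CIDR block 'cidr',
    // e.g. inRange("10.0.0.0/8", "10.1.2.3") -> true. IPv4 literals only.
    public static boolean inRange(String cidr, String addr) {
        try {
            String[] parts = cidr.split("/");
            int prefix = Integer.parseInt(parts[1]);
            int net = toInt(InetAddress.getByName(parts[0]).getAddress());
            int ip = toInt(InetAddress.getByName(addr).getAddress());
            int mask = prefix == 0 ? 0 : -1 << (32 - prefix); // "/0" matches everything
            return (net & mask) == (ip & mask);
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException("bad address: " + addr, e);
        }
    }

    // Big-endian 4-byte address packed into a single int.
    private static int toInt(byte[] b) {
        return ((b[0] & 0xff) << 24) | ((b[1] & 0xff) << 16)
             | ((b[2] & 0xff) << 8) | (b[3] & 0xff);
    }

    public static void main(String[] args) {
        System.out.println(inRange("192.168.0.0/16", "192.168.42.7")); // true
        System.out.println(inRange("192.168.0.0/16", "10.0.0.1"));     // false
    }
}
```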

> Allow DBAs to kill individual client sessions without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Damien Stevenson
>Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>
>
> In production, there could be hundreds of clients connected to a Cassandra 
> cluster (maybe even from different applications), and if they use DataStax 
> Java Driver, each client will establish at least one TCP connection to a 
> Cassandra server (see 
> https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all 
> normal and at any given time, you can indeed see hundreds of ESTABLISHED 
> connections to port 9042 on a C* server (from netstat -na). The problem is 
> that sometimes when a C* cluster is under heavy load, when the DBA identifies 
> some client session that sends abusive amount of traffic to the C* server and 
> would like to stop it, they would like a lightweight approach rather than 
> shutting down the JVM or rolling restart the whole cluster to kill all 
> hundreds of connections in order to kill a single client session. If the DBA 
> had root privilege, they would have been able to do something at the OS 
> network level to achieve the same goal but oftentimes enterprise DBA role is 
> separate from OS sysadmin role, so the DBAs usually don't have that privilege.
> This is especially helpful when you have a multi-tenant C* cluster and you 
> want to have the impact for handling such client to be minimal to the other 
> applications. This feature (killing individual session) seems to be a common 
> feature in other databases (regardless of whether the client has some 
> reconnect logic or not). It could be implemented as a JMX MBean method and 
> exposed through nodetool to the DBAs.






[jira] [Commented] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM

2018-05-08 Thread Stefan Podkowinski (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467485#comment-16467485
 ] 

Stefan Podkowinski commented on CASSANDRA-10789:


What kind of abusive client behaviour are we talking about exactly? Why is IP 
blocking the right solution compared to throttling or connection limiting?

> Allow DBAs to kill individual client sessions without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Damien Stevenson
>Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>
>
> In production, there could be hundreds of clients connected to a Cassandra 
> cluster (maybe even from different applications), and if they use DataStax 
> Java Driver, each client will establish at least one TCP connection to a 
> Cassandra server (see 
> https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all 
> normal and at any given time, you can indeed see hundreds of ESTABLISHED 
> connections to port 9042 on a C* server (from netstat -na). The problem is 
> that sometimes when a C* cluster is under heavy load, when the DBA identifies 
> some client session that sends abusive amount of traffic to the C* server and 
> would like to stop it, they would like a lightweight approach rather than 
> shutting down the JVM or rolling restart the whole cluster to kill all 
> hundreds of connections in order to kill a single client session. If the DBA 
> had root privilege, they would have been able to do something at the OS 
> network level to achieve the same goal but oftentimes enterprise DBA role is 
> separate from OS sysadmin role, so the DBAs usually don't have that privilege.
> This is especially helpful when you have a multi-tenant C* cluster and you 
> want to have the impact for handling such client to be minimal to the other 
> applications. This feature (killing individual session) seems to be a common 
> feature in other databases (regardless of whether the client has some 
> reconnect logic or not). It could be implemented as a JMX MBean method and 
> exposed through nodetool to the DBAs.






[jira] [Commented] (CASSANDRA-7622) Implement virtual tables

2018-05-08 Thread Aleksey Yeschenko (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467507#comment-16467507
 ] 

Aleksey Yeschenko commented on CASSANDRA-7622:
--

Pushed part of virtual/read metadata separation 
[here|https://github.com/iamaleksey/cassandra/commits/7622-4.0]. I'm now 
sufficiently satisfied with it to move on to CQL integration review/edit.

> Implement virtual tables
> 
>
> Key: CASSANDRA-7622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Chris Lohfink
>Priority: Major
> Fix For: 4.x
>
> Attachments: screenshot-1.png
>
>
> There are a variety of reasons to want virtual tables, which would be any 
> table that would be backed by an API, rather than data explicitly managed and 
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a 
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml 
> configuration information. So it would be an alternate approach to 
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not 
> presupposing.






[jira] [Commented] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM

2018-05-08 Thread Ben Bromhead (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467541#comment-16467541
 ] 

Ben Bromhead commented on CASSANDRA-10789:
--

[~aweisberg]

Given that we allow per-node management of non-persistent settings via nodetool 
(e.g. compactionthroughput) and that we might not want to blacklist clients on 
all nodes (e.g. only in specific DCs), I think doing this at a per-node level 
makes sense.

If it becomes a burden from an operations perspective, we can improve on the 
underlying work.

[~spo...@gmail.com] - Throttling and connection limiting solves a different 
problem.
 * Throttling/Limiting = I want to enforce good behavior on my clients.
 * Blocking = I don't want a client to connect to this node, no matter how well 
behaved it is.

> Allow DBAs to kill individual client sessions without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Damien Stevenson
>Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>
>
> In production, there could be hundreds of clients connected to a Cassandra 
> cluster (maybe even from different applications), and if they use DataStax 
> Java Driver, each client will establish at least one TCP connection to a 
> Cassandra server (see 
> https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all 
> normal and at any given time, you can indeed see hundreds of ESTABLISHED 
> connections to port 9042 on a C* server (from netstat -na). The problem is 
> that sometimes when a C* cluster is under heavy load, when the DBA identifies 
> some client session that sends abusive amount of traffic to the C* server and 
> would like to stop it, they would like a lightweight approach rather than 
> shutting down the JVM or rolling restart the whole cluster to kill all 
> hundreds of connections in order to kill a single client session. If the DBA 
> had root privilege, they would have been able to do something at the OS 
> network level to achieve the same goal but oftentimes enterprise DBA role is 
> separate from OS sysadmin role, so the DBAs usually don't have that privilege.
> This is especially helpful when you have a multi-tenant C* cluster and you 
> want to have the impact for handling such client to be minimal to the other 
> applications. This feature (killing individual session) seems to be a common 
> feature in other databases (regardless of whether the client has some 
> reconnect logic or not). It could be implemented as a JMX MBean method and 
> exposed through nodetool to the DBAs.






[jira] [Comment Edited] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM

2018-05-08 Thread Ben Bromhead (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467541#comment-16467541
 ] 

Ben Bromhead edited comment on CASSANDRA-10789 at 5/8/18 3:15 PM:
--

[~aweisberg] - Given that we allow per-node management of non-persistent 
settings via nodetool (e.g. compactionthroughput) and that we might not want to 
blacklist clients on all nodes (e.g. only in specific DCs), I think doing this 
at a per-node level makes sense.

If it becomes a burden from an operations perspective, we can improve on the 
underlying work. I'd love to see this land without having to wait on an 
underlying coordination mechanism in Cassandra.

[~spo...@gmail.com] - Throttling and connection limiting solves a different 
problem.
 * Throttling/Limiting = I want to enforce good behavior on my clients.
 * Blocking = I don't want a client to connect to this node, no matter how well 
behaved it is.


was (Author: benbromhead):
[~aweisberg]

Given we allow per node management of non-persistent settings via nodetool 
(e.g. compactionthroughput etc) and we might not want to blacklist clients on 
all nodes (e.g. only specific DCs) I think doing this at a per node level makes 
sense.

If it becomes a burden for an operations perspective, then we can improve on 
the underlying work.

[~spo...@gmail.com] - Throttling and connection limiting solves a different 
problem.
 * Throttling/Limiting = I want to enforce good behavior on my clients.
 * Blocking = I don't want a client to connect to this node, no matter how well 
behaved it is.

> Allow DBAs to kill individual client sessions without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Damien Stevenson
>Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>
>
> In production, there could be hundreds of clients connected to a Cassandra 
> cluster (maybe even from different applications), and if they use DataStax 
> Java Driver, each client will establish at least one TCP connection to a 
> Cassandra server (see 
> https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all 
> normal and at any given time, you can indeed see hundreds of ESTABLISHED 
> connections to port 9042 on a C* server (from netstat -na). The problem is 
> that sometimes when a C* cluster is under heavy load, when the DBA identifies 
> some client session that sends abusive amount of traffic to the C* server and 
> would like to stop it, they would like a lightweight approach rather than 
> shutting down the JVM or rolling restart the whole cluster to kill all 
> hundreds of connections in order to kill a single client session. If the DBA 
> had root privilege, they would have been able to do something at the OS 
> network level to achieve the same goal but oftentimes enterprise DBA role is 
> separate from OS sysadmin role, so the DBAs usually don't have that privilege.
> This is especially helpful when you have a multi-tenant C* cluster and you 
> want to have the impact for handling such client to be minimal to the other 
> applications. This feature (killing individual session) seems to be a common 
> feature in other databases (regardless of whether the client has some 
> reconnect logic or not). It could be implemented as a JMX MBean method and 
> exposed through nodetool to the DBAs.






[jira] [Commented] (CASSANDRA-12271) NonSystemKeyspaces jmx attribute needs to return jre list

2018-05-08 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467606#comment-16467606
 ] 

Edward Ribeiro commented on CASSANDRA-12271:


Hi [~michaelsembwever], thanks for taking a look at this little contribution. 
:) I read CHANGES.txt and it looks good, and the patch is correct. So it's 
ready to commit, imho.

> NonSystemKeyspaces jmx attribute needs to return jre list
> -
>
> Key: CASSANDRA-12271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12271
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Chris Lohfink
>Assignee: Edward Ribeiro
>Priority: Major
>  Labels: lhf
> Fix For: 4.0, 3.11.3
>
> Attachments: CASSANDRA-12271.patch, screenshot-1.png, screenshot-2.png
>
>
> If you don't have the right Guava on the classpath, you can't query the 
> NonSystemKeyspaces attribute (e.g. from jconsole). Can reproduce using Swiss 
> Java Knife:
> {code}
> # java -jar sjk.jar mx -s localhost:7199 -mg -b 
> "org.apache.cassandra.db:type=StorageService" -f NonSystemKeyspaces
> org.apache.cassandra.db:type=StorageService
> java.rmi.UnmarshalException: error unmarshalling return; nested exception is: 
>   java.lang.ClassNotFoundException: 
> com.google.common.collect.ImmutableList$SerializedForm (no security manager: 
> RMI class loader disabled)
> {code}
> If it returned an ArrayList, a LinkedList, or anything else in the JRE, this 
> would be fixed.
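A minimal sketch of the fix the description suggests (class and method names here are illustrative stand-ins, not the actual StorageService code): copy the Guava-backed list into a plain `java.util.ArrayList` before it crosses the JMX/RMI boundary, so remote clients don't need Guava on their classpath to deserialize the attribute.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class KeyspaceListing {
    // Hypothetical stand-in for the Guava-backed ImmutableList the MBean
    // attribute used to return (imagine ImmutableList.of("ks1", "ks2")).
    static List<String> guavaBackedKeyspaces() {
        return Arrays.asList("ks1", "ks2");
    }

    // Copying into a JRE ArrayList before returning keeps the RMI-serialized
    // form free of third-party classes, so jconsole/sjk can unmarshal it.
    public static List<String> getNonSystemKeyspaces() {
        return new ArrayList<>(guavaBackedKeyspaces());
    }

    public static void main(String[] args) {
        System.out.println(getNonSystemKeyspaces()); // prints [ks1, ks2]
    }
}
```

The same contents come back; only the concrete class in the serialized stream changes to one every JRE has.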






[jira] [Commented] (CASSANDRA-7622) Implement virtual tables

2018-05-08 Thread Benjamin Lerer (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467652#comment-16467652
 ] 

Benjamin Lerer commented on CASSANDRA-7622:
---

 [here|https://github.com/apache/cassandra/compare/trunk...blerer:7622-trunk] 
is the patch I promised.

> Implement virtual tables
> 
>
> Key: CASSANDRA-7622
> URL: https://issues.apache.org/jira/browse/CASSANDRA-7622
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL
>Reporter: Tupshin Harper
>Assignee: Chris Lohfink
>Priority: Major
> Fix For: 4.x
>
> Attachments: screenshot-1.png
>
>
> There are a variety of reasons to want virtual tables, which would be any 
> table that would be backed by an API, rather than data explicitly managed and 
> stored as sstables.
> One possible use case would be to expose JMX data through CQL as a 
> resurrection of CASSANDRA-3527.
> Another is a more general framework to implement the ability to expose yaml 
> configuration information. So it would be an alternate approach to 
> CASSANDRA-7370.
> A possible implementation would be in terms of CASSANDRA-7443, but I am not 
> presupposing.






[jira] [Commented] (CASSANDRA-14437) SSTableLoader does not work when "internode_encryption : all" is set

2018-05-08 Thread Paul Cheon (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467717#comment-16467717
 ] 

Paul Cheon commented on CASSANDRA-14437:


When I display the sstableloader help page, there is no internode_encryption 
option for sstableloader.

{noformat}
pcheon@yvr-paul-cas003:~$ sstableloader --help
usage: sstableloader [options] 

Bulk load the sstables found in the directory  to the configured
cluster. The parent directories of  are used as the target
keyspace/table name. So for instance, to load an sstable named
Standard1-g-1-Data.db into Keyspace1/Standard1, you will need to have the
files Standard1-g-1-Data.db and Standard1-g-1-Index.db into a directory
/path/to/Keyspace1/Standard1/.
 -alg,--ssl-alg                 Client SSL: algorithm (default: SunX509)
 -ap,--auth-provider            custom AuthProvider class name for cassandra
                                authentication
 -ciphers,--ssl-ciphers         Client SSL: comma-separated list of encryption
                                suites to use
 -cph,--connections-per-host    number of concurrent connections-per-host.
 -d,--nodes                     Required. try to connect to these hosts (comma
                                separated) initially for ring information
 -f,--conf-path                 cassandra.yaml file path for streaming
                                throughput and client/server SSL.
 -h,--help                      display this help message
 -i,--ignore                    don't stream to this (comma separated) list of
                                nodes
 -idct,--inter-dc-throttle      inter-datacenter throttle speed in Mbits
                                (default unlimited)
 -ks,--keystore                 Client SSL: full path to keystore
 -kspw,--keystore-password      Client SSL: password of the keystore
 --no-progress                  don't display progress
 -p,--port                      port used for native connection (default 9042)
 -prtcl,--ssl-protocol          Client SSL: connections protocol to use
                                (default: TLS)
 -pw,--password                 password for cassandra authentication
 -sp,--storage-port             port used for internode communication
                                (default 7000)
 -ssp,--ssl-storage-port        port used for TLS internode communication
                                (default 7001)
 -st,--store-type               Client SSL: type of store
 -t,--throttle                  throttle speed in Mbits (default unlimited)
 -ts,--truststore               Client SSL: full path to truststore
 -tspw,--truststore-password    Client SSL: password of the truststore
 -u,--username                  username for cassandra authentication
 -v,--verbose                   verbose output

You can provide cassandra.yaml file with -f command line option to set up
streaming throughput, client and server encryption options. Only
stream_throughput_outbound_megabits_per_sec, server_encryption_options and
client_encryption_options are read from yaml. You can override options
read from cassandra.yaml with corresponding command line options.
pcheon@yvr-paul-cas003:~$
{noformat}

How can I force sstableloader to use the internode_encryption settings from the 
command line?



> SSTableLoader does not work when "internode_encryption : all" is set
> 
>
> Key: CASSANDRA-14437
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14437
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Paul Cheon
>Priority: Major
> Fix For: 3.11.2
>
>
> I am trying to use sstableloader to restore snapshot.
> If "internode_encryption : all" is set

[jira] [Created] (CASSANDRA-14440) we are seeing an 0 results when we select the record as soon as we insert

2018-05-08 Thread Umadevi Nalluri (JIRA)
Umadevi Nalluri created CASSANDRA-14440:
---

 Summary: we are seeing an 0 results when we select the record as 
soon as we insert
 Key: CASSANDRA-14440
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14440
 Project: Cassandra
  Issue Type: Bug
 Environment: dev 
Reporter: Umadevi Nalluri


We are seeing 0 results when we select a record immediately after inserting it. 
Not sure why Cassandra can't see the record immediately; the insert was 
successful, but after 10 seconds the record can be retrieved. Please advise on 
what we should do.






[jira] [Commented] (CASSANDRA-14440) we are seeing an 0 results when we select the record as soon as we insert

2018-05-08 Thread Jaydeepkumar Chovatia (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467911#comment-16467911
 ] 

Jaydeepkumar Chovatia commented on CASSANDRA-14440:
---

What consistency level are you using when you insert your data, and when 
reading? Did you make sure there were no exceptions during the write operation?

Also, please go through [this 
article|https://academy.datastax.com/support-blog/dude-where%E2%80%99s-my-data] 
to make sure none of these is a problem for you.

> we are seeing an 0 results when we select the record as soon as we insert
> -
>
> Key: CASSANDRA-14440
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14440
> Project: Cassandra
>  Issue Type: Bug
> Environment: dev 
>Reporter: udkantheti
>Priority: Major
>
> We are seeing 0 results when we select a record immediately after inserting 
> it. Not sure why Cassandra can't see the record immediately; the insert was 
> successful, but after 10 seconds the record can be retrieved. Please advise 
> on what we should do.






[jira] [Commented] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM

2018-05-08 Thread Ariel Weisberg (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467941#comment-16467941
 ] 

Ariel Weisberg commented on CASSANDRA-10789:


Blacklisting only in specific DCs is kind of orthogonal, because you can have 
this functionality be per-DC. In fact, there is already another ticket for the 
DC-specific issue that uses authz: CASSANDRA-13985.

While I am underwhelmed by how this is supposed to work, I don't think its not 
being persistent and/or queryable introduces any technical debt. It does mean 
that if we release 4.0 without it, then 4.0 will never have it, which is kind 
of a pain. It seems like we are pushing the complexity of maintaining these 
lists to operational tools outside the database, which are not open source or 
shipped with the database.

I don't see how it's useful to blacklist a client at just one node when it's 
going to connect to all nodes.

We should at least use a set of blacklisted hosts rather than iterating a 
list. The connection tracker should probably be updated to be a Multimap so we 
can look up the connections to kill without iterating.
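A rough sketch of that data-structure suggestion (class, field, and method names are hypothetical, not the actual patch): keep the blacklist as a Set for O(1) membership checks, and index connections by client host, multimap-style, so the connections to kill can be looked up directly instead of scanning every tracked connection.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ConnectionIndex {
    private final Set<String> blacklistedHosts = new HashSet<>();
    // Multimap-style index: client host -> its open connections
    // (connection ids stand in for the real connection objects).
    private final Map<String, List<Integer>> connectionsByHost = new HashMap<>();

    void register(String host, int connectionId) {
        connectionsByHost.computeIfAbsent(host, h -> new ArrayList<>()).add(connectionId);
    }

    void blacklist(String host) {
        blacklistedHosts.add(host);
    }

    // Set lookup: O(1), no iteration over a list of hosts.
    boolean isBlacklisted(String host) {
        return blacklistedHosts.contains(host);
    }

    // Direct lookup of the connections to kill, no scan of all connections.
    List<Integer> connectionsToKill(String host) {
        return connectionsByHost.getOrDefault(host, Collections.emptyList());
    }

    public static void main(String[] args) {
        ConnectionIndex idx = new ConnectionIndex();
        idx.register("10.0.0.5", 1);
        idx.register("10.0.0.5", 2);
        idx.register("10.0.0.6", 3);
        idx.blacklist("10.0.0.5");
        System.out.println(idx.connectionsToKill("10.0.0.5")); // prints [1, 2]
    }
}
```

A real implementation could use Guava's Multimap for the same shape; a plain Map of lists keeps this sketch dependency-free.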

> Allow DBAs to kill individual client sessions without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Damien Stevenson
>Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>
>
> In production, there could be hundreds of clients connected to a Cassandra 
> cluster (maybe even from different applications), and if they use DataStax 
> Java Driver, each client will establish at least one TCP connection to a 
> Cassandra server (see 
> https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all 
> normal and at any given time, you can indeed see hundreds of ESTABLISHED 
> connections to port 9042 on a C* server (from netstat -na). The problem is 
> that sometimes when a C* cluster is under heavy load, when the DBA identifies 
> some client session that sends abusive amount of traffic to the C* server and 
> would like to stop it, they would like a lightweight approach rather than 
> shutting down the JVM or rolling restart the whole cluster to kill all 
> hundreds of connections in order to kill a single client session. If the DBA 
> had root privilege, they would have been able to do something at the OS 
> network level to achieve the same goal but oftentimes enterprise DBA role is 
> separate from OS sysadmin role, so the DBAs usually don't have that privilege.
> This is especially helpful when you have a multi-tenant C* cluster and you 
> want to have the impact for handling such client to be minimal to the other 
> applications. This feature (killing individual session) seems to be a common 
> feature in other databases (regardless of whether the client has some 
> reconnect logic or not). It could be implemented as a JMX MBean method and 
> exposed through nodetool to the DBAs.






[jira] [Commented] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM

2018-05-08 Thread Ben Bromhead (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467993#comment-16467993
 ] 

Ben Bromhead commented on CASSANDRA-10789:
--

I agree with your point on having it query-able (e.g. I would advocate doing it 
via JMX, again to keep things simple).

CASSANDRA-13985 - Can do similar things but at the user level. I wish each 
client instance had an individual user and rotated credentials, but this is not 
normally the case.

The current suite of tools and admin commands that Cassandra supports at the 
moment pushes this kind of coordination to external tools and I'm not sure it's 
worth waiting for internal management of cluster wide commands unless they are 
just round the corner (which would be awesome).

> Allow DBAs to kill individual client sessions without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Damien Stevenson
>Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>
>
> In production, there could be hundreds of clients connected to a Cassandra 
> cluster (maybe even from different applications), and if they use DataStax 
> Java Driver, each client will establish at least one TCP connection to a 
> Cassandra server (see 
> https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all 
> normal and at any given time, you can indeed see hundreds of ESTABLISHED 
> connections to port 9042 on a C* server (from netstat -na). The problem is 
> that sometimes when a C* cluster is under heavy load, when the DBA identifies 
> some client session that sends abusive amount of traffic to the C* server and 
> would like to stop it, they would like a lightweight approach rather than 
> shutting down the JVM or rolling restart the whole cluster to kill all 
> hundreds of connections in order to kill a single client session. If the DBA 
> had root privilege, they would have been able to do something at the OS 
> network level to achieve the same goal but oftentimes enterprise DBA role is 
> separate from OS sysadmin role, so the DBAs usually don't have that privilege.
> This is especially helpful when you have a multi-tenant C* cluster and you 
> want to have the impact for handling such client to be minimal to the other 
> applications. This feature (killing individual session) seems to be a common 
> feature in other databases (regardless of whether the client has some 
> reconnect logic or not). It could be implemented as a JMX MBean method and 
> exposed through nodetool to the DBAs.






[jira] [Commented] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-05-08 Thread Patrick Bannister (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467992#comment-16467992
 ] 

Patrick Bannister commented on CASSANDRA-14298:
---

The encoding issue doesn't sound good - I'll take another look.

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: CASSANDRA-14298.txt, CASSANDRA-14298_old.txt, 
> cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Comment Edited] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM

2018-05-08 Thread Ben Bromhead (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16467993#comment-16467993
 ] 

Ben Bromhead edited comment on CASSANDRA-10789 at 5/8/18 9:34 PM:
--

I agree with your point on having it query-able (e.g. I would advocate doing it 
via JMX, again to keep things simple).

CASSANDRA-13985 - Can do similar things, but at the user level. I wish 
applications that use Cassandra would ensure each client instance had an 
individual set of user credentials and rotated secrets, etc., but based on our 
experience this is not normally the case.

The current suite of tools and admin commands that Cassandra supports at the 
moment pushes this kind of coordination to external tools and I'm not sure it's 
worth waiting for internal management of cluster wide commands unless they are 
just round the corner (which would be awesome).


was (Author: benbromhead):
I agree with your point on having it query-able (e.g. I would advocate doing 
via JMX, again to keep things simple).

CASSANDRA-13985 - Can do similar things but at the user level. I wish each 
client instance had an individual user and rotated credentials, but this is not 
normally the case.

The current suite of tools and admin commands that Cassandra supports at the 
moment pushes this kind of coordination to external tools and I'm not sure it's 
worth waiting for internal management of cluster wide commands unless they are 
just round the corner (which would be awesome).

> Allow DBAs to kill individual client sessions without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Damien Stevenson
>Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>
>
> In production, there could be hundreds of clients connected to a Cassandra 
> cluster (maybe even from different applications), and if they use DataStax 
> Java Driver, each client will establish at least one TCP connection to a 
> Cassandra server (see 
> https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all 
> normal and at any given time, you can indeed see hundreds of ESTABLISHED 
> connections to port 9042 on a C* server (from netstat -na). The problem is 
> that sometimes when a C* cluster is under heavy load, when the DBA identifies 
> some client session that sends abusive amount of traffic to the C* server and 
> would like to stop it, they would like a lightweight approach rather than 
> shutting down the JVM or rolling restart the whole cluster to kill all 
> hundreds of connections in order to kill a single client session. If the DBA 
> had root privilege, they would have been able to do something at the OS 
> network level to achieve the same goal but oftentimes enterprise DBA role is 
> separate from OS sysadmin role, so the DBAs usually don't have that privilege.
> This is especially helpful when you have a multi-tenant C* cluster and you 
> want to have the impact for handling such client to be minimal to the other 
> applications. This feature (killing individual session) seems to be a common 
> feature in other databases (regardless of whether the client has some 
> reconnect logic or not). It could be implemented as a JMX MBean method and 
> exposed through nodetool to the DBAs.






[jira] [Commented] (CASSANDRA-14405) Transient Replication: Metadata refactor

2018-05-08 Thread Blake Eggleston (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468072#comment-16468072
 ] 

Blake Eggleston commented on CASSANDRA-14405:
-

Initial refactor is here: 
[https://github.com/bdeggleston/cassandra/tree/14405-replicas]. All utests and 
dtests are passing.

The main change is the introduction of the Replica class. When a replication 
strategy is asked for the replicas for a given token, instead of returning a 
collection of endpoint addresses, it returns a collection of Replica objects. 
In addition to the endpoint information, the replica object also contains the 
transient/full status, and the token range it’s a replica for. Most 
AbstractReplicationStrategy methods that returned InetAddressAndPort or 
Range objects, as well as the methods that depend on them, now return 
Replica objects.

I also added a collection-like Replicas class, to be used in lieu of 
Collection<Replica>. During the refactor, almost all bugs were due to calls to 
contains/remove no longer totally making sense. Since the Replica class adds 
additional information, calls to contains/remove become a bit ambiguous, since 
you're often interested in whether a collection of replicas contains a specific 
endpoint or range, not a specific combination of endpoint/range/transient. So 
the Replicas class extends Iterable<Replica> and forces you to be explicit 
about what exactly you're interested in the collection containing, or removing 
from it.
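A simplified sketch of the shape described above (field and method names are illustrative stand-ins for the actual branch; strings stand in for InetAddressAndPort and token ranges): a Replica bundles endpoint, range, and full/transient status, and a Replicas wrapper replaces the generic contains/remove with explicit endpoint- or range-based queries.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ReplicaSketch {
    // A replica: which endpoint, for which token range, and whether it
    // holds a full or transient copy of the data.
    record Replica(String endpoint, String range, boolean full) {}

    // Collection-like wrapper used in lieu of Collection<Replica>: no
    // ambiguous contains(Object), only explicit endpoint/range queries.
    static class Replicas implements Iterable<Replica> {
        private final List<Replica> replicas = new ArrayList<>();

        void add(Replica r) { replicas.add(r); }

        boolean containsEndpoint(String endpoint) {
            return replicas.stream().anyMatch(r -> r.endpoint().equals(endpoint));
        }

        boolean containsRange(String range) {
            return replicas.stream().anyMatch(r -> r.range().equals(range));
        }

        @Override
        public Iterator<Replica> iterator() { return replicas.iterator(); }
    }

    public static void main(String[] args) {
        Replicas rs = new Replicas();
        rs.add(new Replica("127.0.0.1:7000", "(0,100]", true));
        rs.add(new Replica("127.0.0.2:7000", "(100,200]", false));
        // Callers must say which kind of membership they mean.
        System.out.println(rs.containsEndpoint("127.0.0.1:7000")); // prints true
        System.out.println(rs.containsRange("(200,300]"));         // prints false
    }
}
```

Making the two questions separate methods turns the ambiguity the comment describes into a compile-time choice at each call site.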

 

> Transient Replication: Metadata refactor
> 
>
> Key: CASSANDRA-14405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14405
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core, Distributed Metadata, Documentation and Website
>Reporter: Ariel Weisberg
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> Add support to CQL and NTS for configuring keyspaces to have transient 
> replicas.
> Add syntax allowing a keyspace using NTS to declare some replicas in each DC 
> as transient.
> Implement metadata internal to the DB so that it's possible to identify what 
> replicas are transient for a given token or range.
> Introduce Replica which is an InetAddressAndPort and a boolean indicating 
> whether the replica is transient. ReplicatedRange which is a wrapper around a 
> Range that indicates if the range is transient.
> Block altering of keyspaces to use transient replication if they already 
> contain MVs or 2i.
> Block the creation of MV or 2i in keyspaces using transient replication.
> Block the creation/alteration of keyspaces using transient replication if the 
> experimental flag is not set.
> Update web site, CQL spec, and any other documentation for the new syntax.






[jira] [Updated] (CASSANDRA-14405) Transient Replication: Metadata refactor

2018-05-08 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-14405:

Reviewer: Ariel Weisberg

> Transient Replication: Metadata refactor
> 
>
> Key: CASSANDRA-14405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14405
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core, Distributed Metadata, Documentation and Website
>Reporter: Ariel Weisberg
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> Add support to CQL and NTS for configuring keyspaces to have transient 
> replicas.
> Add syntax allowing a keyspace using NTS to declare some replicas in each DC 
> as transient.
> Implement metadata internal to the DB so that it's possible to identify what 
> replicas are transient for a given token or range.
> Introduce Replica which is an InetAddressAndPort and a boolean indicating 
> whether the replica is transient. ReplicatedRange which is a wrapper around a 
> Range that indicates if the range is transient.
> Block altering of keyspaces to use transient replication if they already 
> contain MVs or 2i.
> Block the creation of MV or 2i in keyspaces using transient replication.
> Block the creation/alteration of keyspaces using transient replication if the 
> experimental flag is not set.
> Update web site, CQL spec, and any other documentation for the new syntax.
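For intuition, the two metadata types described above can be sketched roughly as follows. This is a hypothetical Python model for illustration only; the real implementation is Java inside Cassandra, and the names/fields here simply mirror the description:

```python
from dataclasses import dataclass

# Hypothetical sketch of the metadata described above; the actual code is
# Java, so these names and fields are illustrative, not the real API.
@dataclass(frozen=True)
class Replica:
    host: str        # stands in for InetAddressAndPort
    port: int
    transient: bool  # True if this replica holds the data only transiently

@dataclass(frozen=True)
class ReplicatedRange:
    left: int        # exclusive start token of the wrapped Range
    right: int       # inclusive end token of the wrapped Range
    transient: bool  # True if this range is replicated transiently

r = Replica("10.0.0.1", 7000, transient=True)
print(r.transient)  # → True
```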






[jira] [Updated] (CASSANDRA-14405) Transient Replication: Metadata refactor

2018-05-08 Thread Blake Eggleston (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Blake Eggleston updated CASSANDRA-14405:

Status: Patch Available  (was: Open)

> Transient Replication: Metadata refactor
> 
>
> Key: CASSANDRA-14405
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14405
> Project: Cassandra
>  Issue Type: Sub-task
>  Components: Core, Distributed Metadata, Documentation and Website
>Reporter: Ariel Weisberg
>Assignee: Blake Eggleston
>Priority: Major
> Fix For: 4.0
>
>
> Add support to CQL and NTS for configuring keyspaces to have transient 
> replicas.
> Add syntax allowing a keyspace using NTS to declare some replicas in each DC 
> as transient.
> Implement metadata internal to the DB so that it's possible to identify what 
> replicas are transient for a given token or range.
> Introduce Replica which is an InetAddressAndPort and a boolean indicating 
> whether the replica is transient. ReplicatedRange which is a wrapper around a 
> Range that indicates if the range is transient.
> Block altering of keyspaces to use transient replication if they already 
> contain MVs or 2i.
> Block the creation of MV or 2i in keyspaces using transient replication.
> Block the creation/alteration of keyspaces using transient replication if the 
> experimental flag is not set.
> Update web site, CQL spec, and any other documentation for the new syntax.






[jira] [Commented] (CASSANDRA-12151) Audit logging for database activity

2018-05-08 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468087#comment-16468087
 ] 

Jason Brown commented on CASSANDRA-12151:
-

Pushed a few trivial cleanup patches (including some of the things [~eperott] 
reported), and am running dtests and utests:

||12151||
|[branch|https://github.com/jasobrown/cassandra/tree/trunk_CASSANDRA-12151]|
|[utests & 
dtests|https://circleci.com/gh/jasobrown/workflows/cassandra/tree/trunk_CASSANDRA-12151]|

bq. should we create some dtests for this as well?

I thought about that, as well. However, as this feature is not distributed, we 
can test all the functionality via unit tests. In fact, [~vinaykumarcse] 
already has a bunch of unit tests in this patch. I'm going to give this 
patch one last major review, and will see if we need more tests.


> Audit logging for database activity
> ---
>
> Key: CASSANDRA-12151
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12151
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: stefan setyadi
>Assignee: Vinay Chella
>Priority: Major
> Fix For: 4.x
>
> Attachments: 12151.txt, CASSANDRA_12151-benchmark.html, 
> DesignProposal_AuditingFeature_ApacheCassandra_v1.docx
>
>
> we would like a way to enable cassandra to log database activity being done 
> on our server.
> It should show username, remote address, timestamp, action type, keyspace, 
> column family, and the query statement.
> it should also be able to log connection attempt and changes to the 
> user/roles.
> I was thinking of making a new keyspace and insert an entry for every 
> activity that occurs.
> Then It would be possible to query for specific activity or a query targeting 
> a specific keyspace and column family.
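As a rough illustration of the fields requested above, here is a hypothetical Python record; this is not the schema or API of the actual patch, only a sketch of the data an audit entry would carry:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative record holding the audit fields requested above; the names
# are assumptions for this sketch, not the actual patch's schema.
@dataclass
class AuditEntry:
    username: str
    remote_address: str
    action_type: str       # e.g. SELECT, UPDATE, LOGIN, CREATE ROLE
    keyspace: str
    column_family: str
    statement: str         # the query statement being audited
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

e = AuditEntry("alice", "10.0.0.7", "SELECT", "ks1", "t1",
               "SELECT * FROM ks1.t1")
print(e.action_type)  # → SELECT
```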






[jira] [Created] (CASSANDRA-14441) Materialized view is not deleting/updating data when made changes in base table

2018-05-08 Thread SWAPNIL BALWANT BHISEY (JIRA)
SWAPNIL BALWANT BHISEY created CASSANDRA-14441:
--

 Summary: Materialized view is not deleting/updating data when made 
changes in base table
 Key: CASSANDRA-14441
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14441
 Project: Cassandra
  Issue Type: Bug
  Components: Materialized Views
Reporter: SWAPNIL BALWANT BHISEY
 Fix For: 3.11.x


We have seen an issue with the materialized view in 3.11.1:

1) We inserted a row in the test table, and the same record appeared in the 
test_mat view with enabled = TRUE.
2) When I updated the same record to enabled = FALSE, a new row was created in 
the test_mat view (one with TRUE and one with FALSE), while in the test table 
the original record was updated to FALSE.
3) When I deleted the record by feature UUID, only the record with FALSE was 
deleted in both tables; I could still see the TRUE record in the test_mat view.

The issue is not reproducible in 3.11.2.
Steps:


CREATE TABLE test ( 
 feature_uuid uuid, 
 namespace text, 
 feature_name text, 
 allocation_type text, 
 description text, 
 enabled boolean, 
 expiration_dt timestamp, 
 last_modified_dt timestamp, 
 last_modified_user text, 
 persist_allocations boolean, 
 rule text, 
 PRIMARY KEY (feature_uuid, namespace, feature_name, allocation_type) 
) WITH CLUSTERING ORDER BY (namespace ASC, feature_name ASC, allocation_type 
ASC) 
 AND bloom_filter_fp_chance = 0.01 
 AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} 
 AND comment = '' 
 AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'} 
 AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'} 
 AND crc_check_chance = 1.0 
 AND dclocal_read_repair_chance = 0.3 
 AND default_time_to_live = 63072000 
 AND gc_grace_seconds = 864000 
 AND max_index_interval = 2048 
 AND memtable_flush_period_in_ms = 0 
 AND min_index_interval = 128 
 AND read_repair_chance = 0.3 
 AND speculative_retry = '99PERCENTILE'; 
 
CREATE MATERIALIZED VIEW test_mat AS 
 SELECT allocation_type, enabled, feature_uuid, namespace, feature_name, 
last_modified_dt, last_modified_user, persist_allocations, rule 
 FROM test
 WHERE feature_uuid IS NOT NULL AND allocation_type IS NOT NULL AND namespace 
IS NOT NULL AND feature_name IS NOT NULL AND enabled IS NOT NULL 
 PRIMARY KEY (allocation_type, enabled, feature_uuid, namespace, feature_name) 
 WITH CLUSTERING ORDER BY (enabled ASC, feature_uuid ASC, namespace ASC, 
feature_name ASC) 
 AND bloom_filter_fp_chance = 0.01 
 AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} 
 AND comment = '' 
 AND compaction = {'class': 
'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
'max_threshold': '32', 'min_threshold': '4'} 
 AND compression = {'chunk_length_in_kb': '64', 'class': 
'org.apache.cassandra.io.compress.LZ4Compressor'} 
 AND crc_check_chance = 1.0 
 AND dclocal_read_repair_chance = 0.1 
 AND default_time_to_live = 0 
 AND gc_grace_seconds = 864000 
 AND max_index_interval = 2048 
 AND memtable_flush_period_in_ms = 0 
 AND min_index_interval = 128 
 AND read_repair_chance = 0.0 
 AND speculative_retry = '99PERCENTILE'; 
 
 
INSERT INTO test (feature_uuid, namespace, feature_name, allocation_type, 
description, enabled, expiration_dt, last_modified_dt, last_modified_user, 
persist_allocations,rule) VALUES 
(uuid(),'Service','NEW','preallocation','20newproduct',TRUE,'2019-10-02 
05:05:05 -0500','2018-08-03 06:06:06 -0500','swapnil',TRUE,'NEW'); 
UPDATE test SET enabled=FALSE WHERE 
feature_uuid=b2d5c245-e30e-4ea8-8609-d36b627dbb2a and namespace='Service' and 
feature_name='NEW' and allocation_type='preallocation' IF EXISTS ; 
Delete from test where feature_uuid=98e6ebcc-cafd-4889-bf3d-774a746a3298;

 






[jira] [Updated] (CASSANDRA-14441) Materialized view is not deleting/updating data when made changes in base table

2018-05-08 Thread SWAPNIL BALWANT BHISEY (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SWAPNIL BALWANT BHISEY updated CASSANDRA-14441:
---
Priority: Minor  (was: Major)

> Materialized view is not deleting/updating data when made changes in base 
> table
> ---
>
> Key: CASSANDRA-14441
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14441
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: SWAPNIL BALWANT BHISEY
>Priority: Minor
> Fix For: 3.11.x
>
>
> We have seen an issue with the materialized view in 3.11.1:
> 1) We inserted a row in the test table, and the same record appeared in the 
> test_mat view with enabled = TRUE.
> 2) When I updated the same record to enabled = FALSE, a new row was created 
> in the test_mat view (one with TRUE and one with FALSE), while in the test 
> table the original record was updated to FALSE.
> 3) When I deleted the record by feature UUID, only the record with FALSE was 
> deleted in both tables; I could still see the TRUE record in the test_mat 
> view.
> The issue is not reproducible in 3.11.2.
> Steps:
> CREATE TABLE test ( 
>  feature_uuid uuid, 
>  namespace text, 
>  feature_name text, 
>  allocation_type text, 
>  description text, 
>  enabled boolean, 
>  expiration_dt timestamp, 
>  last_modified_dt timestamp, 
>  last_modified_user text, 
>  persist_allocations boolean, 
>  rule text, 
>  PRIMARY KEY (feature_uuid, namespace, feature_name, allocation_type) 
> ) WITH CLUSTERING ORDER BY (namespace ASC, feature_name ASC, allocation_type 
> ASC) 
>  AND bloom_filter_fp_chance = 0.01 
>  AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} 
>  AND comment = '' 
>  AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'} 
>  AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'} 
>  AND crc_check_chance = 1.0 
>  AND dclocal_read_repair_chance = 0.3 
>  AND default_time_to_live = 63072000 
>  AND gc_grace_seconds = 864000 
>  AND max_index_interval = 2048 
>  AND memtable_flush_period_in_ms = 0 
>  AND min_index_interval = 128 
>  AND read_repair_chance = 0.3 
>  AND speculative_retry = '99PERCENTILE'; 
>  
> CREATE MATERIALIZED VIEW test_mat AS 
>  SELECT allocation_type, enabled, feature_uuid, namespace, feature_name, 
> last_modified_dt, last_modified_user, persist_allocations, rule 
>  FROM test
>  WHERE feature_uuid IS NOT NULL AND allocation_type IS NOT NULL AND namespace 
> IS NOT NULL AND feature_name IS NOT NULL AND enabled IS NOT NULL 
>  PRIMARY KEY (allocation_type, enabled, feature_uuid, namespace, 
> feature_name) 
>  WITH CLUSTERING ORDER BY (enabled ASC, feature_uuid ASC, namespace ASC, 
> feature_name ASC) 
>  AND bloom_filter_fp_chance = 0.01 
>  AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} 
>  AND comment = '' 
>  AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'} 
>  AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'} 
>  AND crc_check_chance = 1.0 
>  AND dclocal_read_repair_chance = 0.1 
>  AND default_time_to_live = 0 
>  AND gc_grace_seconds = 864000 
>  AND max_index_interval = 2048 
>  AND memtable_flush_period_in_ms = 0 
>  AND min_index_interval = 128 
>  AND read_repair_chance = 0.0 
>  AND speculative_retry = '99PERCENTILE'; 
>  
>  
> INSERT INTO test (feature_uuid, namespace, feature_name, allocation_type, 
> description, enabled, expiration_dt, last_modified_dt, last_modified_user, 
> persist_allocations,rule) VALUES 
> (uuid(),'Service','NEW','preallocation','20newproduct',TRUE,'2019-10-02 
> 05:05:05 -0500','2018-08-03 06:06:06 -0500','swapnil',TRUE,'NEW'); 
> UPDATE test SET enabled=FALSE WHERE 
> feature_uuid=b2d5c245-e30e-4ea8-8609-d36b627dbb2a and namespace='Service' and 
> feature_name='NEW' and allocation_type='preallocation' IF EXISTS ; 
> Delete from test where feature_uuid=98e6ebcc-cafd-4889-bf3d-774a746a3298;
>  






[jira] [Assigned] (CASSANDRA-12793) invalid jvm type and architecture [cassandra-env.sh]

2018-05-08 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck reassigned CASSANDRA-12793:
---

Assignee: Stefan Podkowinski

> invalid jvm type and architecture [cassandra-env.sh]
> 
>
> Key: CASSANDRA-12793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12793
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: ubuntu 16.04, openjdk 1.8.0_91
>Reporter: Ali Ebrahiminejad
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 3.11.x
>
> Attachments: 12793.patch
>
>
> In cassandra-env.sh the part that determines the type of JVM we'll be running 
> on doesn't provide the right answer for openjdk 1.8.0_91.
> value of java_ver_output is "openjdk version "1.8.0_91" OpenJDK Runtime 
> Environment (build 1.8.0_91-8u91-b14-3ubuntu1~16.04.1-b14) OpenJDK 64-Bit 
> Server VM (build 25.91-b14, mixed mode)", yet the command looks for "java 
> version" (jvm=`echo "$java_ver_output" | grep -A 1 'java version' ...) which 
> does not exist.
> I guess it should be replaced with jvm=`echo "$java_ver_output" | grep -A 1 
> '[openjdk|java] version' | awk 'NR==2 {print $1}'`
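As a side note, `[...]` in grep is a bracket expression rather than alternation, so the proposed pattern only matches both vendor strings by coincidence. The intended match can be sketched outside the shell script as follows (Python used purely for illustration; the sample output string is taken from the report above):

```python
import re

# Sample `java -version` output from the report above (OpenJDK 1.8 on Ubuntu).
java_ver_output = (
    'openjdk version "1.8.0_91"\n'
    'OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-3ubuntu1~16.04.1-b14)\n'
    'OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)\n'
)

# Match either "openjdk version" or "java version" with real alternation,
# then take the first word of the following line -- mirroring what the
# grep -A 1 ... | awk 'NR==2 {print $1}' pipeline extracts.
lines = java_ver_output.splitlines()
jvm = ""
for i, line in enumerate(lines):
    if re.search(r'(openjdk|java) version', line) and i + 1 < len(lines):
        jvm = lines[i + 1].split()[0]
        break
print(jvm)  # → OpenJDK
```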






[jira] [Comment Edited] (CASSANDRA-12271) NonSystemKeyspaces jmx attribute needs to return jre list

2018-05-08 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466855#comment-16466855
 ] 

mck edited comment on CASSANDRA-12271 at 5/9/18 12:09 AM:
--

|| Branch || uTest || aTest || dTest ||
|[cassandra-3.11_12271|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_12271]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12271.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12271]|
 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/17/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/17]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/546/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/546]
 |
|[trunk_12271|https://github.com/thelastpickle/cassandra/tree/mck/trunk_12271]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12271.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12271]|
 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/18/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/18]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/547/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/547]
 |


was (Author: michaelsembwever):
|| Branch || uTest || aTest || dTest ||
|[cassandra-3.11_12271|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_12271]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12271.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12271]|
 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/17/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/17]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/546/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/546]
 |
|[trunk_12244|https://github.com/thelastpickle/cassandra/tree/mck/trunk_12271]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12271.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12271]|
 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/18/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/18]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/547/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/547]
 |

> NonSystemKeyspaces jmx attribute needs to return jre list
> -
>
> Key: CASSANDRA-12271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12271
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Chris Lohfink
>Assignee: Edward Ribeiro
>Priority: Major
>  Labels: lhf
> Fix For: 4.0, 3.11.3
>
> Attachments: CASSANDRA-12271.patch, screenshot-1.png, screenshot-2.png
>
>
> If you don't have the right Guava on the classpath you can't query the 
> NonSystemKeyspaces attribute (e.g. from jconsole). Can reproduce using Swiss 
> Java Knife:
> {code}
> # java -jar sjk.jar mx -s localhost:7199 -mg -b 
> "org.apache.cassandra.db:type=StorageService" -f NonSystemKeyspaces
> org.apache.cassandra.db:type=StorageService
> java.rmi.UnmarshalException: error unmarshalling return; nested exception is: 
>   java.lang.ClassNotFoundException: 
> com.google.common.collect.ImmutableList$SerializedForm (no security manager: 
> RMI class loader disabled)
> {code}
> If we return an ArrayList, LinkedList, or anything else in the JRE, this 
> will be fixed






[jira] [Commented] (CASSANDRA-12793) invalid jvm type and architecture [cassandra-env.sh]

2018-05-08 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468149#comment-16468149
 ] 

mck commented on CASSANDRA-12793:
-

||Branch||uTest||dTest||
|[cassandra-3.11_12793|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_12793]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12793.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12793]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/546/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/549]|
|[trunk_12793|https://github.com/thelastpickle/cassandra/tree/mck/trunk_12793]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12793.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12793]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/547/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/550]|

(skipped aTest column, the java based tests really don't mean much here)

> invalid jvm type and architecture [cassandra-env.sh]
> 
>
> Key: CASSANDRA-12793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12793
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: ubuntu 16.04, openjdk 1.8.0_91
>Reporter: Ali Ebrahiminejad
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 3.11.x
>
> Attachments: 12793.patch
>
>
> In cassandra-env.sh the part that determines the type of JVM we'll be running 
> on doesn't provide the right answer for openjdk 1.8.0_91.
> value of java_ver_output is "openjdk version "1.8.0_91" OpenJDK Runtime 
> Environment (build 1.8.0_91-8u91-b14-3ubuntu1~16.04.1-b14) OpenJDK 64-Bit 
> Server VM (build 25.91-b14, mixed mode)", yet the command looks for "java 
> version" (jvm=`echo "$java_ver_output" | grep -A 1 'java version' ...) which 
> does not exist.
> I guess it should be replaced with jvm=`echo "$java_ver_output" | grep -A 1 
> '[openjdk|java] version' | awk 'NR==2 {print $1}'`






[jira] [Updated] (CASSANDRA-12793) invalid jvm type and architecture [cassandra-env.sh]

2018-05-08 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-12793:

Reviewer: mck

> invalid jvm type and architecture [cassandra-env.sh]
> 
>
> Key: CASSANDRA-12793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12793
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: ubuntu 16.04, openjdk 1.8.0_91
>Reporter: Ali Ebrahiminejad
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 3.11.x
>
> Attachments: 12793.patch
>
>
> In cassandra-env.sh the part that determines the type of JVM we'll be running 
> on doesn't provide the right answer for openjdk 1.8.0_91.
> value of java_ver_output is "openjdk version "1.8.0_91" OpenJDK Runtime 
> Environment (build 1.8.0_91-8u91-b14-3ubuntu1~16.04.1-b14) OpenJDK 64-Bit 
> Server VM (build 25.91-b14, mixed mode)", yet the command looks for "java 
> version" (jvm=`echo "$java_ver_output" | grep -A 1 'java version' ...) which 
> does not exist.
> I guess it should be replaced with jvm=`echo "$java_ver_output" | grep -A 1 
> '[openjdk|java] version' | awk 'NR==2 {print $1}'`






[jira] [Comment Edited] (CASSANDRA-12793) invalid jvm type and architecture [cassandra-env.sh]

2018-05-08 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468149#comment-16468149
 ] 

mck edited comment on CASSANDRA-12793 at 5/9/18 12:22 AM:
--

||Branch||uTest||dTest||
|[cassandra-3.11_12793|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_12793]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12793.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12793]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/549/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/549]|
|[trunk_12793|https://github.com/thelastpickle/cassandra/tree/mck/trunk_12793]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12793.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12793]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/550/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/550]|

(skipped aTest column, the java based tests really don't mean much here)


was (Author: michaelsembwever):
||Branch||uTest||dTest||
|[cassandra-3.11_12793|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_12793]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12793.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12793]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/546/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/549]|
|[trunk_12793|https://github.com/thelastpickle/cassandra/tree/mck/trunk_12793]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12793.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12793]|[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/547/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/550]|

(skipped aTest column, the java based tests really don't mean much here)

> invalid jvm type and architecture [cassandra-env.sh]
> 
>
> Key: CASSANDRA-12793
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12793
> Project: Cassandra
>  Issue Type: Bug
>  Components: Configuration
> Environment: ubuntu 16.04, openjdk 1.8.0_91
>Reporter: Ali Ebrahiminejad
>Assignee: Stefan Podkowinski
>Priority: Minor
> Fix For: 3.11.x
>
> Attachments: 12793.patch
>
>
> In cassandra-env.sh the part that determines the type of JVM we'll be running 
> on doesn't provide the right answer for openjdk 1.8.0_91.
> value of java_ver_output is "openjdk version "1.8.0_91" OpenJDK Runtime 
> Environment (build 1.8.0_91-8u91-b14-3ubuntu1~16.04.1-b14) OpenJDK 64-Bit 
> Server VM (build 25.91-b14, mixed mode)", yet the command looks for "java 
> version" (jvm=`echo "$java_ver_output" | grep -A 1 'java version' ...) which 
> does not exist.
> I guess it should be replaced with jvm=`echo "$java_ver_output" | grep -A 1 
> '[openjdk|java] version' | awk 'NR==2 {print $1}'`






[jira] [Commented] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-05-08 Thread Patrick Bannister (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468170#comment-16468170
 ] 

Patrick Bannister commented on CASSANDRA-14298:
---

The encoding problem is environmental. Python subprocesses use whatever 
encoding is returned by locale.getpreferredencoding(). On a Debian-based 
platform (such as Ubuntu) we can make that UTF-8 by setting LC_CTYPE='C.UTF-8'.

We could simply say that this environment setting is a prerequisite for the 
cqlsh_tests, but I think we can do one better:

 
{code:python}
class TestCqlsh(Tester):

    @classmethod
    def setUpClass(cls):
        cls._cached_driver_methods = monkeypatch_driver()
        # override environment locale setting to prefer UTF-8 encoding
        os.environ['LC_CTYPE'] = 'C.UTF-8'

    @classmethod
    def tearDownClass(cls):
        unmonkeypatch_driver(cls._cached_driver_methods)

    def setUp(self):
        # the cluster is already configured, so we have to override
        # its environment locale too
        self.cluster.set_environment_variable('LC_CTYPE', 'C.UTF-8')
{code}
 

I have to admit I don't know what this will do on Windows, but I've tested it 
on my Ubuntu environment and it works fine.

I'll issue a new patch with these additions. I'll also re-post the current 
patch as the "old" patch, if you'd prefer to stick with it and just declare 
that this environment variable setting is a prerequisite to running these tests.
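A minimal standalone sketch (assuming a glibc system where the C.UTF-8 locale is installed, as on Debian/Ubuntu) showing that the encoding a child process prefers follows the environment it is launched with:

```python
import os
import subprocess
import sys

# Launch a child Python interpreter with LC_CTYPE overridden and ask it
# which encoding locale.getpreferredencoding() reports; on Debian-based
# systems with C.UTF-8 installed this should be a UTF-8 codec name.
env = dict(os.environ, LC_CTYPE="C.UTF-8")
result = subprocess.run(
    [sys.executable, "-c",
     "import locale; print(locale.getpreferredencoding(False))"],
    env=env, capture_output=True, text=True)
print(result.stdout.strip())
```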

 

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: CASSANDRA-14298.txt, CASSANDRA-14298_old.txt, 
> cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Comment Edited] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-05-08 Thread Patrick Bannister (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468170#comment-16468170
 ] 

Patrick Bannister edited comment on CASSANDRA-14298 at 5/9/18 12:41 AM:


The encoding problem is environmental. Python subprocesses use whatever 
encoding is returned by locale.getpreferredencoding(). On a Debian-based 
platform (such as Ubuntu) we can make that UTF-8 by setting LC_CTYPE='C.UTF-8'.

We could simply say that this environment setting is a prerequisite for the 
cqlsh_tests, but I think we can do one better:

 
{code:python}
class TestCqlsh(Tester):

    @classmethod
    def setUpClass(cls):
        cls._cached_driver_methods = monkeypatch_driver()
        # override environment locale setting to prefer UTF-8 encoding
        os.environ['LC_CTYPE'] = 'C.UTF-8'

    @classmethod
    def tearDownClass(cls):
        unmonkeypatch_driver(cls._cached_driver_methods)
{code}
 

I have to admit I don't know what this will do on Windows, but I've tested it 
on my Ubuntu environment and it works fine.

I'll issue a new patch with these additions. I'll also re-post the current 
patch as the "old" patch, if you'd prefer to stick with it and just declare 
that this environment variable setting is a prerequisite to running these tests.

 


was (Author: ptbannister):
The encoding problem is environmental. Python subprocesses use whatever 
encoding is returned by locale.getpreferredencoding(). On a Debian-based 
platform (such as Ubuntu) we can make that UTF-8 by setting LC_CTYPE='C.UTF-8'.

We could simply say that this environment setting is a prerequisite for the 
cqlsh_tests, but I think we can do one better:

 
{code:python}
class TestCqlsh(Tester):

    @classmethod
    def setUpClass(cls):
        cls._cached_driver_methods = monkeypatch_driver()
        # override environment locale setting to prefer UTF-8 encoding
        os.environ['LC_CTYPE'] = 'C.UTF-8'

    @classmethod
    def tearDownClass(cls):
        unmonkeypatch_driver(cls._cached_driver_methods)

    def setUp(self):
        # the cluster is already configured, so we have to override
        # its environment locale too
        self.cluster.set_environment_variable('LC_CTYPE', 'C.UTF-8')
{code}
 

I have to admit I don't know what this will do on Windows, but I've tested it 
on my Ubuntu environment and it works fine.

I'll issue a new patch with these additions. I'll also re-post the current 
patch as the "old" patch, if you'd prefer to stick with it and just declare 
that this environment variable setting is a prerequisite to running these tests.

 

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: CASSANDRA-14298.txt, CASSANDRA-14298_old.txt, 
> cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-05-08 Thread Patrick Bannister (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468170#comment-16468170
 ] 

Patrick Bannister edited comment on CASSANDRA-14298 at 5/9/18 12:41 AM:


The encoding problem is environmental. Python subprocesses use whatever encoding 
is returned by locale.getpreferredencoding(). We can make them prefer UTF-8 on a 
Debian-based platform (such as Ubuntu) by setting LC_CTYPE='C.UTF-8'.

We could simply say that this environment setting is a prerequisite for the 
cqlsh_tests, but I think we can do one better:

 
{code:java}
class TestCqlsh(Tester):

    @classmethod
    def setUpClass(cls):
        cls._cached_driver_methods = monkeypatch_driver()
        # override environment locale setting to prefer UTF-8 encoding
        os.environ['LC_CTYPE'] = 'C.UTF-8'
{code}
 

I have to admit I don't know what this will do on Windows, but I've tested it 
on my Ubuntu environment and it works fine.

I'll issue a new patch with these additions. I'll also re-post the current 
patch as the "old" patch, if you'd prefer to stick with it and just declare 
that this environment variable setting is a prerequisite to running these tests.
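For anyone wanting to sanity-check this locally, a minimal sketch (assuming a 
glibc platform such as Debian/Ubuntu where the C.UTF-8 locale is available) is 
to ask a child Python process which encoding it prefers, with and without the 
override:

```python
import os
import subprocess
import sys

# A child Python process reports the encoding it would prefer for its I/O,
# which is derived from the locale environment it inherits.
probe = [sys.executable, '-c',
         'import locale; print(locale.getpreferredencoding())']

inherited = subprocess.check_output(probe).decode().strip()
forced = subprocess.check_output(
    probe, env=dict(os.environ, LC_CTYPE='C.UTF-8')).decode().strip()

print('inherited locale encoding:', inherited)
print('with LC_CTYPE=C.UTF-8:    ', forced)
```

With the override in place the child should report UTF-8; without it, the 
result depends on the parent environment, which is exactly why the 
builds.apache.org runs can behave differently from developer machines.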

 


was (Author: ptbannister):
The encoding problem is environmental. Python subprocesses use whatever encoding 
is returned by locale.getpreferredencoding(). We can make them prefer UTF-8 on a 
Debian-based platform (such as Ubuntu) by setting LC_CTYPE='C.UTF-8'.

We could simply say that this environment setting is a prerequisite for the 
cqlsh_tests, but I think we can do one better:

 
{code:java}
class TestCqlsh(Tester):

    @classmethod
    def setUpClass(cls):
        cls._cached_driver_methods = monkeypatch_driver()
        # override environment locale setting to prefer UTF-8 encoding
        os.environ['LC_CTYPE'] = 'C.UTF-8'

    @classmethod
    def tearDownClass(cls):
        unmonkeypatch_driver(cls._cached_driver_methods)
{code}
 

I have to admit I don't know what this will do on Windows, but I've tested it 
on my Ubuntu environment and it works fine.

I'll issue a new patch with these additions. I'll also re-post the current 
patch as the "old" patch, if you'd prefer to stick with it and just declare 
that this environment variable setting is a prerequisite to running these tests.

 

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: CASSANDRA-14298.txt, CASSANDRA-14298_old.txt, 
> cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Updated] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-05-08 Thread Patrick Bannister (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Bannister updated CASSANDRA-14298:
--
Attachment: (was: CASSANDRA-14298.txt)

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Updated] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-05-08 Thread Patrick Bannister (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Bannister updated CASSANDRA-14298:
--
Attachment: (was: CASSANDRA-14298_old.txt)

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Updated] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-05-08 Thread Patrick Bannister (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Bannister updated CASSANDRA-14298:
--
Status: Open  (was: Patch Available)

Running some final checks before posting the updated patch.

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Updated] (CASSANDRA-14365) Commit log replay failure for static columns with collections in clustering keys

2018-05-08 Thread Kurt Greaves (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Greaves updated CASSANDRA-14365:
-
Status: Patch Available  (was: Open)

> Commit log replay failure for static columns with collections in clustering 
> keys
> 
>
> Key: CASSANDRA-14365
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14365
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Vincent White
>Assignee: Vincent White
>Priority: Major
>
> In the old storage engine, static cells with a collection as part of the 
> clustering key fail to validate because a 0 byte collection (like in the cell 
> name of a static cell) isn't valid.
> To reproduce:
> 1.
> {code:java}
> CREATE TABLE test.x (
>     id int,
>     id2 frozen<set<int>>,
>     st int static,
>     PRIMARY KEY (id, id2)
> );
> INSERT INTO test.x (id, st) VALUES (1, 2);
> {code}
> 2. Kill the Cassandra process
> 3. Restart Cassandra to replay the commit log
> Outcome:
> {noformat}
> ERROR [main] 2018-04-05 04:58:23,741 JVMStabilityInspector.java:99 - Exiting 
> due to error while processing commit log during initialization.
> org.apache.cassandra.db.commitlog.CommitLogReplayer$CommitLogReplayException: 
> Unexpected error deserializing mutation; saved to 
> /tmp/mutation3825739904516830950dat.  This may be caused by replaying a 
> mutation against a table with the same name but incompatible schema.  
> Exception follows: org.apache.cassandra.serializers.MarshalException: Not 
> enough bytes to read a set
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.handleReplayError(CommitLogReplayer.java:638)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replayMutation(CommitLogReplayer.java:565)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.replaySyncSection(CommitLogReplayer.java:517)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:397)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:143)
>  [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:181) 
> [main/:na]
> at 
> org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:161) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:284) 
> [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:533)
>  [main/:na]
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:642) 
> [main/:na]
> {noformat}
> I haven't investigated whether these cells failing to validate causes other, 
> more subtle issues elsewhere in the code, but I believe the fix for this is to 
> check for 0-byte-length collections and accept them as valid, as we do with 
> other types.
> I haven't had a chance for any extensive testing but this naive patch seems 
> to have the desired effect. [2.2 PoC 
> Patch|https://github.com/vincewhite/cassandra/commits/zero_length_collection]






[jira] [Commented] (CASSANDRA-14385) Fix Some Potential NPE

2018-05-08 Thread lujie (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468207#comment-16468207
 ] 

lujie commented on CASSANDRA-14385:
---

Ping. Hoping for a review.

> Fix Some Potential NPE 
> ---
>
> Key: CASSANDRA-14385
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14385
> Project: Cassandra
>  Issue Type: Bug
>Reporter: lujie
>Priority: Major
> Attachments: CA-14385_1.patch
>
>
> We have developed a static analysis tool 
> [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find potential NPEs. 
> Our analysis shows that some callees may return null in corner cases (e.g. 
> node crash, IO exception); some of their callers have a _!=null_ check, but 
> some do not. In this issue we post a patch which adds !=null checks modeled 
> on the existing ones. For example:
> Callee Schema#getView may return null:
> {code:java}
> public ViewMetadata getView(String keyspaceName, String viewName)
> {
> assert keyspaceName != null;
> KeyspaceMetadata ksm = keyspaces.getNullable(keyspaceName);
> return (ksm == null) ? null : ksm.views.getNullable(viewName);//may 
> return null
> }
> {code}
> It has 4 callers, 3 of which have a !=null check, e.g. its caller 
> MigrationManager#announceViewDrop:
> {code:java}
> public static void announceViewDrop(String ksName, String viewName, boolean 
> announceLocally) throws ConfigurationException
> {
>ViewMetadata view = Schema.instance.getView(ksName, viewName);
> if (view == null)//null pointer checker
> throw new ConfigurationException(String.format("Cannot drop non 
> existing materialized view '%s' in keyspace '%s'.", viewName, ksName));
>KeyspaceMetadata ksm = Schema.instance.getKeyspaceMetadata(ksName);
>logger.info("Drop table '{}/{}'", view.keyspace, view.name);
>announce(SchemaKeyspace.makeDropViewMutation(ksm, view, 
> FBUtilities.timestampMicros()), announceLocally);
> }
> {code}
> but the caller MigrationManager#announceMigration does not. 
> We add a !=null check modeled on MigrationManager#announceViewDrop:
> {code:java}
> if (current == null)
> throw new InvalidRequestException("There is no materialized view in 
> keyspace " + keyspace());
> {code}
> But since we are not very familiar with Cassandra, we hope an expert can 
> review it.
> Thanks
>  






[jira] [Commented] (CASSANDRA-12151) Audit logging for database activity

2018-05-08 Thread Vinay Chella (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468212#comment-16468212
 ] 

Vinay Chella commented on CASSANDRA-12151:
--

Thanks, [~jasobrown], for the cleanup and for addressing [~eperott]'s comments.
{quote}When using logback as backend, would it make sense to mark audit records 
with a specific appender name such as "AUDIT" rather than 
"FileAuditLoggerAppender". That way we can easily tell regular log messages 
from audit log messages.
{quote}
Yes, certainly. However, the AuditLog feature does not ship with appender 
configurations. I saw that "FileAuditLoggerAppender" was being referenced in 
the documentation; I have updated it and pushed.
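For illustration only (the feature does not ship appender configuration, and 
the appender name, file paths and logger name below are assumptions, not part 
of the patch), a dedicated AUDIT appender in logback.xml could look roughly 
like:

```xml
<!-- Hypothetical sketch: the logger name "org.apache.cassandra.audit" and
     the file locations are illustrative assumptions. -->
<appender name="AUDIT" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>${cassandra.logdir}/audit/audit.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
    <fileNamePattern>${cassandra.logdir}/audit/audit.log.%i.zip</fileNamePattern>
    <minIndex>1</minIndex>
    <maxIndex>20</maxIndex>
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
    <maxFileSize>50MB</maxFileSize>
  </triggeringPolicy>
  <encoder>
    <pattern>%-5level [%thread] %date{ISO8601} %msg%n</pattern>
  </encoder>
</appender>

<logger name="org.apache.cassandra.audit" additivity="false" level="INFO">
  <appender-ref ref="AUDIT"/>
</logger>
```

Naming the appender "AUDIT" and setting additivity to false keeps audit records 
out of the regular system log, which is the separation being asked for above.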
{quote}On a similar topic, rather than creating the AuditLogEntryCategory type, 
the mapping in AuditLogEntryType and the keyspace/scope of (I)AuditLogContext, 
would it make sense to use the existing Permission type (SELECT, MODIFY, 
CREATE...) and IResource (Data, Role, Function...). We could create a new 
resource type to represent Connections (like connection/native, 
connection/thrift, connection/jmx) which could be used for managing white-lists 
for authentication.
{quote}
I don't think it is a good idea to piggyback on the Permission type and 
IResource to derive the AuditLogType; that binds those two features tightly 
together and feels like a hack rather than a clean implementation. Tight 
coupling would also make future extensions to either feature hard to manage, 
and we would likely end up separating them eventually. So I am not sure it is 
a good idea to piggyback on two other features to meet the AuditLog needs. 
\\
{quote}Sure, I understand we seek to close this ticket. I'm just a bit 
concerned with the timing. If this ticket is merged as is and we take a cut for 
4.0, then I assume we will have to stick to this way of configuring audit logs 
for some time.
{quote}
CQL grammar for managing audit log configuration is an interesting idea. 
Considering the changes needed at this point, and the hierarchical and 
composite requirements that come with it, I agree with @Jason on exploring it 
as a followup. Please feel free to create a followup JIRA for this.

> Audit logging for database activity
> ---
>
> Key: CASSANDRA-12151
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12151
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: stefan setyadi
>Assignee: Vinay Chella
>Priority: Major
> Fix For: 4.x
>
> Attachments: 12151.txt, CASSANDRA_12151-benchmark.html, 
> DesignProposal_AuditingFeature_ApacheCassandra_v1.docx
>
>
> we would like a way to enable cassandra to log database activity being done 
> on our server.
> It should show username, remote address, timestamp, action type, keyspace, 
> column family, and the query statement.
> it should also be able to log connection attempt and changes to the 
> user/roles.
> I was thinking of making a new keyspace and insert an entry for every 
> activity that occurs.
> Then It would be possible to query for specific activity or a query targeting 
> a specific keyspace and column family.






[jira] [Commented] (CASSANDRA-10789) Allow DBAs to kill individual client sessions without bouncing JVM

2018-05-08 Thread Kurt Greaves (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468217#comment-16468217
 ] 

Kurt Greaves commented on CASSANDRA-10789:
--

bq. While I am underwhelmed by how this is supposed to work, I don't think it 
not being persistent and/or queryable introduces any technical debt. It does 
mean that if we release 4.0 without it then 4.0 will never have it, which is 
kind of a pain. It seems like we are pushing the complexity of maintaining 
these lists to operational tools outside the database, which are not open 
source or shipped with the database.

The use case here is operators diagnosing client issues and cluster recovery. 
We're not blacklisting for extended periods; it's simply to help stop bad 
clients from killing the cluster. Persistent blacklists can easily be achieved 
with firewall rules, and I'd say it's not necessary to manage a blacklist in 
C* at all (extra complexity for a feature that already exists in all operating 
systems). This is simply a temporary measure that operators can use when 
diagnosing bad clients, giving them some time to communicate the problem 
before it breaks things. I'm sure it'd also help with client testing.

I think persistence is just over-complicating the problem. You'd _always_ use a 
firewall for any form of persistent blacklist based on IP.

> Allow DBAs to kill individual client sessions without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Damien Stevenson
>Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>
>
> In production, there could be hundreds of clients connected to a Cassandra 
> cluster (maybe even from different applications), and if they use DataStax 
> Java Driver, each client will establish at least one TCP connection to a 
> Cassandra server (see 
> https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all 
> normal and at any given time, you can indeed see hundreds of ESTABLISHED 
> connections to port 9042 on a C* server (from netstat -na). The problem is 
> that sometimes when a C* cluster is under heavy load, when the DBA identifies 
> some client session that sends abusive amount of traffic to the C* server and 
> would like to stop it, they would like a lightweight approach rather than 
> shutting down the JVM or rolling restart the whole cluster to kill all 
> hundreds of connections in order to kill a single client session. If the DBA 
> had root privilege, they would have been able to do something at the OS 
> network level to achieve the same goal but oftentimes enterprise DBA role is 
> separate from OS sysadmin role, so the DBAs usually don't have that privilege.
> This is especially helpful when you have a multi-tenant C* cluster and you 
> want to have the impact for handling such client to be minimal to the other 
> applications. This feature (killing individual session) seems to be a common 
> feature in other databases (regardless of whether the client has some 
> reconnect logic or not). It could be implemented as a JMX MBean method and 
> exposed through nodetool to the DBAs.






[jira] [Updated] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-05-08 Thread Patrick Bannister (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Bannister updated CASSANDRA-14298:
--
Attachment: CASSANDRA-14298-old.txt

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: CASSANDRA-14298-old.txt, CASSANDRA-14298.txt, 
> cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Updated] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-05-08 Thread Patrick Bannister (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Bannister updated CASSANDRA-14298:
--
Attachment: CASSANDRA-14298.txt

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: CASSANDRA-14298-old.txt, CASSANDRA-14298.txt, 
> cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Updated] (CASSANDRA-14298) cqlshlib tests broken on b.a.o

2018-05-08 Thread Patrick Bannister (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Bannister updated CASSANDRA-14298:
--
Status: Patch Available  (was: In Progress)

Posted a new patch that includes test fixture setup in 
cqlsh_tests/cqlsh_tests.py to set environment variable LC_CTYPE='C.UTF-8'.

> cqlshlib tests broken on b.a.o
> --
>
> Key: CASSANDRA-14298
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14298
> Project: Cassandra
>  Issue Type: Bug
>  Components: Build, Testing
>Reporter: Stefan Podkowinski
>Assignee: Patrick Bannister
>Priority: Major
>  Labels: cqlsh, dtest
> Attachments: CASSANDRA-14298-old.txt, CASSANDRA-14298.txt, 
> cqlsh_tests_notes.md
>
>
> It appears that cqlsh-tests on builds.apache.org on all branches stopped 
> working since we removed nosetests from the system environment. See e.g. 
> [here|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-trunk-cqlsh-tests/458/cython=no,jdk=JDK%201.8%20(latest),label=cassandra/console].
>  Looks like we either have to make nosetests available again or migrate to 
> pytest as we did with dtests. Giving pytest a quick try resulted in many 
> errors locally, but I haven't inspected them in detail yet. 






[jira] [Updated] (CASSANDRA-10789) Allow DBAs to kill individual client sessions from certain IP(s) and temporarily block subsequent connections without bouncing JVM

2018-05-08 Thread Wei Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Deng updated CASSANDRA-10789:
-
Summary: Allow DBAs to kill individual client sessions from certain IP(s) 
and temporarily block subsequent connections without bouncing JVM  (was: Allow 
DBAs to kill individual client sessions without bouncing JVM)

> Allow DBAs to kill individual client sessions from certain IP(s) and 
> temporarily block subsequent connections without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Damien Stevenson
>Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>
>
> In production, there could be hundreds of clients connected to a Cassandra 
> cluster (maybe even from different applications), and if they use DataStax 
> Java Driver, each client will establish at least one TCP connection to a 
> Cassandra server (see 
> https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all 
> normal and at any given time, you can indeed see hundreds of ESTABLISHED 
> connections to port 9042 on a C* server (from netstat -na). The problem is 
> that sometimes when a C* cluster is under heavy load, when the DBA identifies 
> some client session that sends abusive amount of traffic to the C* server and 
> would like to stop it, they would like a lightweight approach rather than 
> shutting down the JVM or rolling restart the whole cluster to kill all 
> hundreds of connections in order to kill a single client session. If the DBA 
> had root privilege, they would have been able to do something at the OS 
> network level to achieve the same goal but oftentimes enterprise DBA role is 
> separate from OS sysadmin role, so the DBAs usually don't have that privilege.
> This is especially helpful when you have a multi-tenant C* cluster and you 
> want to have the impact for handling such client to be minimal to the other 
> applications. This feature (killing individual session) seems to be a common 
> feature in other databases (regardless of whether the client has some 
> reconnect logic or not). It could be implemented as a JMX MBean method and 
> exposed through nodetool to the DBAs.






[jira] [Updated] (CASSANDRA-10789) Allow DBAs to kill individual client sessions from certain IP(s) and temporarily block subsequent connections without bouncing JVM

2018-05-08 Thread Wei Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Deng updated CASSANDRA-10789:
-
Description: 
In production, there could be hundreds of clients connected to a Cassandra 
cluster (maybe even from different applications), and if they use DataStax Java 
Driver, each client will establish at least one TCP connection to a Cassandra 
server (see https://datastax.github.io/java-driver/2.1.9/features/pooling/). 
This is all normal and at any given time, you can indeed see hundreds of 
ESTABLISHED connections to port 9042 on a C* server (from netstat -na). The 
problem is that sometimes when a C* cluster is under heavy load, when the DBA 
identifies some client session that sends abusive amount of traffic to the C* 
server and would like to stop it, they would like a lightweight approach rather 
than shutting down the JVM or rolling restart the whole cluster to kill all 
hundreds of connections in order to kill a single client session. If the DBA 
had root privilege, they would have been able to do something at the OS network 
level to achieve the same goal but oftentimes enterprise DBA role is separate 
from OS sysadmin role, so the DBAs usually don't have that privilege.

This is especially helpful when you have a multi-tenant C* cluster and you want 
to have the impact for handling such client to be minimal to the other 
applications. This feature (killing individual session) seems to be a common 
feature in other databases (regardless of whether the client has some reconnect 
logic or not). It could be implemented as a JMX MBean method and exposed 
through nodetool to the DBAs.

Note that due to the CQL driver's automated reconnection, simply killing the 
currently connected client session will not work well, so the JMX parameter 
should be an IP address or a list of IP addresses, so that the Cassandra 
server can terminate existing connections from that IP and block future 
connection attempts from that IP until the JVM is restarted.

  was:
In production, there could be hundreds of clients connected to a Cassandra 
cluster (maybe even from different applications), and if they use DataStax Java 
Driver, each client will establish at least one TCP connection to a Cassandra 
server (see https://datastax.github.io/java-driver/2.1.9/features/pooling/). 
This is all normal and at any given time, you can indeed see hundreds of 
ESTABLISHED connections to port 9042 on a C* server (from netstat -na). The 
problem is that sometimes, when a C* cluster is under heavy load and the DBA 
identifies a client session that sends an abusive amount of traffic to the C* 
server, they would like a lightweight way to stop it, rather than shutting down 
the JVM or rolling-restarting the whole cluster (killing all hundreds of 
connections) just to kill a single client session. If the DBA had root 
privileges, they could achieve the same goal at the OS network level, but 
oftentimes the enterprise DBA role is separate from the OS sysadmin role, so 
DBAs usually don't have that privilege.

This is especially helpful when you have a multi-tenant C* cluster and you want 
handling such a client to have minimal impact on the other applications. This 
feature (killing an individual session) seems to be common in other databases 
(regardless of whether the client has reconnect logic or not). It could be 
implemented as a JMX MBean method and exposed to the DBAs through nodetool.


> Allow DBAs to kill individual client sessions from certain IP(s) and 
> temporarily block subsequent connections without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Damien Stevenson
>Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>
>
> In production, there could be hundreds of clients connected to a Cassandra 
> cluster (maybe even from different applications), and if they use DataStax 
> Java Driver, each client will establish at least one TCP connection to a 
> Cassandra server (see 
> https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all 
> normal and at any given time, you can indeed see hundreds of ESTABLISHED 
> connections to port 9042 on a C* server (from netstat -na). The problem is 
> that sometimes when a C* cluster is under heavy load, when the DBA identifies 
> some client session that sends an abusive amount of traffic to the C* server 
> and would like to stop it, they would like

[jira] [Updated] (CASSANDRA-13357) A possible NPE in nodetool getendpoints

2018-05-08 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-13357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-13357:

Summary: A possible NPE in nodetool getendpoints  (was: A possible NPE)

> A possible NPE in nodetool getendpoints
> ---
>
> Key: CASSANDRA-13357
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13357
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tools
>Reporter: Hao Zhong
>Assignee: Hao Zhong
>Priority: Major
> Fix For: 4.x
>
> Attachments: cassandra.patch
>
>
> The GetEndpoints.execute method has the following code:
> {code:title=GetEndpoints.java|borderStyle=solid}
>     List<InetAddress> endpoints = probe.getEndpoints(ks, table, key);
>     for (InetAddress endpoint : endpoints)
>     {
>         System.out.println(endpoint.getHostAddress());
>     }
> {code}
> This code can throw an NPE. A similar bug was fixed in CASSANDRA-8950. The 
> buggy code is:
> {code:title=NodeCmd.java|borderStyle=solid}
>     List<InetAddress> endpoints = this.probe.getEndpoints(keySpace, cf, key);
>     for (InetAddress anEndpoint : endpoints)
>     {
>         output.println(anEndpoint.getHostAddress());
>     }
> The fixed code is:
> {code:title=NodeCmd.java|borderStyle=solid}
> try
> {
>     List<InetAddress> endpoints = probe.getEndpoints(keySpace, cf, key);
>     for (InetAddress anEndpoint : endpoints)
>         output.println(anEndpoint.getHostAddress());
> }
> catch (IllegalArgumentException ex)
> {
>     output.println(ex.getMessage());
>     probe.failed();
> }
> {code}
> The GetEndpoints.execute method shall be modified as CASSANDRA-8950 does.
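
For illustration, the GetEndpoints fix could mirror the NodeCmd one along these lines. This is a sketch only: NodeProbeStub is a stand-in for the real NodeProbe, and it returns a list instead of printing, so it can be exercised in isolation:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

class GetEndpointsSketch
{
    // Stand-in for NodeProbe: throws IllegalArgumentException on bad input,
    // the same failure mode the CASSANDRA-8950 fix guards against.
    static class NodeProbeStub
    {
        List<String> getEndpoints(String ks, String table, String key)
        {
            if (ks == null || table == null)
                throw new IllegalArgumentException("unknown keyspace/table");
            return Arrays.asList("127.0.0.1");
        }
    }

    // Report the error and return an empty list instead of letting the
    // command die with an unhandled exception.
    static List<String> execute(NodeProbeStub probe, String ks, String table, String key)
    {
        try
        {
            return probe.getEndpoints(ks, table, key);
        }
        catch (IllegalArgumentException ex)
        {
            System.out.println(ex.getMessage());
            return Collections.emptyList();
        }
    }
}
```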



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-12924) GraphiteReporter does not reconnect if graphite restarts

2018-05-08 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-12924:

   Resolution: Duplicate
Fix Version/s: 4.0
       Status: Resolved  (was: Patch Available)

This was fixed in CASSANDRA-13648.
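
For context, the reconnect-on-failure behaviour that the fix introduces can be sketched roughly as follows. This is not the actual patch: Sender is a stand-in for the metrics library's Graphite sender, and the logic simply drops the broken socket so the next report attempts a fresh connection:

```java
import java.io.IOException;

class ReconnectingReporter
{
    // Stand-in for com.codahale.metrics.graphite.Graphite.
    interface Sender
    {
        void connect() throws IOException;
        void send(String metric) throws IOException;
        void close() throws IOException;
        boolean isConnected();
    }

    private final Sender sender;

    ReconnectingReporter(Sender sender)
    {
        this.sender = sender;
    }

    void report(String metric)
    {
        try
        {
            if (!sender.isConnected())
                sender.connect();          // re-establish after a previous failure
            sender.send(metric);
        }
        catch (IOException e)
        {
            // Drop the broken socket so the next report() reconnects,
            // instead of hitting "Broken pipe" forever.
            try { sender.close(); }
            catch (IOException ignored) {}
        }
    }
}
```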

> GraphiteReporter does not reconnect if graphite restarts
> 
>
> Key: CASSANDRA-12924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12924
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Priority: Major
> Fix For: 4.0
>
>
> Seems like GraphiteReporter does not reconnect after graphite is restarted. 
> The consequence is complete loss of reported metrics until Cassandra 
> restarts. Logs show this every minute:
> {noformat}
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:179 - Unable to report to Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:183 - Error closing Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar

[jira] [Commented] (CASSANDRA-14415) Performance regression in queries for distinct keys

2018-05-08 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468256#comment-16468256
 ] 

Jeff Jirsa commented on CASSANDRA-14415:


Is this patch useful in the 3.0.x branch without the fix from CASSANDRA-10657?



> Performance regression in queries for distinct keys
> ---
>
> Key: CASSANDRA-14415
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14415
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Core
>Reporter: Samuel Klock
>Assignee: Samuel Klock
>Priority: Major
>  Labels: performance
> Fix For: 3.0.x, 3.11.x, 4.x
>
>
> Running Cassandra 3.0.16, we observed a major performance regression 
> affecting {{SELECT DISTINCT keys}}-style queries against certain tables.  
> Based on some investigation (guided by some helpful feedback from Benjamin on 
> the dev list), we tracked the regression down to two problems.
> * One is that Cassandra was reading more data from disk than was necessary to 
> satisfy the query.  This was fixed under CASSANDRA-10657 in a later 3.x 
> release.
> * If the fix for CASSANDRA-10657 is incorporated, the other is this code 
> snippet in {{RebufferingInputStream}}:
> {code:java}
>     @Override
>     public int skipBytes(int n) throws IOException
>     {
>         if (n < 0)
>             return 0;
>         int requested = n;
>         int position = buffer.position(), limit = buffer.limit(), remaining;
>         while ((remaining = limit - position) < n)
>         {
>             n -= remaining;
>             buffer.position(limit);
>             reBuffer();
>             position = buffer.position();
>             limit = buffer.limit();
>             if (position == limit)
>                 return requested - n;
>         }
>         buffer.position(position + n);
>         return requested;
>     }
> {code}
> The gist of it is that to skip bytes, the stream needs to read those bytes 
> into memory then throw them away.  In our tests, we were spending a lot of 
> time in this method, so it looked like the chief drag on performance.
> We noticed that the subclass of {{RebufferingInputStream}} in use for our 
> queries, {{RandomAccessReader}} (over compressed sstables), implements a 
> {{seek()}} method.  Overriding {{skipBytes()}} in it to use {{seek()}} 
> instead was sufficient to fix the performance regression.
> The performance difference is significant for tables with large values.  It's 
> straightforward to evaluate with very simple key-value tables, e.g.:
> {{CREATE TABLE testtable (key TEXT PRIMARY KEY, value BLOB);}}
> We did some basic experimentation with the following variations (all in a 
> single-node 3.11.2 cluster with off-the-shelf settings running on a dev 
> workstation):
> * small values (1 KB, 100,000 entries), somewhat larger values (25 KB, 10,000 
> entries), and much larger values (1 MB, 10,000 entries);
> * compressible data (a single byte repeated) and uncompressible data (output 
> from {{openssl rand $bytes}}); and
> * with and without sstable compression.  (With compression, we use 
> Cassandra's defaults.)
> The difference is most conspicuous for tables with large, uncompressible data 
> and sstable decompression (which happens to describe the use case that 
> triggered our investigation).  It is smaller but still readily apparent for 
> tables with effective compression.  For uncompressible data without 
> compression enabled, there is no appreciable difference.
> Here's what the performance looks like without our patch for the 1-MB entries 
> (times in seconds, five consecutive runs for each data set, all exhausting 
> the results from a {{SELECT DISTINCT key FROM ...}} query with a page size 
> of 24):
> {noformat}
> working on compressible
> 5.21180510521
> 5.10270500183
> 5.22311806679
> 4.6732840538
> 4.84219098091
> working on uncompressible_uncompressed
> 55.0423607826
> 0.769015073776
> 0.850513935089
> 0.713396072388
> 0.62596988678
> working on uncompressible
> 413.292617083
> 231.345913887
> 449.524993896
> 425.135111094
> 243.469946861
> {noformat}
> and with the fix:
> {noformat}
> working on compressible
> 2.86733293533
> 1.24895811081
> 1.108907938
> 1.12742400169
> 1.04647302628
> working on uncompressible_uncompressed
> 56.4146180153
> 0.895509958267
> 0.922824144363
> 0.772884130478
> 0.731923818588
> working on uncompressible
> 64.4587619305
> 1.81325793266
> 1.52577018738
> 1.41769099236
> 1.60442209244
> {noformat}
> The long initial runs for the uncompressible data presumably come from 
> repeatedly hitting the disk.  In contrast to the runs without the fix, the 
> initial runs seem to be effective at warming the page cache (as lots of data 
> is skipped, so the data that's read can fit in memory), s
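
The seek()-based skipBytes() idea described above can be illustrated with a toy reader. SeekableReader is a stand-in for RandomAccessReader (not the actual patch): instead of reading and discarding the skipped bytes, it moves the position directly:

```java
// Toy model of skipping via seek(): O(1) position move rather than
// reading-and-discarding n bytes through the buffer.
class SeekableReader
{
    private final byte[] data;
    private int position = 0;

    SeekableReader(byte[] data)
    {
        this.data = data;
    }

    void seek(int pos)
    {
        position = Math.min(pos, data.length);   // clamp at end of data
    }

    int position()
    {
        return position;
    }

    // Returns the number of bytes actually skipped, like DataInput.skipBytes.
    int skipBytes(int n)
    {
        if (n <= 0)
            return 0;
        int target = Math.min(position + n, data.length);
        int skipped = target - position;
        seek(target);
        return skipped;
    }
}
```

The key property, as in the ticket, is that the cost of a skip no longer depends on how many bytes are skipped (or on decompressing them).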

[jira] [Reopened] (CASSANDRA-12924) GraphiteReporter does not reconnect if graphite restarts

2018-05-08 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck reopened CASSANDRA-12924:
-
  Assignee: mck

> GraphiteReporter does not reconnect if graphite restarts
> 
>
> Key: CASSANDRA-12924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12924
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Assignee: mck
>Priority: Major
> Fix For: 4.0
>
>
> Seems like GraphiteReporter does not reconnect after graphite is restarted. 
> The consequence is complete loss of reported metrics until Cassandra 
> restarts. Logs show this every minute:
> {noformat}
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:179 - Unable to report to Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:183 - Error closing Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.j

[jira] [Commented] (CASSANDRA-12924) GraphiteReporter does not reconnect if graphite restarts

2018-05-08 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468257#comment-16468257
 ] 

mck commented on CASSANDRA-12924:
-

Re-opening. It makes sense to back-port it to 3.11.

> GraphiteReporter does not reconnect if graphite restarts
> 
>
> Key: CASSANDRA-12924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12924
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Priority: Major
> Fix For: 4.0
>
>
> Seems like GraphiteReporter does not reconnect after graphite is restarted. 
> The consequence is complete loss of reported metrics until Cassandra 
> restarts. Logs show this every minute:
> {noformat}
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:179 - Unable to report to Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:183 - Error closing Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> jav

[jira] [Updated] (CASSANDRA-12924) GraphiteReporter does not reconnect if graphite restarts

2018-05-08 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-12924:

Status: Patch Available  (was: Reopened)

> GraphiteReporter does not reconnect if graphite restarts
> 
>
> Key: CASSANDRA-12924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12924
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Assignee: mck
>Priority: Major
> Fix For: 4.0
>
>
> Seems like GraphiteReporter does not reconnect after graphite is restarted. 
> The consequence is complete loss of reported metrics until Cassandra 
> restarts. Logs show this every minute:
> {noformat}
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:179 - Unable to report to Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:183 - Error closing Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.Executors$RunnableA

[jira] [Commented] (CASSANDRA-12924) GraphiteReporter does not reconnect if graphite restarts

2018-05-08 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468268#comment-16468268
 ] 

mck commented on CASSANDRA-12924:
-

|| Branch || uTest || aTest || dTest ||
|[cassandra-3.11_12924|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_12924]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12924.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12924]|
 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/20/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/20]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/551/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/551]
 |

[~jjirsa], any objections to backporting this? It's a stabilisation issue.

> GraphiteReporter does not reconnect if graphite restarts
> 
>
> Key: CASSANDRA-12924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12924
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Assignee: mck
>Priority: Major
> Fix For: 4.0
>
>
> Seems like GraphiteReporter does not reconnect after graphite is restarted. 
> The consequence is complete loss of reported metrics until Cassandra 
> restarts. Logs show this every minute:
> {noformat}
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:179 - Unable to report to Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]

[jira] [Updated] (CASSANDRA-12924) GraphiteReporter does not reconnect if graphite restarts

2018-05-08 Thread mck (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mck updated CASSANDRA-12924:

Fix Version/s: (was: 4.0)
   3.11.3

> GraphiteReporter does not reconnect if graphite restarts
> 
>
> Key: CASSANDRA-12924
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12924
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefano Ortolani
>Assignee: mck
>Priority: Major
> Fix For: 3.11.3
>
>
> It seems like GraphiteReporter does not reconnect after Graphite is restarted. 
> The consequence is a complete loss of reported metrics until Cassandra 
> restarts. Logs show this every minute:
> {noformat}
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:179 - Unable to report to Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [na:1.8.0_91]
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [na:1.8.0_91]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [na:1.8.0_91]
>   at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
> WARN  [metrics-graphite-reporter-1-thread-1] 2016-11-17 10:06:26,549 
> GraphiteReporter.java:183 - Error closing Graphite
> java.net.SocketException: Broken pipe
>   at java.net.SocketOutputStream.socketWrite0(Native Method) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109) 
> ~[na:1.8.0_91]
>   at java.net.SocketOutputStream.write(SocketOutputStream.java:153) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282) 
> ~[na:1.8.0_91]
>   at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125) ~[na:1.8.0_91]
>   at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:129) 
> ~[na:1.8.0_91]
>   at java.io.BufferedWriter.write(BufferedWriter.java:230) ~[na:1.8.0_91]
>   at java.io.Writer.write(Writer.java:157) ~[na:1.8.0_91]
>   at com.codahale.metrics.graphite.Graphite.send(Graphite.java:130) 
> ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.reportGauge(GraphiteReporter.java:283)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.graphite.GraphiteReporter.report(GraphiteReporter.java:158)
>  ~[metrics-graphite-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter.report(ScheduledReporter.java:162) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> com.codahale.metrics.ScheduledReporter$1.run(ScheduledReporter.java:117) 
> [metrics-core-3.1.0.jar:3.1.0]
>   at 
> java.util.concurrent.E

[jira] [Commented] (CASSANDRA-12924) GraphiteReporter does not reconnect if graphite restarts

2018-05-08 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468272#comment-16468272
 ] 

Jeff Jirsa commented on CASSANDRA-12924:


I don't strongly object, but let's loop in some other people with votes to see 
if they do.

[~iamaleksey], [~JoshuaMcKenzie] - do either of you object to upgrading the 
metrics lib in 3.11.x to allow it to reconnect?
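For context, the reconnect behavior being discussed can be sketched as follows. This is not the metrics-graphite API itself, just a hedged illustration of the pattern later library versions adopt: on any send failure, tear down the socket so the next scheduled report cycle opens a fresh connection instead of writing into a broken pipe forever. The `GraphiteSender` interface here is a stand-in, not the real library type.

```java
import java.io.IOException;

// Sketch (assumptions: GraphiteSender is a simplified stand-in interface).
// On send failure, close the connection so the next report cycle reconnects.
public class ReconnectSketch {
    interface GraphiteSender {
        void connect() throws IOException;
        void send(String name, String value, long timestamp) throws IOException;
        void close() throws IOException;
        boolean isConnected();
    }

    static void report(GraphiteSender sender, String name, String value, long ts) {
        try {
            if (!sender.isConnected())
                sender.connect();            // lazily (re)establish the connection
            sender.send(name, value, ts);
        } catch (IOException e) {
            try { sender.close(); }          // drop the broken socket so the
            catch (IOException ignored) {}   // next cycle reconnects cleanly
        }
    }
}
```

With this shape, a single "Broken pipe" costs one report interval of metrics rather than all metrics until a JVM restart.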



[jira] [Commented] (CASSANDRA-14441) Materialized view is not deleting/updating data when made changes in base table

2018-05-08 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468318#comment-16468318
 ] 

Jeff Jirsa commented on CASSANDRA-14441:


If the issue is in 3.11.1 but not in 3.11.2, isn't the likely answer that this 
is a duplicate of some other issue? Is there any reason to believe it still 
exists in 3.11.3?



> Materialized view is not deleting/updating data when made changes in base 
> table
> ---
>
> Key: CASSANDRA-14441
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14441
> Project: Cassandra
>  Issue Type: Bug
>  Components: Materialized Views
>Reporter: SWAPNIL BALWANT BHISEY
>Priority: Minor
> Fix For: 3.11.x
>
>
> We have seen an issue in the materialized view on 3.11.1:
> 1) We inserted a row in the test table, and the same record appears in the 
> test_mat table with enabled = true.
> 2) When I update the same record to enabled = false, a new row is created in 
> the test_mat table (one with true and one with false), while in the test 
> table the original record is correctly updated to false.
> 3) When I delete the record using the feature UUID, only the record with 
> false is deleted in both tables; the stale true record remains visible in 
> the test_mat table.
> The issue is not reproducible in 3.11.2.
> Steps
> CREATE TABLE test ( 
>  feature_uuid uuid, 
>  namespace text, 
>  feature_name text, 
>  allocation_type text, 
>  description text, 
>  enabled boolean, 
>  expiration_dt timestamp, 
>  last_modified_dt timestamp, 
>  last_modified_user text, 
>  persist_allocations boolean, 
>  rule text, 
>  PRIMARY KEY (feature_uuid, namespace, feature_name, allocation_type) 
> ) WITH CLUSTERING ORDER BY (namespace ASC, feature_name ASC, allocation_type 
> ASC) 
>  AND bloom_filter_fp_chance = 0.01 
>  AND caching = \{'keys': 'ALL', 'rows_per_partition': 'NONE'} 
>  AND comment = '' 
>  AND compaction = \{'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'} 
>  AND compression = \{'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'} 
>  AND crc_check_chance = 1.0 
>  AND dclocal_read_repair_chance = 0.3 
>  AND default_time_to_live = 63072000 
>  AND gc_grace_seconds = 864000 
>  AND max_index_interval = 2048 
>  AND memtable_flush_period_in_ms = 0 
>  AND min_index_interval = 128 
>  AND read_repair_chance = 0.3 
>  AND speculative_retry = '99PERCENTILE'; 
>  
> CREATE MATERIALIZED VIEW test_mat AS 
>  SELECT allocation_type, enabled, feature_uuid, namespace, feature_name, 
> last_modified_dt, last_modified_user, persist_allocations, rule 
>  FROM test
>  WHERE feature_uuid IS NOT NULL AND allocation_type IS NOT NULL AND namespace 
> IS NOT NULL AND feature_name IS NOT NULL AND enabled IS NOT NULL 
>  PRIMARY KEY (allocation_type, enabled, feature_uuid, namespace, 
> feature_name) 
>  WITH CLUSTERING ORDER BY (enabled ASC, feature_uuid ASC, namespace ASC, 
> feature_name ASC) 
>  AND bloom_filter_fp_chance = 0.01 
>  AND caching = \{'keys': 'ALL', 'rows_per_partition': 'NONE'} 
>  AND comment = '' 
>  AND compaction = \{'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'} 
>  AND compression = \{'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'} 
>  AND crc_check_chance = 1.0 
>  AND dclocal_read_repair_chance = 0.1 
>  AND default_time_to_live = 0 
>  AND gc_grace_seconds = 864000 
>  AND max_index_interval = 2048 
>  AND memtable_flush_period_in_ms = 0 
>  AND min_index_interval = 128 
>  AND read_repair_chance = 0.0 
>  AND speculative_retry = '99PERCENTILE'; 
>  
>  
> INSERT INTO test (feature_uuid, namespace, feature_name, allocation_type, 
> description, enabled, expiration_dt, last_modified_dt, last_modified_user, 
> persist_allocations,rule) VALUES 
> (uuid(),'Service','NEW','preallocation','20newproduct',TRUE,'2019-10-02 
> 05:05:05 -0500','2018-08-03 06:06:06 -0500','swapnil',TRUE,'NEW'); 
> UPDATE test SET enabled=FALSE WHERE 
> feature_uuid=b2d5c245-e30e-4ea8-8609-d36b627dbb2a and namespace='Service' and 
> feature_name='NEW' and allocation_type='preallocation' IF EXISTS ; 
> Delete from test where feature_uuid=98e6ebcc-cafd-4889-bf3d-774a746a3298;
>  
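The symptom above can be understood with a toy model of materialized-view maintenance (this is an illustration, not Cassandra internals): when a base-row update changes a column that is part of the view's primary key (here `enabled`), the view must delete the row keyed by the old value and insert one keyed by the new value. The reported 3.11.1 behavior corresponds to the delete step being skipped, leaving both rows in the view.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model (assumption: view keyed by "enabled:feature_uuid" strings).
public class MvSketch {
    static final Map<String, String> view = new HashMap<>();

    static void applyBaseUpdate(String uuid, boolean oldEnabled,
                                boolean newEnabled, String payload) {
        if (oldEnabled != newEnabled)
            view.remove(oldEnabled + ":" + uuid);   // tombstone the old view row
        view.put(newEnabled + ":" + uuid, payload); // insert the new view row
    }
}
```

In the bug report, the `true:uuid` row survives the update to `enabled = false`, i.e. the remove step above did not take effect.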



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14441) Materialized view is not deleting/updating data when made changes in base table

2018-05-08 Thread SWAPNIL BALWANT BHISEY (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468322#comment-16468322
 ] 

SWAPNIL BALWANT BHISEY commented on CASSANDRA-14441:


I have not tested 3.11.3; I wanted to know the cause of the issue and whether 
the fix can be backported to 3.11.1.







[jira] [Commented] (CASSANDRA-14441) Materialized view is not deleting/updating data when made changes in base table

2018-05-08 Thread Jeff Jirsa (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-14441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468399#comment-16468399
 ] 

Jeff Jirsa commented on CASSANDRA-14441:


We won't backport anything to 3.11.1.









[jira] [Comment Edited] (CASSANDRA-12271) NonSystemKeyspaces jmx attribute needs to return jre list

2018-05-08 Thread mck (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16466855#comment-16466855
 ] 

mck edited comment on CASSANDRA-12271 at 5/9/18 6:01 AM:
-

|| Branch || uTest || aTest || dTest ||
|[cassandra-3.11_12271|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_12271]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12271.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12271]|
 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/17/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/17]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/546/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/546]
 |
|[trunk_12271|https://github.com/thelastpickle/cassandra/tree/mck/trunk_12271]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12271.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12271]|
 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/18/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/18]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/548/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/548]
 |


was (Author: michaelsembwever):
|| Branch || uTest || aTest || dTest ||
|[cassandra-3.11_12271|https://github.com/thelastpickle/cassandra/tree/mck/cassandra-3.11_12271]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12271.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Fcassandra-3.11_12271]|
 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/17/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/17]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/546/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/546]
 |
|[trunk_12271|https://github.com/thelastpickle/cassandra/tree/mck/trunk_12271]|[!https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12271.svg?style=svg!|https://circleci.com/gh/thelastpickle/cassandra/tree/mck%2Ftrunk_12271]|
 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/18/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-testall/18]
 | 
[!https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/547/badge/icon!|https://builds.apache.org/view/A-D/view/Cassandra/job/Cassandra-devbranch-dtest/547]
 |

> NonSystemKeyspaces jmx attribute needs to return jre list
> -
>
> Key: CASSANDRA-12271
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12271
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Chris Lohfink
>Assignee: Edward Ribeiro
>Priority: Major
>  Labels: lhf
> Fix For: 4.0, 3.11.3
>
> Attachments: CASSANDRA-12271.patch, screenshot-1.png, screenshot-2.png
>
>
> If you don't have the right Guava version on the classpath (e.g. in 
> jconsole), you can't query the NonSystemKeyspaces attribute. Reproducible 
> using Swiss Java Knife:
> {code}
> # java -jar sjk.jar mx -s localhost:7199 -mg -b 
> "org.apache.cassandra.db:type=StorageService" -f NonSystemKeyspaces
> org.apache.cassandra.db:type=StorageService
> java.rmi.UnmarshalException: error unmarshalling return; nested exception is: 
>   java.lang.ClassNotFoundException: 
> com.google.common.collect.ImmutableList$SerializedForm (no security manager: 
> RMI class loader disabled)
> {code}
> Returning an ArrayList, LinkedList, or any other JRE type would fix this.
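The fix direction can be sketched as follows. JMX over RMI serializes the concrete `List` class, so returning Guava's `ImmutableList` forces every client to have Guava on its classpath (it unmarshals as `ImmutableList$SerializedForm`). Copying into a JRE-provided `ArrayList` keeps the attribute readable from any client. The method name below is illustrative, not the actual StorageService signature.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: return a JRE collection type over JMX so any client can unmarshal it.
public class JmxListSketch {
    static List<String> nonSystemKeyspaces(List<String> internal) {
        return new ArrayList<>(internal);  // defensive JRE copy, safe for RMI clients
    }
}
```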






[jira] [Commented] (CASSANDRA-12151) Audit logging for database activity

2018-05-08 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468457#comment-16468457
 ] 

Jason Brown commented on CASSANDRA-12151:
-

On the whole, we're almost there. I've done a last-pass review, fixed a few 
last dangling items (see below), and pushed a commit up to my branch. Just a 
few last questions:
 - Do we need the -D jvm arg to set {{AuditLogOptions#audit_logs_dir}}? This 
kinda comes out of nowhere and matches nothing else in the audit log config.
 - Operators cannot, in an obvious manner, adjust the following 
{{AuditLogOptions}} options: 
audit_logs_dir/block/max_queue_weight/max_log_size/roll_cycle. They are 
basically hidden yaml options: you can set them, but you have to know they 
exist. I understand these are more advanced options, but we should document 
them either in the yaml or the doc page.

What I've done in the latest commit:
 - cleaned up yaml comments, and pointed users to the audit_log docs; 
otherwise it's going to get real messy real quick
 - trivial clean-ups: NEWS.txt, formatting, white space, comments, removed 
unused methods, fixed test method names in {{AuditLoggerTest}}
 - reworked {{AuditLogFilter#create()}} to avoid generating garbage from all 
the intermediate {{HashSet}}s that were then passed to {{ImmutableSet#of}}
 - Added path checking in {{AuditLogManager}} to ensure when enabling either 
FQL or audit logging, the path doesn't conflict with the other. This was easy 
enough to do for the {{BinLog}}-related {{IAuditLogger}}, but I have no idea 
how to get the directory or file from logback (when using {{FileAuditLogger}}). 
Since that's a 'spare' implementation, I've punted for now.
 - clarified {{AuditLogEntry#timestamp}} to allow {{Builder}} users to 
explicitly set the value. This allows the FQL paths to be more in tune with 
their original code (where it would get {{System#currentTimeMillis()}} instead 
of using {{queryStartNanoTime}}).

Tests running.
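The path-conflict check mentioned above can be sketched like this. This is a hedged illustration of the intent, not the actual AuditLogManager code: enabling audit logging or FQL should fail fast if one BinLog directory is the same as, or nested inside, the other's, since two ChronicleQueue writers sharing a directory would corrupt each other.

```java
import java.nio.file.Path;

// Sketch (assumption: conflicts() is an illustrative helper, not the real API).
// Two log directories conflict if they are equal or one contains the other.
public class PathConflictSketch {
    static boolean conflicts(Path a, Path b) {
        Path na = a.toAbsolutePath().normalize();
        Path nb = b.toAbsolutePath().normalize();
        return na.startsWith(nb) || nb.startsWith(na);  // equal or nested either way
    }
}
```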

> Audit logging for database activity
> ---
>
> Key: CASSANDRA-12151
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12151
> Project: Cassandra
>  Issue Type: New Feature
>Reporter: stefan setyadi
>Assignee: Vinay Chella
>Priority: Major
> Fix For: 4.x
>
> Attachments: 12151.txt, CASSANDRA_12151-benchmark.html, 
> DesignProposal_AuditingFeature_ApacheCassandra_v1.docx
>
>
> We would like a way to enable Cassandra to log database activity being done 
> on our server.
> It should show the username, remote address, timestamp, action type, 
> keyspace, column family, and the query statement.
> It should also be able to log connection attempts and changes to users/roles.
> I was thinking of making a new keyspace and inserting an entry for every 
> activity that occurs.
> Then it would be possible to query for a specific activity, or for queries 
> targeting a specific keyspace and column family.






[jira] [Updated] (CASSANDRA-10789) Allow DBAs to kill individual client sessions from certain IP(s) and temporarily block subsequent connections without bouncing JVM

2018-05-08 Thread Damien Stevenson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Stevenson updated CASSANDRA-10789:
-
Attachment: (was: 10789-trunk.txt)

> Allow DBAs to kill individual client sessions from certain IP(s) and 
> temporarily block subsequent connections without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Damien Stevenson
>Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt
>
>
> In production, there could be hundreds of clients connected to a Cassandra 
> cluster (maybe even from different applications), and if they use the 
> DataStax Java Driver, each client will establish at least one TCP connection 
> to a Cassandra server (see 
> https://datastax.github.io/java-driver/2.1.9/features/pooling/). This is all 
> normal, and at any given time you can indeed see hundreds of ESTABLISHED 
> connections to port 9042 on a C* server (from netstat -na). The problem is 
> that sometimes, when a C* cluster is under heavy load and the DBA identifies 
> a client session that sends an abusive amount of traffic to the C* server, 
> they would like a lightweight way to stop it, rather than shutting down the 
> JVM or rolling-restarting the whole cluster and killing all hundreds of 
> connections just to kill a single client session. If the DBA had root 
> privilege, they would be able to do something at the OS network level to 
> achieve the same goal, but oftentimes the enterprise DBA role is separate 
> from the OS sysadmin role, so DBAs usually don't have that privilege.
> This is especially helpful when you have a multi-tenant C* cluster and want 
> the impact of handling such a client to be minimal for the other 
> applications. This feature (killing individual sessions) seems to be common 
> in other databases (regardless of whether the client has reconnect logic or 
> not). It could be implemented as a JMX MBean method and exposed through 
> nodetool to the DBAs.
> Note that due to the CQL driver's automatic reconnection, simply killing the 
> currently connected client session will not work well, so the JMX parameter 
> should be an IP address or a list of IP addresses, so that the Cassandra 
> server can terminate existing connections with that IP and block future 
> connection attempts from that IP until the JVM is restarted.
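The MBean idea above could be sketched roughly as follows. All interface, class, and method names here are hypothetical assumptions for illustration, not the API of any committed patch; a real implementation would also have to close the live channels for a blocked IP:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the proposed JMX MBean (names are assumptions).
interface ClientSessionControlMBean
{
    void blockIp(String ip);      // terminate existing sessions from this IP, reject new ones
    void unblockIp(String ip);    // lift the block
    boolean isBlocked(String ip); // checked on every new connection attempt
}

class ClientSessionControl implements ClientSessionControlMBean
{
    // Kept in memory only, so the block list naturally expires on JVM restart,
    // matching the "until the JVM is restarted" behaviour described above.
    private final Set<String> blockedIps = ConcurrentHashMap.newKeySet();

    public void blockIp(String ip)
    {
        blockedIps.add(ip);
        // A real implementation would also close currently open channels here.
    }

    public void unblockIp(String ip)
    {
        blockedIps.remove(ip);
    }

    public boolean isBlocked(String ip)
    {
        return blockedIps.contains(ip);
    }
}
```

Registering such a bean with the platform MBeanServer would let nodetool drive it over JMX, as the description suggests.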






[jira] [Updated] (CASSANDRA-10789) Allow DBAs to kill individual client sessions from certain IP(s) and temporarily block subsequent connections without bouncing JVM

2018-05-08 Thread Damien Stevenson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-10789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damien Stevenson updated CASSANDRA-10789:
-
Attachment: 10789-trunk.txt

> Allow DBAs to kill individual client sessions from certain IP(s) and 
> temporarily block subsequent connections without bouncing JVM
> --
>
> Key: CASSANDRA-10789
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10789
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Coordination
>Reporter: Wei Deng
>Assignee: Damien Stevenson
>Priority: Major
> Fix For: 4.x
>
> Attachments: 10789-trunk-dtest.txt, 10789-trunk.txt
>






cassandra git commit: Automatic sstable upgrades

2018-05-08 Thread marcuse
Repository: cassandra
Updated Branches:
  refs/heads/trunk 6b0247576 -> d14a9266c


Automatic sstable upgrades

Patch by marcuse; reviewed by Ariel Weisberg for CASSANDRA-14197


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d14a9266
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d14a9266
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d14a9266

Branch: refs/heads/trunk
Commit: d14a9266c7ddff0589fdbe7a1836217b8bb8b394
Parents: 6b02475
Author: Marcus Eriksson 
Authored: Thu Mar 15 09:25:23 2018 +0100
Committer: Marcus Eriksson 
Committed: Wed May 9 08:17:33 2018 +0200

--
 CHANGES.txt |   1 +
 NEWS.txt|   3 +
 conf/cassandra.yaml |   6 +
 .../org/apache/cassandra/config/Config.java |   2 +
 .../cassandra/config/DatabaseDescriptor.java|  35 +
 .../db/compaction/CompactionManager.java|  76 ++-
 .../db/compaction/CompactionManagerMBean.java   |  21 +++
 .../compaction/CompactionStrategyManager.java   |  35 -
 .../apache/cassandra/metrics/TableMetrics.java  |  13 ++
 .../org/apache/cassandra/tools/NodeProbe.java   |   1 +
 .../tools/nodetool/stats/StatsTable.java|   1 +
 .../tools/nodetool/stats/TableStatsHolder.java  |   1 +
 .../tools/nodetool/stats/TableStatsPrinter.java |   1 +
 .../CompactionStrategyManagerTest.java  | 131 ++-
 .../cassandra/io/sstable/LegacySSTableTest.java |  37 ++
 .../nodetool/stats/TableStatsPrinterTest.java   |   6 +
 .../nodetool/stats/TableStatsTestBase.java  |   1 +
 17 files changed, 362 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/cassandra/blob/d14a9266/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 25c237f..cad0e28 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Automatic sstable upgrades (CASSANDRA-14197)
  * Replace deprecated junit.framework.Assert usages with org.junit.Assert 
(CASSANDRA-14431)
  * cassandra-stress throws NPE if insert section isn't specified in user 
profile (CASSANDRA-14426)
  * List clients by protocol versions `nodetool clientstats --by-protocol` 
(CASSANDRA-14335)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d14a9266/NEWS.txt
--
diff --git a/NEWS.txt b/NEWS.txt
index a13f633..4885a12 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -38,6 +38,9 @@ using the provided 'sstableupgrade' tool.
 
 New features
 
+   - There is now an option to automatically upgrade sstables after a Cassandra 
upgrade; enable it
+ either in `cassandra.yaml:automatic_sstable_upgrade` or via JMX at 
runtime. See
+ CASSANDRA-14197.
- `nodetool refresh` has been deprecated in favour of `nodetool import` - 
see CASSANDRA-6719
  for details
- An experimental option to compare all merkle trees together has been 
added - for example, in

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d14a9266/conf/cassandra.yaml
--
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 7e4b2c2..7cc9e32 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -1178,3 +1178,9 @@ back_pressure_strategy:
# The full query log will recursively delete the contents of this path at
 # times. Don't place links in this directory to other parts of the filesystem.
 #full_query_log_dir: /tmp/cassandrafullquerylog
+
+# Automatically upgrade sstables after upgrade - if there is no ordinary 
compaction to do, the
+# oldest non-upgraded sstable will get upgraded to the latest version
+# automatic_sstable_upgrade: false
+# Limit the number of concurrent sstable upgrades
+# max_concurrent_automatic_sstable_upgrades: 1

http://git-wip-us.apache.org/repos/asf/cassandra/blob/d14a9266/src/java/org/apache/cassandra/config/Config.java
--
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index aa4b028..2c28796 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -377,6 +377,8 @@ public class Config
 // parameters to adjust how much to delay startup until a certain amount 
of the cluster is connect to and marked alive
 public int block_for_peers_percentage = 70;
 public int block_for_peers_timeout_in_secs = 10;
+public volatile boolean automatic_sstable_upgrade = false;
+public volatile int max_concurrent_automatic_sstable_upgrades = 1;
 
 
 /**

http://git-wip-us.apache.org/repos/asf/ca
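Pulling the two new settings out of the cassandra.yaml diff above, enabling the feature would look something like this. The values shown are examples only; per the diff, both settings ship commented out with defaults of `false` and `1`:

```yaml
# Example only: turn on automatic sstable upgrades after a Cassandra upgrade.
automatic_sstable_upgrade: true
# Allow up to two concurrent sstable upgrades (default is 1).
max_concurrent_automatic_sstable_upgrades: 2
```

Because the corresponding Config fields are volatile, the NEWS.txt entry also notes the feature can be toggled via JMX at runtime.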

[jira] [Updated] (CASSANDRA-14197) SSTable upgrade should be automatic

2018-05-08 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14197:

   Resolution: Fixed
Fix Version/s: (was: 4.x)
   4.0
   Status: Resolved  (was: Patch Available)

Committed to trunk as {{d14a9266c7ddff0589fdbe7a1836217b8bb8b394}}, thanks!

> SSTable upgrade should be automatic
> ---
>
> Key: CASSANDRA-14197
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14197
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Major
> Fix For: 4.0
>
>
> Upgradesstables should run automatically on node upgrade






[jira] [Commented] (CASSANDRA-13142) Upgradesstables cancels compactions unnecessarily

2018-05-08 Thread Marcus Eriksson (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-13142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16468467#comment-16468467
 ] 

Marcus Eriksson commented on CASSANDRA-13142:
-

[~KurtG] do you think we can close this now that we have CASSANDRA-14197?

> Upgradesstables cancels compactions unnecessarily
> -
>
> Key: CASSANDRA-13142
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13142
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Kurt Greaves
>Assignee: Kurt Greaves
>Priority: Major
> Attachments: 13142-v1.patch
>
>
> Since at least 1.2, upgradesstables will cancel any compactions bar 
> validations when run. This was originally deemed a non-issue in 
> CASSANDRA-3430; however, it can be quite annoying (especially with STCS), as 
> a compaction will output the new version anyway. Furthermore, as per 
> CASSANDRA-12243 it also stops things like view builds, and I assume 
> secondary index builds as well, which is not ideal.
> We should avoid cancelling compactions unnecessarily.





