[jira] [Commented] (CASSANDRA-14885) Add a new tool to dump audit logs

2019-01-23 Thread Vinay Chella (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749710#comment-16749710
 ] 

Vinay Chella commented on CASSANDRA-14885:
--

Thanks for the review [~krummas]. I have rebased on the latest trunk and 
addressed your review comments. I have also added a batch file for the 
{{auditlogviewer}} tool.

 
||Branch||utests||dtests||
|[patch|https://github.com/vinaykumarchella/cassandra/commits/trunk_CASSANDRA-14885]|[Circle CI|https://circleci.com/gh/vinaykumarchella/cassandra/340]|[Circle CI|https://circleci.com/gh/vinaykumarchella/cassandra/339]|

 

> Add a new tool to dump audit logs
> -
>
> Key: CASSANDRA-14885
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14885
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
>Reporter: Vinay Chella
>Assignee: Vinay Chella
>Priority: Major
> Fix For: 4.0
>
>
> As part of CASSANDRA-12151, AuditLogging feature uses 
> [fqltool|https://github.com/apache/cassandra/blob/trunk/tools/bin/fqltool] to 
> dump audit log file contents in human-readable text format from binary 
> logging format ([BinLog|https://issues.apache.org/jira/browse/CASSANDRA-13983]).
> The goal of this ticket is to create a separate tool to dump audit logs 
> instead of relying on fqltool, letting fqltool remain specific to full query 
> logging.






[jira] [Updated] (CASSANDRA-14885) Add a new tool to dump audit logs

2019-01-23 Thread Vinay Chella (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay Chella updated CASSANDRA-14885:
-
Status: Patch Available  (was: Open)




[jira] [Updated] (CASSANDRA-14989) NullPointerException when SELECTing token() on only one part of a two-part partition key

2019-01-23 Thread Aleksey Yeschenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14989:
--
Priority: Minor  (was: Major)

> NullPointerException when SELECTing token() on only one part of a two-part 
> partition key
> 
>
> Key: CASSANDRA-14989
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14989
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
> Environment: Using {{cqlsh 5.0.1}} on a Mac OS X host system with 
> Cassandra 3.11.3 running via Docker for Mac from the official 
> {{cassandra:3.11.3}} image.
>Reporter: Manuel Kießling
>Assignee: Dinesh Joshi
>Priority: Minor
> Fix For: 4.0, 3.0.x, 3.11.x
>
>
> I have the following schema:
> {code}
> CREATE TABLE query_tests.cart_snapshots (
> cart_id uuid,
> realm text,
> snapshot_id timeuuid,
> state text,
> PRIMARY KEY ((cart_id, realm), snapshot_id)
> ) WITH CLUSTERING ORDER BY (snapshot_id DESC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> In cqlsh, I try the following query:
> {code}select token(cart_id) from cart_snapshots ;{code}
> This results in cqlsh returning {{ServerError: 
> java.lang.NullPointerException}}, and the following error in the server log:
> {code}
> DC1N1_1  | ERROR [Native-Transport-Requests-1] 2019-01-16 12:17:52,075 
> QueryMessage.java:129 - Unexpected error during query
> DC1N1_1  | java.lang.NullPointerException: null
> DC1N1_1  |   at 
> org.apache.cassandra.db.marshal.CompositeType.build(CompositeType.java:356) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.db.marshal.CompositeType.build(CompositeType.java:349) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.config.CFMetaData.serializePartitionKey(CFMetaData.java:805)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.functions.TokenFct.execute(TokenFct.java:59) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.selection.ScalarFunctionSelector.getOutput(ScalarFunctionSelector.java:61)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.selection.Selection$SelectionWithProcessing$1.getOutputRow(Selection.java:666)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.selection.Selection$ResultSetBuilder.getOutputRow(Selection.java:492)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.selection.Selection$ResultSetBuilder.newRow(Selection.java:458)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processPartition(SelectStatement.java:860)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:790)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:438)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:416)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:289)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:117)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:224)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:255) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:240) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
> ...
> {code}

[jira] [Commented] (CASSANDRA-14989) NullPointerException when SELECTing token() on only one part of a two-part partition key

2019-01-23 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749977#comment-16749977
 ] 

Aleksey Yeschenko commented on CASSANDRA-14989:
---

The patch is fine, but while reviewing it, I noticed a closely related 
pre-existing bug in the logic that has been preserved by this patch.

Specifically, if there is a UDF that's also named {{token}}, and the correct 
keyspace is set via the {{USE}} command, then invoking the {{token()}} function 
on a table with all the right arguments would fail to resolve to the UDF unless 
fully qualified.

{code}
create function test.token(val double) returns null on null input returns int 
language java as 'return 0;';
create table test(id int primary key, col double);

select token(col) from test;
InvalidRequest: Error from server: code=2200 [Invalid query] message="Type 
error: col cannot be passed as argument 0 of function system.token of type int"

select token(1.0) from test;
InvalidRequest: Error from server: code=2200 [Invalid query] message="Type 
error: 1.0 cannot be passed as argument 0 of function system.token of type int"
{code}

Using fully-qualified name returns the expected result:
{code}
cqlsh:test> select test.token(1.0) from test;

 test.token(1.0)
----------------
               0

(1 rows)
cqlsh:test> select test.token(col) from test;

 test.token(col)
----------------
               0
{code}

However, I'm failing to see a reason why an unqualified invocation shouldn't 
follow the general logic of filtering candidate functions for the best match. 
Short-circuiting here and forcing validation that early seems suboptimal to me.
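For illustration, a minimal sketch of that general candidate-filtering logic, 
with hypothetical names rather than Cassandra's actual resolver: gather every 
same-named candidate (the system function plus any UDF visible in the current 
keyspace), narrow by argument compatibility, and only treat an empty or 
ambiguous result as an error.

{code}
import java.util.List;
import java.util.stream.Collectors;

final class FunctionResolutionSketch
{
    interface Candidate
    {
        String keyspace();
        boolean acceptsArguments(List<String> argTypes);
    }

    // Instead of short-circuiting on the reserved name, keep all same-named
    // candidates and filter them down by argument compatibility.
    static List<Candidate> bestMatches(List<Candidate> sameNamed, List<String> argTypes)
    {
        return sameNamed.stream()
                        .filter(c -> c.acceptsArguments(argTypes))
                        .collect(Collectors.toList());
    }
}
{code}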


[jira] [Updated] (CASSANDRA-14989) NullPointerException when SELECTing token() on only one part of a two-part partition key

2019-01-23 Thread Aleksey Yeschenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Yeschenko updated CASSANDRA-14989:
--
Reproduced In: 3.11.3, 3.0.17, 4.0  (was: 3.0.17, 3.11.3, 4.0)
   Status: Open  (was: Patch Available)


[jira] [Comment Edited] (CASSANDRA-14989) NullPointerException when SELECTing token() on only one part of a two-part partition key

2019-01-23 Thread Aleksey Yeschenko (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16749977#comment-16749977
 ] 

Aleksey Yeschenko edited comment on CASSANDRA-14989 at 1/23/19 1:33 PM:


The patch is fine, but while reviewing it, I noticed a closely related 
pre-existing bug in the logic that has been preserved by this patch.

Specifically, if there is a UDF that's also named {{token}}, and the correct 
keyspace is set via the {{USE}} command, then invoking the {{token()}} function 
on a table with all the right arguments would fail to resolve to the UDF unless 
fully qualified.

{code}
create function test.token(val double) returns null on null input returns int 
language java as 'return 0;';
create table test(id int primary key, col double);

select token(col) from test;
InvalidRequest: Error from server: code=2200 [Invalid query] message="Type 
error: col cannot be passed as argument 0 of function system.token of type int"

select token(1.0) from test;
InvalidRequest: Error from server: code=2200 [Invalid query] message="Type 
error: 1.0 cannot be passed as argument 0 of function system.token of type int"
{code}

Using fully-qualified name returns the expected result:
{code}
cqlsh:test> select test.token(1.0) from test;

 test.token(1.0)
----------------
               0

(1 rows)
cqlsh:test> select test.token(col) from test;

 test.token(col)
----------------
               0
{code}

However, I'm failing to see a reason why an unqualified invocation shouldn't 
follow the general logic of filtering candidate functions for the best match. 
Short-circuiting here and forcing validation that early seems suboptimal to me.

EDIT: I reckon the same applies to {{fromJson}} and {{toJson}} functions.



[cassandra] branch trunk updated: Add a new tool to dump audit logs

2019-01-23 Thread marcuse
This is an automated email from the ASF dual-hosted git repository.

marcuse pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 7d138e2  Add a new tool to dump audit logs
7d138e2 is described below

commit 7d138e20ea987d44fffbc47de4674b253b7431ff
Author: Vinay Chella 
AuthorDate: Mon Dec 17 23:42:14 2018 -0800

Add a new tool to dump audit logs

Patch by Vinay Chella; reviewed by marcuse for CASSANDRA-14885
---
 CHANGES.txt|   1 +
 doc/source/operating/audit_logging.rst |  27 ++-
 .../org/apache/cassandra/audit/BinAuditLogger.java |  14 +-
 .../org/apache/cassandra/tools/AuditLogViewer.java | 212 +
 .../apache/cassandra/tools/AuditLogViewerTest.java |  84 
 tools/bin/auditlogviewer   |  49 +
 tools/bin/auditlogviewer.bat   |  41 
 7 files changed, 423 insertions(+), 5 deletions(-)

diff --git a/CHANGES.txt b/CHANGES.txt
index ec59fa6..a0c51f0 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.0
+ * Add a new tool to dump audit logs (CASSANDRA-14885)
  * Fix generating javadoc with Java11 (CASSANDRA-14988)
  * Only cancel conflicting compactions when starting anticompactions and sub 
range compactions (CASSANDRA-14935)
  * Use a stub IndexRegistry for non-daemon use cases (CASSANDRA-14938)
diff --git a/doc/source/operating/audit_logging.rst 
b/doc/source/operating/audit_logging.rst
index 6cfd141..068209e 100644
--- a/doc/source/operating/audit_logging.rst
+++ b/doc/source/operating/audit_logging.rst
@@ -143,19 +143,44 @@ NodeTool command to reload AuditLog filters
 ``enableauditlog``: NodeTool enableauditlog command can be used to reload 
auditlog filters when called with default or previous ``loggername`` and 
updated filters
 
 E.g.,
+
 ::
 
 nodetool enableauditlog --loggername  
--included-keyspaces 
 
 
 
+View the contents of AuditLog Files
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+``auditlogviewer`` is a new tool introduced to help view the contents of binlog files in a human-readable text format.
+
+::
 
+   auditlogviewer  [...] [options]
+
+Options
+~~~~~~~
 
+``-f,--follow``
+   Upon reaching the end of the log, continue indefinitely
+   waiting for more records
+``-r,--roll_cycle``
+   How often the log file was rolled. May be
+   necessary for Chronicle to correctly parse file names. (MINUTELY, HOURLY,
+   DAILY). Default HOURLY.

+``-h,--help``
+   Display this help message
 
+For example, to dump the contents of audit log files to the console:
+
+::
+
+   auditlogviewer /logs/cassandra/audit
 
 Sample output
--------------
+"""""""""""""
+
 ::
 
 LogMessage: 
user:anonymous|host:localhost/X.X.X.X|source:/X.X.X.X|port:60878|timestamp:1521158923615|type:USE_KS|category:DDL|ks:dev1|operation:USE
 "dev1"
diff --git a/src/java/org/apache/cassandra/audit/BinAuditLogger.java 
b/src/java/org/apache/cassandra/audit/BinAuditLogger.java
index bd3a158..23b9977 100644
--- a/src/java/org/apache/cassandra/audit/BinAuditLogger.java
+++ b/src/java/org/apache/cassandra/audit/BinAuditLogger.java
@@ -19,6 +19,7 @@ package org.apache.cassandra.audit;
 
 import java.nio.file.Paths;
 
+import com.google.common.annotations.VisibleForTesting;
 import com.google.common.primitives.Ints;
 
 import net.openhft.chronicle.wire.WireOut;
@@ -29,6 +30,10 @@ import org.apache.cassandra.utils.concurrent.WeightedQueue;
 
 public class BinAuditLogger extends BinLogAuditLogger implements IAuditLogger
 {
+public static final String TYPE = "type";
+public static final String AUDITLOG_TYPE = "AuditLog";
+public static final String AUDITLOG_MESSAGE = "message";
+
 public BinAuditLogger()
 {
 // due to the way that IAuditLogger instance are created in 
AuditLogManager, via reflection, we can't assume
@@ -56,11 +61,12 @@ public class BinAuditLogger extends BinLogAuditLogger 
implements IAuditLogger
 super.logRecord(new Message(auditLogEntry.getLogString()), binLog);
 }
 
-static class Message extends BinLog.ReleaseableWriteMarshallable 
implements WeightedQueue.Weighable
+@VisibleForTesting
+public static class Message extends BinLog.ReleaseableWriteMarshallable 
implements WeightedQueue.Weighable
 {
 private final String message;
 
-Message(String message)
+public Message(String message)
 {
 this.message = message;
 }
@@ -68,8 +74,8 @@ public class BinAuditLogger extends BinLogAuditLogger 
implements IAuditLogger
 @Override
 public void writeMarshallable(WireOut wire)
 {
-wire.write("type").text("AuditLog");
-wire.write("message").text(message);
+wire.write(TYPE).text(AUDITLOG_TYPE);
+wire.write(AUDITLOG_MESSAGE).text(message);

[jira] [Updated] (CASSANDRA-14885) Add a new tool to dump audit logs

2019-01-23 Thread Marcus Eriksson (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-14885:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

and committed as {{7d138e20ea987d44fffbc47de4674b253b7431ff}} - thanks!




[jira] [Commented] (CASSANDRA-14806) CircleCI workflow improvements and Java 11 support

2019-01-23 Thread Marcus Eriksson (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750011#comment-16750011
 ] 

Marcus Eriksson commented on CASSANDRA-14806:
-

I added the upgrade tests target to the branches; runs:
* [2.2|https://circleci.com/gh/krummas/cassandra/1317#tests/containers/15] 
([reference 
run|https://circleci.com/gh/krummas/cassandra/1381#tests/containers/11])
* [3.0|https://circleci.com/gh/krummas/cassandra/1326] ([reference 
run|https://circleci.com/gh/krummas/cassandra/1382#tests/containers/11])
* [3.11|https://circleci.com/gh/krummas/cassandra/1385] ([reference 
run|https://circleci.com/gh/krummas/cassandra/1383#tests/containers/11])
* [trunk|https://circleci.com/gh/krummas/cassandra/1367] ([reference 
run|https://circleci.com/gh/krummas/cassandra/1384#tests/containers/11])

I have not yet looked into why the patched runs are much quicker than the 
reference ones (8 minutes vs 1 hour).

> CircleCI workflow improvements and Java 11 support
> --
>
> Key: CASSANDRA-14806
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14806
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Build, Legacy/Testing
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
>
> The current CircleCI config could use some cleanup and improvements. First of 
> all, the config has been made more modular by using the new CircleCI 2.1 
> executors and command elements. Based on CASSANDRA-14713, there's now also a 
> Java 11 executor that will allow running tests under Java 11. The {{build}} 
> step will be done using Java 11 in all cases, so we can catch any regressions 
> for that and also test the Java 11 multi-jar artifact during dtests, that 
> we'd also create during the release process.
> The job workflow has now also been changed to make use of the [manual job 
> approval|https://circleci.com/docs/2.0/workflows/#holding-a-workflow-for-a-manual-approval]
>  feature, which now allows running dtest jobs only on request and not 
> automatically with every commit. The Java8 unit tests still do, but that 
> could also be easily changed if needed. See [example 
> workflow|https://circleci.com/workflow-run/be25579d-3cbb-4258-9e19-b1f571873850]
>  with start_ jobs being triggers that need manual approval before starting 
> the actual jobs.






[jira] [Commented] (CASSANDRA-14503) Internode connection management is race-prone

2019-01-23 Thread Benedict (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750032#comment-16750032
 ] 

Benedict commented on CASSANDRA-14503:
--

It's next on my docket.

> Internode connection management is race-prone
> -
>
> Key: CASSANDRA-14503
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14503
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Streaming and Messaging
>Reporter: Sergio Bossa
>Assignee: Jason Brown
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Following CASSANDRA-8457, internode connection management has been rewritten 
> to rely on Netty, but the new implementation in 
> {{OutboundMessagingConnection}} seems quite race-prone to me, in particular 
> in these two cases:
> * {{#finishHandshake()}} racing with {{#close()}}: e.g., the former could 
> run into an NPE if the latter nulls the {{channelWriter}} (but this is just 
> an example, other conflicts might happen).
> * Connection timeout and retry racing with state-changing methods: 
> {{connectionRetryFuture}} and {{connectionTimeoutFuture}} are cancelled when 
> handshaking or closing, but there's no guarantee those will actually be 
> cancelled (as they might already be running), so they might end up changing 
> the connection state concurrently with other methods (i.e. by unexpectedly 
> closing the channel or clearing the backlog).
> Overall, the thread safety of {{OutboundMessagingConnection}} is very 
> difficult to assess given the current implementation: I would suggest 
> refactoring it into a single-threaded model, where all connection 
> state-changing actions are enqueued on a single-threaded scheduler, so that 
> state transitions can be clearly defined and checked.
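
A minimal sketch of the single-threaded model proposed above, with illustrative 
names rather than Cassandra's: every state transition is funnelled through one 
executor, so {{#finishHandshake()}} and {{#close()}} can never interleave.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class ConnectionStateMachineSketch
{
    private final ExecutorService stateExecutor = Executors.newSingleThreadExecutor();
    private boolean closed; // only ever touched on stateExecutor's thread

    void finishHandshake()
    {
        stateExecutor.execute(() -> {
            if (closed)
                return; // no racing close() can null fields mid-call
            // ... promote the channel to the established state ...
        });
    }

    void close()
    {
        stateExecutor.execute(() -> {
            closed = true;
            // ... release the channel writer, cancel timers, drain the backlog ...
        });
    }
}
{code}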






[jira] [Commented] (CASSANDRA-14968) Investigate GPG signing of deb and rpm repositories via bintray

2019-01-23 Thread Michael Shuler (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750077#comment-16750077
 ] 

Michael Shuler commented on CASSANDRA-14968:


Keep it simple. We do two things with rpms: we sign the packages, and we sign 
the repository metadata. Without testing in a scratch repo at bintray, I have 
no idea whether those can be different signatures, whether the existing rpm 
signature is overwritten, etc.

What I did test was installing ignite from the instructions on their download 
page. The repository metadata is signed by the bintray key, and yes, the 
metadata would need to be created after the packages are uploaded; that 
metadata would then be signed by the bintray key after that step.

> Investigate GPG signing of deb and rpm repositories via bintray
> ---
>
> Key: CASSANDRA-14968
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14968
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Michael Shuler
>Priority: Major
>  Labels: packaging
>
> Currently, the release manager uploads debian packages and built/signed 
> metadata to a generic bintray repository. Perhaps we could utilize the 
> repository's GPG signing feature post-upload.
> https://www.jfrog.com/confluence/display/BT/Managing+Uploaded+Content#ManagingUploadedContent-GPGSigning
>  Depends on CASSANDRA-14967






[jira] [Commented] (CASSANDRA-14970) New releases must supply SHA-256 and/or SHA-512 checksums

2019-01-23 Thread Michael Shuler (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750179#comment-16750179
 ] 

Michael Shuler commented on CASSANDRA-14970:


I tried switching the dependency from artifacts to release and tried a wildcard 
in the mvn-install task files, for example 
\{{file="${build.dir}/${final.name}-bin.tar.gz*"}}. I don't recall the exact 
error, but I just don't know ant well enough, so I asked for some help.

Yes, we can just generate the checksums and toss them in /dist/release/ in 
finish_release.sh, but that basically undermines the artifacts we're voting on. 
We should be voting on the entire artifact set, verified by the checksums and 
gpg signature. I'm just trying to look at the bigger picture and 
script/automate our release process to fix known broken things, as well as 
make it better/easier for multiple people to contribute to the release process, 
while keeping things stable/simple for user installs.
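
For reference, a minimal sketch of generating the checksum files with plain JDK 
APIs, independent of the build.xml/ant changes under discussion; the artifact 
path is a placeholder, and the output follows the {{<hex>  <filename>}} format 
that {{sha256sum -c}} understands.

{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public final class ChecksumFiles
{
    // Writes <artifact>.sha256 / <artifact>.sha512 next to the artifact.
    static void write(Path artifact, String algorithm, String extension) throws Exception
    {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        // Reading the whole file is fine for a sketch; stream for huge artifacts.
        byte[] digest = md.digest(Files.readAllBytes(artifact));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest)
            hex.append(String.format("%02x", b));
        String line = hex + "  " + artifact.getFileName() + "\n";
        Files.write(artifact.resolveSibling(artifact.getFileName() + "." + extension),
                    line.getBytes("UTF-8"));
    }

    public static void main(String[] args) throws Exception
    {
        Path artifact = Paths.get(args[0]); // e.g. apache-cassandra-4.0-bin.tar.gz
        write(artifact, "SHA-256", "sha256");
        write(artifact, "SHA-512", "sha512");
    }
}
{code}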

> New releases must supply SHA-256 and/or SHA-512 checksums
> -
>
> Key: CASSANDRA-14970
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14970
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Michael Shuler
>Assignee: Michael Shuler
>Priority: Blocker
> Fix For: 2.1.21, 2.2.14, 3.0.18, 3.11.4, 4.0
>
> Attachments: 
> 0001-Update-downloads-for-sha256-sha512-checksum-files.patch, 
> 0001-Update-release-checksum-algorithms-to-SHA-256-SHA-512.patch, 
> ant-publish-checksum-fail.jpg, build_cassandra-2.1.png, build_trunk.png
>
>
> Release policy was updated around 9/2018 to state:
> "For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
> supply MD5 or SHA-1. Existing releases do not need to be changed."
> build.xml needs to be updated from MD5 & SHA-1 to, at least, SHA-256 or both. 
> cassandra-builds/cassandra-release scripts need to be updated to work with 
> the new checksum files.
> http://www.apache.org/dev/release-distribution#sigs-and-sums






[jira] [Commented] (CASSANDRA-14991) SSL Cert Hot Reloading should check for sanity of the new keystore/truststore before loading it

2019-01-23 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750243#comment-16750243
 ] 

Ariel Weisberg commented on CASSANDRA-14991:


RE #3, but it's always hardcoded to true when the server actually goes to build 
the SSL certs?
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/net/async/NettyFactory.java#L295
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/net/async/NettyFactory.java#L372
Why should we test with different parameters than are used when we actually go 
to construct the SSL context? Is the SSL context being constructed with invalid 
parameters?

> SSL Cert Hot Reloading should check for sanity of the new keystore/truststore 
> before loading it
> ---
>
> Key: CASSANDRA-14991
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14991
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Encryption
>Reporter: Dinesh Joshi
>Assignee: Dinesh Joshi
>Priority: Major
>  Labels: security
> Fix For: 4.0
>
>
> SSL Cert Hot Reloading assumes that the keystore & truststore are valid. 
> However, a corrupt store or a password mismatch can cause Cassandra to fail 
> accepting new connections as we throw away the old {{SslContext}}. This patch 
> will ensure that we check the sanity of the certificates during startup and 
> during hot reloading. This should protect against bad key/trust stores. As 
> part of this PR, I have cleaned up the code a bit.






[jira] [Commented] (CASSANDRA-14991) SSL Cert Hot Reloading should check for sanity of the new keystore/truststore before loading it

2019-01-23 Thread Dinesh Joshi (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750360#comment-16750360
 ] 

Dinesh Joshi commented on CASSANDRA-14991:
--

{{OptionalSecureInitializer}} inherits from {{AbstractSecureInitializer}}. It 
passes in {{require_client_auth}}, which is sourced from the configuration. So 
we could generate an {{SslContext}} without instantiating the truststore. See 
[here|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/transport/Server.java#L409].
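
A minimal sketch of the kind of sanity check the ticket describes, using only 
plain JDK APIs (no Cassandra or Netty specifics): load the store with the 
configured password before swapping in a new {{SslContext}}, so corruption or a 
password mismatch is caught up front instead of breaking new connections.

{code}
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;

final class KeystoreSanityCheck
{
    // Throws if the store is corrupt, the password is wrong, or it is empty.
    static void verify(String path, char[] password, String type) throws Exception
    {
        KeyStore ks = KeyStore.getInstance(type); // e.g. "JKS" or "PKCS12"
        try (InputStream in = new FileInputStream(path))
        {
            ks.load(in, password);
        }
        if (ks.size() == 0)
            throw new IllegalStateException("keystore at " + path + " has no entries");
    }
}
{code}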




[jira] [Commented] (CASSANDRA-14970) New releases must supply SHA-256 and/or SHA-512 checksums

2019-01-23 Thread Stefan Podkowinski (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750382#comment-16750382
 ] 

Stefan Podkowinski commented on CASSANDRA-14970:


What you're referring to in the ticket description is the distribution policy, 
not the release policy. The latter doesn't mention any requirement for PMCs to 
verify checksums, only the detached signature. So I don't see any need to 
generate any checksums at all before voting and eventually copying new 
artifacts into the dist svn tree. I'd also argue that generating checksums 
locally in finish_release.sh will make things unnecessarily more complex, 
compared to generating them via ant, uploading and downloading them from nexus 
again later, and then copying them to dist.




[jira] [Commented] (CASSANDRA-14991) SSL Cert Hot Reloading should check for sanity of the new keystore/truststore before loading it

2019-01-23 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750374#comment-16750374
 ] 

Ariel Weisberg commented on CASSANDRA-14991:


That is for client -> server, not server -> server. Server -> server always 
hardcodes it to true. So why wouldn't we test the server context the way it 
would be retrieved when we actually need to retrieve it?




[jira] [Commented] (CASSANDRA-14993) Catch CorruptSSTableExceptions and FSErrors in ALAExecutorService

2019-01-23 Thread Ariel Weisberg (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750412#comment-16750412
 ] 

Ariel Weisberg commented on CASSANDRA-14993:


Do we ever want to skip the JVMStabilityInspector? It checks all the causes, so 
while the top-level exception might be an FSError, it may contain something else.

Should we be inspecting nested exceptions for FSError?
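
A minimal sketch of that nested-cause inspection, assuming only Cassandra's 
{{FSError}} type: walk the cause chain and return the first {{FSError}} found, 
so the disk failure policy can be invoked even when the error is wrapped.

{code}
import org.apache.cassandra.io.FSError;

final class NestedCauseCheck
{
    static FSError findNestedFSError(Throwable t)
    {
        // Throwable.getCause() returns null at the end of the chain, so this
        // terminates; a visited-set would additionally guard against cycles.
        while (t != null)
        {
            if (t instanceof FSError)
                return (FSError) t;
            t = t.getCause();
        }
        return null;
    }
}
{code}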

> Catch CorruptSSTableExceptions and FSErrors in ALAExecutorService
> -
>
> Key: CASSANDRA-14993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14993
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Stefan Podkowinski
>Assignee: Stefan Podkowinski
>Priority: Major
> Fix For: 3.0.x, 3.11.x, 4.0.x
>
>
> Actively handling CorruptSSTableExceptions and FSErrors currently only 
> happens during opening of sstables and in the default exception handler. 
> What's missing is to catch these in AbstractLocalAwareExecutorService as 
> well. Therefore I propose adding calls to 
> FileUtils.handleCorruptSSTable/handleFSError there, too, so we don't miss 
> invoking the disk failure policy in that case.






[jira] [Commented] (CASSANDRA-14991) SSL Cert Hot Reloading should check for sanity of the new keystore/truststore before loading it

2019-01-23 Thread Dinesh Joshi (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750421#comment-16750421
 ] 

Dinesh Joshi commented on CASSANDRA-14991:
--

You're right Ariel. I have pushed an update. Hopefully that fixes the confusion 
:)




[jira] [Updated] (CASSANDRA-14922) In JVM dtests need to clean up after instance shutdown

2019-01-23 Thread Joseph Lynch (JIRA)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-14922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Lynch updated CASSANDRA-14922:
-
Attachment: LeakedNativeMemory.png

> In JVM dtests need to clean up after instance shutdown
> --
>
> Key: CASSANDRA-14922
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14922
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
> Fix For: 4.0
>
> Attachments: AllThreadsStopped.png, ClassLoadersRetaining.png, 
> LeakedNativeMemory.png, Leaking_Metrics_On_Shutdown.png, 
> MainClassRetaining.png, MemoryReclaimedFix.png, 
> Metaspace_Actually_Collected.png, OnlyThreeRootsLeft.png, 
> no_more_references.png
>
>
> Currently the unit tests are failing on circleci ([example 
> one|https://circleci.com/gh/jolynch/cassandra/300#tests/containers/1], 
> [example 
> two|https://circleci.com/gh/rustyrazorblade/cassandra/44#tests/containers/1]) 
> because we use a small container (medium) for unit tests by default and the 
> in JVM dtests are leaking a few hundred megabytes of memory per test right 
> now. This is not a big deal because the dtest runs with the larger containers 
> continue to function fine, as does local testing, since the number of in-JVM 
> dtests is not yet high enough to cause a problem with more than 2GB of 
> available heap. However, we should fix the memory leak so that going forward 
> we can add more in-JVM dtests without worry.
> I've been working with [~ifesdjeen] to debug, and the issue appears to be 
> unreleased Table/Keyspace metrics (screenshot showing the leak attached). I 
> believe that we have a few potential issues that are leading to the leaks:
> 1. The 
> [{{Instance::shutdown}}|https://github.com/apache/cassandra/blob/f22fec927de7ac29120c2f34de5b8cc1c695/test/distributed/org/apache/cassandra/distributed/Instance.java#L328-L354]
>  method is not successfully cleaning up all the metrics created by the 
> {{CassandraMetricsRegistry}}
>  2. The 
> [{{TestCluster::close}}|https://github.com/apache/cassandra/blob/f22fec927de7ac29120c2f34de5b8cc1c695/test/distributed/org/apache/cassandra/distributed/TestCluster.java#L283]
>  method is not waiting for all the instances to finish shutting down and 
> cleaning up before continuing on
> 3. I'm not sure if this is an issue assuming we clear all metrics, but 
> [{{TableMetrics::release}}|https://github.com/apache/cassandra/blob/4ae229f5cd270c2b43475b3f752a7b228de260ea/src/java/org/apache/cassandra/metrics/TableMetrics.java#L951]
>  does not release all the metric references (which could leak them)
> I am working on a patch which shuts down everything and assures that we do 
> not leak memory.
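
A minimal sketch of the metrics side of that cleanup, assuming only the 
Dropwizard metrics API that {{CassandraMetricsRegistry}} wraps: dropping every 
metric an instance registered prevents table/keyspace metrics from retaining 
the instance after shutdown.

{code}
import com.codahale.metrics.MetricFilter;
import com.codahale.metrics.MetricRegistry;

final class MetricsCleanupSketch
{
    static void releaseAll(MetricRegistry registry)
    {
        // MetricFilter.ALL drops everything; a name-prefix filter would
        // scope the removal to a single instance's metrics.
        registry.removeMatching(MetricFilter.ALL);
    }
}
{code}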






[jira] [Commented] (CASSANDRA-14922) In JVM dtests need to clean up after instance shutdown

2019-01-23 Thread Joseph Lynch (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750488#comment-16750488
 ] 

Joseph Lynch commented on CASSANDRA-14922:
--

I'm getting trunk failures again as of e871903d, and after an [IRC 
discussion|https://wilderness.apache.org/channels/?f=cassandra-dev/2019-01-23] 
with Benedict it looks like we may be leaking:
1. Off heap memory via some combination of the {{HintsBuffer}}, {{CommitLogs}} 
and the {{BufferPool}}
2. File descriptors are potentially leaked and it's unclear that we clean those 
up

What is odd is that according to a profiler attached while running one of the 
dtests in a for loop, most of the leaked native memory is either pending 
finalization or unreachable from GC roots:

 !LeakedNativeMemory.png! 

Afaict both {{HintsBuffer}} and {{CommitLogs}} should be getting cleaned in the 
{{Instance::shutdown}} methods, although I don't think we clean the 
{{BufferPool}}.

Continuing to investigate this so that we can have green runs on trunk again.
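
A minimal, hypothetical sketch of the missing {{BufferPool}} cleanup step, 
assuming Cassandra's {{FileUtils.clean}} helper for releasing direct buffers; 
the buffer list here is a stand-in, not the actual pool API.

{code}
import java.nio.ByteBuffer;
import java.util.List;

import org.apache.cassandra.io.util.FileUtils;

final class NativeMemoryCleanupSketch
{
    // Eagerly free pooled direct buffers at instance shutdown instead of
    // leaving the native allocations to finalization.
    static void freeOnShutdown(List<ByteBuffer> pooledBuffers)
    {
        for (ByteBuffer buf : pooledBuffers)
            if (buf.isDirect())
                FileUtils.clean(buf);
        pooledBuffers.clear();
    }
}
{code}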




[jira] [Comment Edited] (CASSANDRA-14922) In JVM dtests need to clean up after instance shutdown

2019-01-23 Thread Joseph Lynch (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750488#comment-16750488
 ] 

Joseph Lynch edited comment on CASSANDRA-14922 at 1/23/19 10:43 PM:


I'm getting trunk failures again as of e871903d, and after an [IRC 
discussion|https://wilderness.apache.org/channels/?f=cassandra-dev/2019-01-23] 
with Benedict it looks like we may be leaking:
1. Off heap memory via some combination of the {{HintsBuffer}}, {{CommitLogs}} 
and the {{BufferPool}}
2. File descriptors are potentially leaked and it's unclear that we clean those 
up

What is odd is that according to a profiler attached while running one of the 
dtests in a for loop, most of the leaked native memory is either pending 
finalization or unreachable from GC roots:

 !LeakedNativeMemory.png! 

Afaict both {{HintsBuffer}} and {{CommitLogs}} should be getting cleaned in the 
{{Instance::shutdown}} methods, although I don't think we clean the 
{{BufferPool}}.

Continuing to investigate this so that we can have green runs on trunk again.


was (Author: jolynch):
I'm getting trunk failures again as of, and after an [IRC 
discussion|https://wilderness.apache.org/channels/?f=cassandra-dev/2019-01-23] 
with Benedict it looks like we may be leaking:
1. Off heap memory via some combination of the {{HintsBuffer}}, {{CommitLogs}} 
and the {{BufferPool}}
2. File descriptors are potentially leaked and it's unclear that we clean those 
up

What is odd is that according to a profiler attached while running one of the 
dtests in a for loop, most of the leaked native memory is either pending 
finalization or unreachable from GC roots:

 !LeakedNativeMemory.png! 

Afaict both {{HintsBuffer}} and {{CommitLogs}} should be getting cleaned in the 
{{Instance::shutdown}} methods, although I don't think we clean the 
{{BufferPool}}.

Continuing to investigate this so that we can have green runs on trunk again.

> In JVM dtests need to clean up after instance shutdown
> --
>
> Key: CASSANDRA-14922
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14922
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
> Fix For: 4.0
>
> Attachments: AllThreadsStopped.png, ClassLoadersRetaining.png, 
> LeakedNativeMemory.png, Leaking_Metrics_On_Shutdown.png, 
> MainClassRetaining.png, MemoryReclaimedFix.png, 
> Metaspace_Actually_Collected.png, OnlyThreeRootsLeft.png, 
> no_more_references.png
>
>
> Currently the unit tests are failing on circleci ([example 
> one|https://circleci.com/gh/jolynch/cassandra/300#tests/containers/1], 
> [example 
> two|https://circleci.com/gh/rustyrazorblade/cassandra/44#tests/containers/1]) 
> because we use a small container (medium) for unit tests by default and the 
> in JVM dtests are leaking a few hundred megabytes of memory per test right 
> now. This is not a big deal, because the dtest runs with the larger 
> containers continue to function fine, as does local testing, since the 
> number of in-JVM dtests is not yet high enough to cause a problem with more 
> than 2GB of available heap. However, we should fix the memory leak so that 
> going forward we can add more in-JVM dtests without worry.
> I've been working with [~ifesdjeen] to debug, and the issue appears to be 
> unreleased Table/Keyspace metrics (screenshot showing the leak attached). I 
> believe that we have a few potential issues that are leading to the leaks:
> 1. The 
> [{{Instance::shutdown}}|https://github.com/apache/cassandra/blob/f22fec927de7ac29120c2f34de5b8cc1c695/test/distributed/org/apache/cassandra/distributed/Instance.java#L328-L354]
>  method is not successfully cleaning up all the metrics created by the 
> {{CassandraMetricsRegistry}}
>  2. The 
> [{{TestCluster::close}}|https://github.com/apache/cassandra/blob/f22fec927de7ac29120c2f34de5b8cc1c695/test/distributed/org/apache/cassandra/distributed/TestCluster.java#L283]
>  method is not waiting for all the instances to finish shutting down and 
> cleaning up before continuing on
> 3. I'm not sure if this is an issue assuming we clear all metrics, but 
> [{{TableMetrics::release}}|https://github.com/apache/cassandra/blob/4ae229f5cd270c2b43475b3f752a7b228de260ea/src/java/org/apache/cassandra/metrics/TableMetrics.java#L951]
>  does not release all the metric references (which could leak them)
> I am working on a patch which shuts down everything and ensures that we do 
> not leak memory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org

[jira] [Comment Edited] (CASSANDRA-14922) In JVM dtests need to clean up after instance shutdown

2019-01-23 Thread Joseph Lynch (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750488#comment-16750488
 ] 

Joseph Lynch edited comment on CASSANDRA-14922 at 1/23/19 11:29 PM:


I'm getting trunk failures again as of e871903d, and after an [IRC 
discussion|https://wilderness.apache.org/channels/?f=cassandra-dev/2019-01-23] 
with Benedict it looks like we may be leaking:
 1. Off heap memory via some combination of the {{HintsBuffer}}, {{CommitLogs}} 
and the {{BufferPool}}
 2. File descriptors are potentially leaked and it's unclear that we clean 
those up

What is odd is that according to a profiler attached while running one of the 
dtests in a for loop, most of the leaked native memory is either pending 
finalization or unreachable from GC roots:

!LeakedNativeMemory.png!

Afaict both {{HintsBuffer}} and {{CommitLogs}} should be getting cleaned in the 
{{Instance::shutdown}} methods, although I don't think we clean the 
{{BufferPool}}.

Continuing to investigate this so that we can have green runs on trunk again.

*Edit:*

The test I'm running is applying this diff:
{noformat}
--- a/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest.java
+++ b/test/distributed/org/apache/cassandra/distributed/DistributedReadWritePathTest.java
@@ -27,6 +27,15 @@ import static org.apache.cassandra.net.MessagingService.Verb.READ_REPAIR;
 
 public class DistributedReadWritePathTest extends DistributedTestBase
 {
+    @Test
+    public void manyCoordinatedReads() throws Throwable
+    {
+        for (int i = 0; i < 20; i++)
+        {
+            coordinatorRead();
+        }
+    }
+
     @Test
     public void coordinatorRead() throws Throwable
     {
{noformat}
Then I run the test suite and measure OS memory usage. For example, if I rewind 
trunk to the first patch (3dcde082), I see only 1.6GB allocated:
{noformat}
/usr/bin/time -f "mem=%K RSS=%M elapsed=%E cpu.sys=%S .user=%U" ant 
testclasslist -Dtest.classlistfile=/tmp/java_dtests_1_final.txt 
-Dtest.classlistprefix=distributed

testclasslist:
 [echo] Number of test runners: 1
[junit-timeout] Testsuite: 
org.apache.cassandra.distributed.DistributedReadWritePathTest
[junit-timeout] Testsuite: 
org.apache.cassandra.distributed.DistributedReadWritePathTest Tests run: 7, 
Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 103.473 sec
[junit-timeout] 
[junitreport] Processing 
/home/josephl/pg/cassandra/build/test/TESTS-TestSuites.xml to /tmp/null554485593
[junitreport] Loading stylesheet 
jar:file:/usr/share/ant/lib/ant-junit.jar!/org/apache/tools/ant/taskdefs/optional/junit/xsl/junit-frames.xsl
[junitreport] Transform time: 269ms
[junitreport] Deleting: /tmp/null554485593

BUILD SUCCESSFUL
Total time: 1 minute 47 seconds
mem=0 RSS=1606332 elapsed=1:47.37 cpu.sys=10.95 .user=159.99
{noformat}
But if I use the latest trunk (e871903d), I get 5GB:
{noformat}
testclasslist:
 [echo] Number of test runners: 1
[mkdir] Created dir: /home/josephl/pg/cassandra/build/test/cassandra
[mkdir] Created dir: /home/josephl/pg/cassandra/build/test/output
[junit-timeout] Testsuite: 
org.apache.cassandra.distributed.DistributedReadWritePathTest
[junit-timeout] Testsuite: 
org.apache.cassandra.distributed.DistributedReadWritePathTest Tests run: 7, 
Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 103.098 sec
[junit-timeout] 
[junitreport] Processing 
/home/josephl/pg/cassandra/build/test/TESTS-TestSuites.xml to /tmp/null126756458
[junitreport] Loading stylesheet 
jar:file:/usr/share/ant/lib/ant-junit.jar!/org/apache/tools/ant/taskdefs/optional/junit/xsl/junit-frames.xsl
[junitreport] Transform time: 330ms
[junitreport] Deleting: /tmp/null126756458

BUILD SUCCESSFUL
Total time: 2 minutes 15 seconds
mem=0 RSS=4962924 elapsed=2:15.93 cpu.sys=16.55 .user=284.28
{noformat}
Since the heap is 1GB, and we allocate about 256MB for the off-heap metaspace, 
1.6GB is much closer to what we expect than 5GB.

So, something about the new executor system may be contributing. Continuing 
to dig in.
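
As a cross-check, a minimal sketch (Linux only, illustrative; {{RssProbe}} is a 
made-up helper) of reading this JVM's resident set size from 
{{/proc/self/status}}, so RSS can be logged per iteration instead of only once 
at the end via {{/usr/bin/time}}:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Read VmRSS from /proc/self/status; returns the resident set size in
// kB, or -1 if the field is not found (e.g. non-Linux platforms).
public final class RssProbe
{
    public static long rssKb() throws IOException
    {
        for (String line : Files.readAllLines(Paths.get("/proc/self/status")))
        {
            if (line.startsWith("VmRSS:"))
                return Long.parseLong(line.replaceAll("[^0-9]", ""));
        }
        return -1;
    }
}
{code}
Logging {{rssKb()}} inside the {{manyCoordinatedReads}} loop would show whether 
the growth is roughly linear per iteration (a true per-instance leak) or a 
one-off cost.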


was (Author: jolynch):
I'm getting trunk failures again as of e871903d, and after an [IRC 
discussion|https://wilderness.apache.org/channels/?f=cassandra-dev/2019-01-23] 
with Benedict it looks like we may be leaking:
1. Off heap memory via some combination of the {{HintsBuffer}}, {{CommitLogs}} 
and the {{BufferPool}}
2. File descriptors are potentially leaked and it's unclear that we clean those 
up

What is odd is that according to a profiler attached while running one of the 
dtests in a for loop, most of the leaked native memory is either pending 
finalization or unreachable from GC roots:

 !LeakedNativeMemory.png! 

Afaict both {{HintsBuffer}} and {{CommitLogs}} should be getting cleaned in the 
{{Instance::shutdown}} methods, although I don't think we clean the 
{{BufferPool}}.

Continuing to investigate this so that we can have green runs on trunk again.

[jira] [Comment Edited] (CASSANDRA-14922) In JVM dtests need to clean up after instance shutdown

2019-01-23 Thread Joseph Lynch (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750488#comment-16750488
 ] 

Joseph Lynch edited comment on CASSANDRA-14922 at 1/23/19 10:39 PM:


I'm getting trunk failures again as of, and after an [IRC 
discussion|https://wilderness.apache.org/channels/?f=cassandra-dev/2019-01-23] 
with Benedict it looks like we may be leaking:
1. Off heap memory via some combination of the {{HintsBuffer}}, {{CommitLogs}} 
and the {{BufferPool}}
2. File descriptors are potentially leaked and it's unclear that we clean those 
up

What is odd is that according to a profiler attached while running one of the 
dtests in a for loop, most of the leaked native memory is either pending 
finalization or unreachable from GC roots:

 !LeakedNativeMemory.png! 

Afaict both {{HintsBuffer}} and {{CommitLogs}} should be getting cleaned in the 
{{Instance::shutdown}} methods, although I don't think we clean the 
{{BufferPool}}.

Continuing to investigate this so that we can have green runs on trunk again.


was (Author: jolynch):
I'm getting trunk failures again as of, and after an [IRC 
discussion|https://wilderness.apache.org/channels/?f=cassandra-dev/2019-01-23] 
with Benedict it looks like we may be leaking:
1. Off heap memory via some combination of the {{HintsBuffer}}, {{CommitLogs}} 
and the {{BufferPool}}
2. File descriptors are potentially leaked and it's unclear that we clean those 
up

What is odd is that according to a profiler attached while running one of the 
dtests in a for loop, most of the leaked native memory is either pending 
finalization or unreachable from GC roots:

 !LeakedNativeMemory.png! 

Afaict both {{HintsBuffer}} and {{CommitLogs}} should be getting cleaned in the 
{{Instance::shutdown}} methods, although I don't think we clean the 
{{BufferPool}}.

Continuing to investigate this so that we can have green runs on trunk again.

> In JVM dtests need to clean up after instance shutdown
> --
>
> Key: CASSANDRA-14922
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14922
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest
>Reporter: Joseph Lynch
>Assignee: Joseph Lynch
>Priority: Minor
> Fix For: 4.0
>
> Attachments: AllThreadsStopped.png, ClassLoadersRetaining.png, 
> LeakedNativeMemory.png, Leaking_Metrics_On_Shutdown.png, 
> MainClassRetaining.png, MemoryReclaimedFix.png, 
> Metaspace_Actually_Collected.png, OnlyThreeRootsLeft.png, 
> no_more_references.png
>
>
> Currently the unit tests are failing on circleci ([example 
> one|https://circleci.com/gh/jolynch/cassandra/300#tests/containers/1], 
> [example 
> two|https://circleci.com/gh/rustyrazorblade/cassandra/44#tests/containers/1]) 
> because we use a small container (medium) for unit tests by default and the 
> in JVM dtests are leaking a few hundred megabytes of memory per test right 
> now. This is not a big deal, because the dtest runs with the larger 
> containers continue to function fine, as does local testing, since the 
> number of in-JVM dtests is not yet high enough to cause a problem with more 
> than 2GB of available heap. However, we should fix the memory leak so that 
> going forward we can add more in-JVM dtests without worry.
> I've been working with [~ifesdjeen] to debug, and the issue appears to be 
> unreleased Table/Keyspace metrics (screenshot showing the leak attached). I 
> believe that we have a few potential issues that are leading to the leaks:
> 1. The 
> [{{Instance::shutdown}}|https://github.com/apache/cassandra/blob/f22fec927de7ac29120c2f34de5b8cc1c695/test/distributed/org/apache/cassandra/distributed/Instance.java#L328-L354]
>  method is not successfully cleaning up all the metrics created by the 
> {{CassandraMetricsRegistry}}
>  2. The 
> [{{TestCluster::close}}|https://github.com/apache/cassandra/blob/f22fec927de7ac29120c2f34de5b8cc1c695/test/distributed/org/apache/cassandra/distributed/TestCluster.java#L283]
>  method is not waiting for all the instances to finish shutting down and 
> cleaning up before continuing on
> 3. I'm not sure if this is an issue assuming we clear all metrics, but 
> [{{TableMetrics::release}}|https://github.com/apache/cassandra/blob/4ae229f5cd270c2b43475b3f752a7b228de260ea/src/java/org/apache/cassandra/metrics/TableMetrics.java#L951]
>  does not release all the metric references (which could leak them)
> I am working on a patch which shuts down everything and ensures that we do 
> not leak memory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org

[jira] [Commented] (CASSANDRA-14989) NullPointerException when SELECTing token() on only one part of a two-part partition key

2019-01-23 Thread Dinesh Joshi (JIRA)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16750609#comment-16750609
 ] 

Dinesh Joshi commented on CASSANDRA-14989:
--

[~iamaleksey] based on your feedback, I have changed the patch. It now not only 
fixes the NPE but also finds the closest matching function when the same 
function name exists in both the current and the system keyspace. I've added a 
test to capture this behavior so we won't regress in the future.
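
Roughly, the resolution rule the patch aims for can be sketched like this (a 
hypothetical illustration only, not the actual Cassandra code; {{Candidate}} 
and {{resolve}} are made-up names):
{code:java}
import java.util.List;
import java.util.Optional;

// When the same function name exists both in the statement's current
// keyspace and in the system keyspace, prefer the current keyspace and
// only fall back to the system one.
final class FunctionResolution
{
    static final class Candidate
    {
        final String keyspace;
        final String name;

        Candidate(String keyspace, String name)
        {
            this.keyspace = keyspace;
            this.name = name;
        }
    }

    static Optional<Candidate> resolve(List<Candidate> candidates, String currentKeyspace)
    {
        // Closest match first: a function in the statement's keyspace...
        Optional<Candidate> local = candidates.stream()
                                              .filter(c -> c.keyspace.equals(currentKeyspace))
                                              .findFirst();
        if (local.isPresent())
            return local;
        // ...otherwise fall back to the system keyspace.
        return candidates.stream()
                         .filter(c -> "system".equals(c.keyspace))
                         .findFirst();
    }
}
{code}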

> NullPointerException when SELECTing token() on only one part of a two-part 
> partition key
> 
>
> Key: CASSANDRA-14989
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14989
> Project: Cassandra
>  Issue Type: Bug
>  Components: CQL/Interpreter
> Environment: Using {{cqlsh 5.0.1}} on a Mac OS X host system with 
> Cassandra 3.11.3 running via Docker for Mac from the official 
> {{cassandra:3.11.3}} image.
>Reporter: Manuel Kießling
>Assignee: Dinesh Joshi
>Priority: Minor
> Fix For: 4.0, 3.0.x, 3.11.x
>
>
> I have the following schema:
> {code}
> CREATE TABLE query_tests.cart_snapshots (
> cart_id uuid,
> realm text,
> snapshot_id timeuuid,
> state text,
> PRIMARY KEY ((cart_id, realm), snapshot_id)
> ) WITH CLUSTERING ORDER BY (snapshot_id DESC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
> AND comment = ''
> AND compaction = {'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32', 'min_threshold': '4'}
> AND compression = {'chunk_length_in_kb': '64', 'class': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND crc_check_chance = 1.0
> AND dclocal_read_repair_chance = 0.1
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.0
> AND speculative_retry = '99PERCENTILE';
> {code}
> In cqlsh, I try the following query:
> {code}select token(cart_id) from cart_snapshots;{code}
> This results in cqlsh returning {{ServerError: 
> java.lang.NullPointerException}}, and the following error in the server log:
> {code}
> DC1N1_1  | ERROR [Native-Transport-Requests-1] 2019-01-16 12:17:52,075 
> QueryMessage.java:129 - Unexpected error during query
> DC1N1_1  | java.lang.NullPointerException: null
> DC1N1_1  |   at 
> org.apache.cassandra.db.marshal.CompositeType.build(CompositeType.java:356) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.db.marshal.CompositeType.build(CompositeType.java:349) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.config.CFMetaData.serializePartitionKey(CFMetaData.java:805)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.functions.TokenFct.execute(TokenFct.java:59) 
> ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.selection.ScalarFunctionSelector.getOutput(ScalarFunctionSelector.java:61)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.selection.Selection$SelectionWithProcessing$1.getOutputRow(Selection.java:666)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.selection.Selection$ResultSetBuilder.getOutputRow(Selection.java:492)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.selection.Selection$ResultSetBuilder.newRow(Selection.java:458)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processPartition(SelectStatement.java:860)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:790)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:438)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:416)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:289)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:117)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]
> DC1N1_1  |   at 
> org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:224)
>  ~[apache-cassandra-3.11.3.jar:3.11.3]

[jira] [Created] (CASSANDRA-14996) Incremental backups can fill up the disk if they are not being uploaded

2019-01-23 Thread Shaurya Gupta (JIRA)
Shaurya Gupta created CASSANDRA-14996:
-

 Summary: Incremental backups can fill up the disk if they are not 
being uploaded
 Key: CASSANDRA-14996
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14996
 Project: Cassandra
  Issue Type: Improvement
Reporter: Shaurya Gupta


This creates a major problem if the script that uploads the snapshots is 
triggered via some API and the application that triggers it is somehow down 
and unable to hit the API.

Cassandra could check, before creating a snapshot, that the disk has a 
sufficient (configurable) amount of free space.
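
A minimal sketch of the kind of guard being proposed (illustrative only; 
{{SnapshotGuard}} is a made-up name, and the threshold and data directory 
would come from configuration in a real implementation):
{code:java}
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Refuse to take a snapshot unless the data volume still has at least
// minFreeBytes of usable space.
final class SnapshotGuard
{
    static void ensureFreeSpace(Path dataDir, long minFreeBytes) throws IOException
    {
        FileStore store = Files.getFileStore(dataDir);
        long free = store.getUsableSpace();
        if (free < minFreeBytes)
            throw new IOException("Refusing snapshot: only " + free + " bytes free on "
                                  + dataDir + ", need at least " + minFreeBytes);
    }

    public static void main(String[] args) throws IOException
    {
        // Example: require 10GB free before snapshotting.
        ensureFreeSpace(Paths.get("/var/lib/cassandra/data"), 10L << 30);
    }
}
{code}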



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org