[jira] [Commented] (CASSANDRA-20000) Add support for Role's OPTIONS

2024-10-29 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-20000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17893922#comment-17893922
 ] 

Jeremiah Jordan commented on CASSANDRA-20000:
-

Again, that is for CassandraRoleManager. Any custom role manager that 
implements support for the options would do its own caching if needed.
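For illustration, a minimal sketch of what such a downstream role manager could 
look like (hypothetical class; it assumes Cassandra's IRoleManager API with 
RoleOptions#getCustomOptions(), so treat the exact signatures as approximate):

{code:java}
// Hypothetical sketch, not part of Cassandra: a custom role manager that
// consumes the OPTIONS map and keeps its own cache of custom options,
// since CassandraRoleManager itself ignores them.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.cassandra.auth.AuthenticatedUser;
import org.apache.cassandra.auth.CassandraRoleManager;
import org.apache.cassandra.auth.RoleOptions;
import org.apache.cassandra.auth.RoleResource;
import org.apache.cassandra.exceptions.RequestExecutionException;
import org.apache.cassandra.exceptions.RequestValidationException;

public class OptionsAwareRoleManager extends CassandraRoleManager
{
    // role name -> custom options, maintained entirely by this manager
    private final Map<String, Map<String, String>> optionsCache = new ConcurrentHashMap<>();

    @Override
    public void createRole(AuthenticatedUser performer, RoleResource role, RoleOptions options)
    throws RequestValidationException, RequestExecutionException
    {
        super.createRole(performer, role, options);
        options.getCustomOptions()
               .ifPresent(custom -> optionsCache.put(role.getRoleName(), custom));
    }
}
{code}

Pointing the {{role_manager}} setting in cassandra.yaml at such a class is how 
the options would get consumed.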

> Add support for Role's OPTIONS
> --
>
> Key: CASSANDRA-20000
> URL: https://issues.apache.org/jira/browse/CASSANDRA-20000
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Semantics, Feature/Authorization
>Reporter: Tiago L. Alves
>Assignee: Tiago L. Alves
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The Cassandra Query Language 
> [https://cassandra.apache.org/doc/stable/cassandra/cql/security.html] / 
> [https://cassandra.apache.org/doc/5.0/cassandra/developing/cql/security.html] 
> specifies that a role can have custom options defined as a literal map.
> The documentation shows a valid example of these custom options:
> {{CREATE ROLE carlos WITH OPTIONS = \{ 'custom_option1' : 'option1_value', 
> 'custom_option2' : 99 };}}
> However, the storage/retrieval of such custom options has not been 
> implemented in Cassandra. See, for instance, 
> [https://github.com/apache/cassandra/blob/18960d6e3443bf002ef4f46c7f0e1f2ee99734e1/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L393-L396]
> Storing custom options per role could have multiple uses; for instance, it 
> could allow admins to specify fine-grained permissions that can be interpreted 
> by a custom authenticator/authorizer.
> The goal of this task is to add support for Role custom options by storing 
> them in an additional table called {{role_options}} in the {{system_auth}} 
> keyspace.
> Creating a role with options should write the information to both the 
> {{roles}} and the {{role_options}} tables. Creating a role with no options, or 
> with an empty map of options, should not write any information to the 
> {{role_options}} table.
> Altering a role should behave as follows when executing an {{ALTER ROLE}} 
> statement:
>  * without specifying {{OPTIONS}}: no changes should be made to the 
> {{role_options}} table.
>  * specifying {{OPTIONS}} on a role with no previous custom options: we 
> should insert the custom options into the {{role_options}} table.
>  * specifying {{OPTIONS}} on a role with previous custom options: we should 
> replace the existing custom options in the {{role_options}} table.
> Dropping a role should drop information from both {{roles}} and 
> {{role_options}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-20000) Add support for Role's OPTIONS

2024-10-29 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-20000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17893914#comment-17893914
 ] 

Jeremiah Jordan commented on CASSANDRA-20000:
-

We do support this. It is supported when using a custom role manager that does 
something with the options.  If you want to change the docs, then you should 
just add that how OPTIONS are used is determined by your role manager, and 
that not all role managers use them.

> Add support for Role's OPTIONS
> --
>
> Key: CASSANDRA-20000
> URL: https://issues.apache.org/jira/browse/CASSANDRA-20000
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Semantics, Feature/Authorization
>Reporter: Tiago L. Alves
>Assignee: Tiago L. Alves
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The Cassandra Query Language 
> [https://cassandra.apache.org/doc/stable/cassandra/cql/security.html] / 
> [https://cassandra.apache.org/doc/5.0/cassandra/developing/cql/security.html] 
> specifies that a role can have custom options defined as a literal map.
> The documentation shows a valid example of these custom options:
> {{CREATE ROLE carlos WITH OPTIONS = \{ 'custom_option1' : 'option1_value', 
> 'custom_option2' : 99 };}}
> However, the storage/retrieval of such custom options has not been 
> implemented in Cassandra. See, for instance, 
> [https://github.com/apache/cassandra/blob/18960d6e3443bf002ef4f46c7f0e1f2ee99734e1/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L393-L396]
> Storing custom options per role could have multiple uses; for instance, it 
> could allow admins to specify fine-grained permissions that can be interpreted 
> by a custom authenticator/authorizer.
> The goal of this task is to add support for Role custom options by storing 
> them in an additional table called {{role_options}} in the {{system_auth}} 
> keyspace.
> Creating a role with options should write the information to both the 
> {{roles}} and the {{role_options}} tables. Creating a role with no options, or 
> with an empty map of options, should not write any information to the 
> {{role_options}} table.
> Altering a role should behave as follows when executing an {{ALTER ROLE}} 
> statement:
>  * without specifying {{OPTIONS}}: no changes should be made to the 
> {{role_options}} table.
>  * specifying {{OPTIONS}} on a role with no previous custom options: we 
> should insert the custom options into the {{role_options}} table.
>  * specifying {{OPTIONS}} on a role with previous custom options: we should 
> replace the existing custom options in the {{role_options}} table.
> Dropping a role should drop information from both {{roles}} and 
> {{role_options}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-20000) Add support for Role's OPTIONS

2024-10-25 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-20000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17892523#comment-17892523
 ] 

Jeremiah Jordan edited comment on CASSANDRA-20000 at 10/24/24 3:14 PM:
---

{quote}Interesting, you think we can provide a patch that does less, leaving 
the system table to be entirely done downstream …?
{quote}
I think this patch is entirely not needed at the moment.  If there were to be a 
feature added to the CassandraRoleManager that used the options, then we would 
want something similar to this patch included with it.

Currently the OPTIONS work as intended when they were added.  They are passed 
through to the role manager specified in the yaml.  If the role manager wants 
to save them/do something with them then it can.  They are there for custom 
downstream role manager implementations to do what they want with.

Since the CassandraRoleManager does not do anything with them, I do not think 
it should start saving them.


was (Author: jjordan):
{quote}Interesting, you think we can provide a patch that does less, leaving 
the system table to be entirely done downstream …?
{quote}
I think this patch is entirely not needed at the moment.  If there were to be a 
feature added to the CassandraRoleManager that used the options, then we would 
want something similar to this patch included with it.

Currently the OPTIONS work as intended when they were added.  They are passed 
through to the role manager specified in the yaml.  If the role manager wants 
to save them/do something with them then it can.  They are there for custom 
downstream role manager implementations to do what they want with.

Since the CassandraRoleManager does not do anything with them, I do not think 
it should start saving.

> Add support for Role's OPTIONS
> --
>
> Key: CASSANDRA-20000
> URL: https://issues.apache.org/jira/browse/CASSANDRA-20000
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Semantics, Feature/Authorization
>Reporter: Tiago L. Alves
>Assignee: Tiago L. Alves
>Priority: Normal
> Fix For: 5.0.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Cassandra Query Language 
> [https://cassandra.apache.org/doc/stable/cassandra/cql/security.html] / 
> [https://cassandra.apache.org/doc/5.0/cassandra/developing/cql/security.html] 
> specifies that a role can have custom options defined as a literal map.
> The documentation shows a valid example of these custom options:
> {{CREATE ROLE carlos WITH OPTIONS = \{ 'custom_option1' : 'option1_value', 
> 'custom_option2' : 99 };}}
> However, the storage/retrieval of such custom options has not been 
> implemented in Cassandra. See, for instance, 
> [https://github.com/apache/cassandra/blob/18960d6e3443bf002ef4f46c7f0e1f2ee99734e1/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L393-L396]
> Storing custom options per role could have multiple uses; for instance, it 
> could allow admins to specify fine-grained permissions that can be interpreted 
> by a custom authenticator/authorizer.
> The goal of this task is to add support for Role custom options by storing 
> them in an additional table called {{role_options}} in the {{system_auth}} 
> keyspace.
> Creating a role with options should write the information to both the 
> {{roles}} and the {{role_options}} tables. Creating a role with no options, or 
> with an empty map of options, should not write any information to the 
> {{role_options}} table.
> Altering a role should behave as follows when executing an {{ALTER ROLE}} 
> statement:
>  * without specifying {{OPTIONS}}: no changes should be made to the 
> {{role_options}} table.
>  * specifying {{OPTIONS}} on a role with no previous custom options: we 
> should insert the custom options into the {{role_options}} table.
>  * specifying {{OPTIONS}} on a role with previous custom options: we should 
> replace the existing custom options in the {{role_options}} table.
> Dropping a role should drop information from both {{roles}} and 
> {{role_options}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-20000) Add support for Role's OPTIONS

2024-10-25 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-20000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17892523#comment-17892523
 ] 

Jeremiah Jordan commented on CASSANDRA-20000:
-

{quote}Interesting, you think we can provide a patch that does less, leaving 
the system table to be entirely done downstream …?
{quote}
I think this patch is entirely not needed at the moment.  If there were to be a 
feature added to the CassandraRoleManager that used the options, then we would 
want something similar to this patch included with it.

> Add support for Role's OPTIONS
> --
>
> Key: CASSANDRA-20000
> URL: https://issues.apache.org/jira/browse/CASSANDRA-20000
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Semantics, Feature/Authorization
>Reporter: Tiago L. Alves
>Assignee: Tiago L. Alves
>Priority: Normal
> Fix For: 5.0.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Cassandra Query Language 
> [https://cassandra.apache.org/doc/stable/cassandra/cql/security.html] / 
> [https://cassandra.apache.org/doc/5.0/cassandra/developing/cql/security.html] 
> specifies that a role can have custom options defined as a literal map.
> The documentation shows a valid example of these custom options:
> {{CREATE ROLE carlos WITH OPTIONS = \{ 'custom_option1' : 'option1_value', 
> 'custom_option2' : 99 };}}
> However, the storage/retrieval of such custom options has not been 
> implemented in Cassandra. See, for instance, 
> [https://github.com/apache/cassandra/blob/18960d6e3443bf002ef4f46c7f0e1f2ee99734e1/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L393-L396]
> Storing custom options per role could have multiple uses; for instance, it 
> could allow admins to specify fine-grained permissions that can be interpreted 
> by a custom authenticator/authorizer.
> The goal of this task is to add support for Role custom options by storing 
> them in an additional table called {{role_options}} in the {{system_auth}} 
> keyspace.
> Creating a role with options should write the information to both the 
> {{roles}} and the {{role_options}} tables. Creating a role with no options, or 
> with an empty map of options, should not write any information to the 
> {{role_options}} table.
> Altering a role should behave as follows when executing an {{ALTER ROLE}} 
> statement:
>  * without specifying {{OPTIONS}}: no changes should be made to the 
> {{role_options}} table.
>  * specifying {{OPTIONS}} on a role with no previous custom options: we 
> should insert the custom options into the {{role_options}} table.
>  * specifying {{OPTIONS}} on a role with previous custom options: we should 
> replace the existing custom options in the {{role_options}} table.
> Dropping a role should drop information from both {{roles}} and 
> {{role_options}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19949) Count performance regression in Cassandra 4.x

2024-10-24 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17892552#comment-17892552
 ] 

Jeremiah Jordan commented on CASSANDRA-19949:
-

I created two single-node clusters with compacted data, warmed up by multiple 
runs of count.  Looking at traces for 3.11.17 and 4.0.14, there just seems to be 
a slight performance degradation while each internal page is queried.  
Those small differences add up across all the pages needed to satisfy the query 
and then become significant.
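For a rough sense of scale, back-of-the-envelope arithmetic from the numbers in 
the description (assuming ~100k rows paged sequentially at fetch size 5000):

{noformat}
pages = 100000 rows / 5000 per page = 20 pages
3.11:  ~1.85 s / 20 pages ≈  90 ms per page
4.x:  ~13.4 s / 20 pages ≈ 670 ms per page
{noformat}

So even a few hundred extra milliseconds of work per page is enough to turn a 
2s query into a 13s one.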

> Count performance regression in Cassandra 4.x
> -
>
> Key: CASSANDRA-19949
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19949
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Romain Anselin
>Priority: Normal
> Fix For: 4.0.x, 4.1.x
>
> Attachments: cass311count.txt, cass311debugcount.txt, 
> cass311trace.txt, cass400trace.txt, cass41count.txt, cass41debugcount.txt, 
> cass41trace.txt, objcount-1.py
>
>
> Cassandra 4 exhibits a severe drop in performance on count operations.
> We created a reproduction workflow inserting 100k rows of 10kb random strings.
> After this data is inserted in a 3-node cluster at RF 3 and queried at LQ, a 
> count on said table takes
> - circa 2s on 3.11
> - consistently more than 10s on 4.0 and 4.1 (around 12 to 13s) - tested 
> 4.0.10 and 4.1.5
> Observations of the same program/query against each environment:
> 3.11
> {code:java}
> # COUNT #
> 61a5bcb0-75ca-11ef-9cff-55d571fe1347
> Row count:100000
> Count timing with fetch 5000: 0:00:01.846531
> Average row size: 1.0{code}
> 4.1
> {code:java}
> # COUNT #
> 55d79f60-75cb-11ef-a8be-399c3e257132
> Row count:100000
> Count timing with fetch 5000: 0:00:13.408626
> Average row size: 1.0{code}
> The UUID shown in the above output is the trace ID from execution of the 
> query, which is then exported from each cluster via the command below to 
> provide the cassXXtrace.txt files:
> {noformat}
> cqlsh -e show session [trace_id] | tee cassXXtrace.txt{noformat}
> The attached cass311trace.txt and cass41trace.txt show the associated 
> events from the above query.
> Note the issue is much more prevalent in a 3-node cluster (I have also tested 
> a single node on docker, where it is less visible).
> Attaching objcount.py, which contains 2 functions to insert and read the data. 
> The insert is pretty slow due to generating random 10kb junk objects but 
> allows reproduction. Just uncomment the gateway_insert call for it to 
> trigger the data insert.
> {code:java}
>     # gateway_insert(session, ks, tbl)
>     gateway_query(session, ks, tbl, fetch){code}
> Requires argparse and the cassandra driver.
> To use, run the following command. Consider uncommenting l. 40 and 41 for 
> ks/table creation and l. 155 for the insert workload:
> {code:java}
> python3 ./objcount.py -i  -k  -t {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-20000) Add support for Role's OPTIONS

2024-10-24 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-20000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17892523#comment-17892523
 ] 

Jeremiah Jordan edited comment on CASSANDRA-20000 at 10/24/24 3:13 PM:
---

{quote}Interesting, you think we can provide a patch that does less, leaving 
the system table to be entirely done downstream …?
{quote}
I think this patch is entirely not needed at the moment.  If there were to be a 
feature added to the CassandraRoleManager that used the options, then we would 
want something similar to this patch included with it.

Currently the OPTIONS work as intended when they were added.  They are passed 
through to the role manager specified in the yaml.  If the role manager wants 
to save them/do something with them then it can.  They are there for custom 
downstream role manager implementations to do what they want with.

Since the CassandraRoleManager does not do anything with them, I do not think 
it should start saving.


was (Author: jjordan):
{quote}Interesting, you think we can provide a patch that does less, leaving 
the system table to be entirely done downstream …?
{quote}
I think this patch is entirely not needed at the moment.  If there were to be a 
feature added to the CassandraRoleManager that used the options, then we would 
want something similar to this patch included with it.

> Add support for Role's OPTIONS
> --
>
> Key: CASSANDRA-20000
> URL: https://issues.apache.org/jira/browse/CASSANDRA-20000
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Semantics, Feature/Authorization
>Reporter: Tiago L. Alves
>Assignee: Tiago L. Alves
>Priority: Normal
> Fix For: 5.0.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Cassandra Query Language 
> [https://cassandra.apache.org/doc/stable/cassandra/cql/security.html] / 
> [https://cassandra.apache.org/doc/5.0/cassandra/developing/cql/security.html] 
> specifies that a role can have custom options defined as a literal map.
> The documentation shows a valid example of these custom options:
> {{CREATE ROLE carlos WITH OPTIONS = \{ 'custom_option1' : 'option1_value', 
> 'custom_option2' : 99 };}}
> However, the storage/retrieval of such custom options has not been 
> implemented in Cassandra. See, for instance, 
> [https://github.com/apache/cassandra/blob/18960d6e3443bf002ef4f46c7f0e1f2ee99734e1/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L393-L396]
> Storing custom options per role could have multiple uses; for instance, it 
> could allow admins to specify fine-grained permissions that can be interpreted 
> by a custom authenticator/authorizer.
> The goal of this task is to add support for Role custom options by storing 
> them in an additional table called {{role_options}} in the {{system_auth}} 
> keyspace.
> Creating a role with options should write the information to both the 
> {{roles}} and the {{role_options}} tables. Creating a role with no options, or 
> with an empty map of options, should not write any information to the 
> {{role_options}} table.
> Altering a role should behave as follows when executing an {{ALTER ROLE}} 
> statement:
>  * without specifying {{OPTIONS}}: no changes should be made to the 
> {{role_options}} table.
>  * specifying {{OPTIONS}} on a role with no previous custom options: we 
> should insert the custom options into the {{role_options}} table.
>  * specifying {{OPTIONS}} on a role with previous custom options: we should 
> replace the existing custom options in the {{role_options}} table.
> Dropping a role should drop information from both {{roles}} and 
> {{role_options}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-20000) Add support for Role's OPTIONS

2024-10-24 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-20000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17892477#comment-17892477
 ] 

Jeremiah Jordan commented on CASSANDRA-20000:
-

Until there is a feature that uses this table I don’t think 
CassandraRoleManager should be managing it. The CQL is there for downstream 
custom Auth implementations to be able to use.

> Add support for Role's OPTIONS
> --
>
> Key: CASSANDRA-20000
> URL: https://issues.apache.org/jira/browse/CASSANDRA-20000
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Semantics, Feature/Authorization
>Reporter: Tiago L. Alves
>Assignee: Tiago L. Alves
>Priority: Normal
> Fix For: 5.0.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The Cassandra Query Language 
> [https://cassandra.apache.org/doc/stable/cassandra/cql/security.html] / 
> [https://cassandra.apache.org/doc/5.0/cassandra/developing/cql/security.html] 
> specifies that a role can have custom options defined as a literal map.
> The documentation shows a valid example of these custom options:
> {{CREATE ROLE carlos WITH OPTIONS = \{ 'custom_option1' : 'option1_value', 
> 'custom_option2' : 99 };}}
> However, the storage/retrieval of such custom options has not been 
> implemented in Cassandra. See, for instance, 
> [https://github.com/apache/cassandra/blob/18960d6e3443bf002ef4f46c7f0e1f2ee99734e1/src/java/org/apache/cassandra/auth/CassandraRoleManager.java#L393-L396]
> Storing custom options per role could have multiple uses; for instance, it 
> could allow admins to specify fine-grained permissions that can be interpreted 
> by a custom authenticator/authorizer.
> The goal of this task is to add support for Role custom options by storing 
> them in an additional table called {{role_options}} in the {{system_auth}} 
> keyspace.
> Creating a role with options should write the information to both the 
> {{roles}} and the {{role_options}} tables. Creating a role with no options, or 
> with an empty map of options, should not write any information to the 
> {{role_options}} table.
> Altering a role should behave as follows when executing an {{ALTER ROLE}} 
> statement:
>  * without specifying {{OPTIONS}}: no changes should be made to the 
> {{role_options}} table.
>  * specifying {{OPTIONS}} on a role with no previous custom options: we 
> should insert the custom options into the {{role_options}} table.
>  * specifying {{OPTIONS}} on a role with previous custom options: we should 
> replace the existing custom options in the {{role_options}} table.
> Dropping a role should drop information from both {{roles}} and 
> {{role_options}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19979) Use internal buffer on streaming slow path

2024-10-04 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17886974#comment-17886974
 ] 

Jeremiah Jordan commented on CASSANDRA-19979:
-

I don't know where such a change would be made for either ticket, but given 
CASSANDRA-15452 is still OPEN, would it make more sense to update that ticket to 
include streaming as well, vs. having an entirely different ticket tracking the 
change for the streaming path?  I can only assume the change is basically the 
same in both places?

> Use internal buffer on streaming slow path
> --
>
> Key: CASSANDRA-19979
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19979
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jon Haddad
>Priority: Normal
> Attachments: image-2024-10-04-12-40-26-727.png
>
>
> CASSANDRA-15452 is introducing an internal buffer to compaction in order to 
> increase throughput while reducing IOPS.  We can do the same thing with our 
> streaming slow path.  There's a common misconception that the overhead comes 
> from serde, but I've found that on a lot of devices the overhead is due 
> to our read patterns. This is most commonly found on non-NVMe drives, 
> especially disaggregated storage such as EBS, where the latency is higher and 
> more variable.
> Attached is a perf profile showing the cost of streaming is dominated by 
> pread.  The team I was working with was seeing that they could stream only 12MB 
> per streaming session.  Reducing the number of read operations by using 
> internal buffered reads should improve this by at least 3-5x, as well as 
> reduce CPU overhead thanks to fewer system calls.
> !image-2024-10-04-12-40-26-727.png!
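As a rough illustration of the buffering idea (a generic Java sketch, not the 
actual streaming code; the class name and sizes are made up), a larger internal 
read buffer turns many small pread() calls into a few large ones:

{code:java}
// Sketch: sequentially read a data file through one reusable buffer.
// Each loop iteration costs one read() syscall, so a 64 KiB buffer issues
// roughly 16x fewer syscalls than a 4 KiB buffer for the same file.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BufferedStreamRead
{
    static long readCounting(Path file, int bufferSize) throws IOException
    {
        long syscalls = 0;
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ))
        {
            ByteBuffer buf = ByteBuffer.allocateDirect(bufferSize);
            while (ch.read(buf) != -1)   // one pread() per iteration
            {
                buf.flip();
                // ... hand the chunk to the network path here ...
                buf.clear();
                syscalls++;
            }
        }
        return syscalls;
    }
}
{code}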



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-19779) direct IO support is always evaluated to false upon the very first start of a node

2024-07-26 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-19779:

Fix Version/s: 5.0-rc2
   (was: 5.x)

> direct IO support is always evaluated to false upon the very first start of a 
> node
> --
>
> Key: CASSANDRA-19779
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19779
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Tools
>Reporter: Stefan Miklosovic
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 5.0-rc2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When I extract the distribution tarball and want to use the tools in 
> tools/bin, this warn log is visible every time a tool is started (it does 
> not happen on the "help" command, obviously):
> {code:java}
> WARN  14:25:11,835 Unable to determine block size for commit log directory: 
> null {code}
> This is because of what we introduced (1) in CASSANDRA-18464.
> What that does is go and try to create a temporary file in the commit log 
> directory to get the "block size" of the "file store" that file is in.
> The problem with that is that when we just extract a tarball and run the 
> tools - Cassandra was never started - the commit log directory does not 
> exist yet, so it tries to create a temporary file in a non-existent 
> directory, which fails, hence the log message.
> The fix is to check whether the commitlog dir exists and skip the resolution 
> of block size if it does not.
> Another approach might be to check whether we are executing in the context of 
> a tool and skip the resolution altogether. The problem with this is that not 
> all the tools we have in bin/ call 
> DatabaseDescriptor.toolInitialization(), so we might combine these two.
> (1) 
> [https://github.com/apache/cassandra/blob/cassandra-5.0/src/java/org/apache/cassandra/config/DatabaseDescriptor.java#L1455-L1462]
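The shape of the proposed guard, sketched outside the actual DatabaseDescriptor 
code (hypothetical names; only the exists-check-then-probe logic is the point):

{code:java}
// Sketch: skip block-size resolution when the commit log directory does not
// exist yet (e.g. a freshly extracted tarball running a tool before the node
// has ever started), instead of failing and logging the WARN.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class BlockSizeProbe
{
    static long commitLogBlockSize(Path commitLogDir, long fallback)
    {
        if (!Files.isDirectory(commitLogDir))
            return fallback; // directory absent: nothing to probe, no WARN
        try
        {
            Path probe = Files.createTempFile(commitLogDir, "probe", ".tmp");
            try
            {
                return Files.getFileStore(probe).getBlockSize();
            }
            finally
            {
                Files.deleteIfExists(probe);
            }
        }
        catch (IOException e)
        {
            return fallback;
        }
    }
}
{code}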



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19668) SIGSEGV originating in Paxos V2 Scheduled Task

2024-07-02 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17862494#comment-17862494
 ] 

Jeremiah Jordan commented on CASSANDRA-19668:
-

Thanks for finding and fixing this. Should we add an in-tree test to ensure 
this can't regress?

> SIGSEGV originating in Paxos V2 Scheduled Task
> --
>
> Key: CASSANDRA-19668
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19668
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Lightweight Transactions
>Reporter: Jon Haddad
>Assignee: Jon Haddad
>Priority: Urgent
> Fix For: 4.1.6, 5.0-beta2, 5.0, 5.1
>
> Attachments: ci_summar-5.0.html, ci_summary-4.1.html, 
> ci_summary-trunk.html
>
>
> I haven't gotten to the root cause of this yet. Several 4.1 nodes have 
> crashed in production.  I'm not sure if this is related to Paxos v2 or 
> not, but it is enabled.  offheap_objects is also enabled. 
> I'm not sure if this affects 5.0, yet.
> Most of the crashes don't have a stacktrace - they only reference this
> {noformat}
> Stack: [0x7fabf4c34000,0x7fabf4d34000],  sp=0x7fabf4d31f00,  free 
> space=1015k
> Native frames: (J=compiled Java code, A=aot compiled Java code, 
> j=interpreted, Vv=VM code, C=native code)
> v  ~StubRoutines::jint_disjoint_arraycopy
> {noformat}
> They all are in the {{ScheduledTasks}} thread.
> However, one node does have this in the crash log:
> {noformat}
> ---  T H R E A D  ---
> Current thread (0x78b375eac800):  JavaThread "ScheduledTasks:1" daemon 
> [_thread_in_Java, id=151791, stack(0x78b34b78,0x78b34b88)]
> Stack: [0x78b34b78,0x78b34b88],  sp=0x78b34b87c350,  free 
> space=1008k
> Native frames: (J=compiled Java code, A=aot compiled Java code, 
> j=interpreted, Vv=VM code, C=native code)
> J 29467 c2 
> org.apache.cassandra.db.rows.AbstractCell.clone(Lorg/apache/cassandra/utils/memory/ByteBufferCloner;)Lorg/apache/cassandra/db/rows/Cell;
>  (50 bytes) @ 0x78b3dd40a42f [0x78b3dd409de0+0x064f]
> J 17669 c2 
> org.apache.cassandra.db.rows.Cell.clone(Lorg/apache/cassandra/utils/memory/Cloner;)Lorg/apache/cassandra/db/rows/ColumnData;
>  (6 bytes) @ 0x78b3dc54edc0 [0x78b3dc54ed40+0x0080]
> J 17816 c2 
> org.apache.cassandra.db.rows.BTreeRow$$Lambda$845.apply(Ljava/lang/Object;)Ljava/lang/Object;
>  (12 bytes) @ 0x78b3dbed01a4 [0x78b3dbed0120+0x0084]
> J 17828 c2 
> org.apache.cassandra.utils.btree.BTree.transform([Ljava/lang/Object;Ljava/util/function/Function;)[Ljava/lang/Object;
>  (194 bytes) @ 0x78b3dc5f35f0 [0x78b3dc5f34a0+0x0150]
> J 35096 c2 
> org.apache.cassandra.db.rows.BTreeRow.clone(Lorg/apache/cassandra/utils/memory/Cloner;)Lorg/apache/cassandra/db/rows/Row;
>  (37 bytes) @ 0x78b3dda9111c [0x78b3dda90fe0+0x013c]
> J 30500 c2 
> org.apache.cassandra.utils.memory.EnsureOnHeap$CloneToHeap.applyToRow(Lorg/apache/cassandra/db/rows/Row;)Lorg/apache/cassandra/db/rows/Row;
>  (16 bytes) @ 0x78b3dd59b91c [0x78b3dd59b8c0+0x005c]
> J 26498 c2 org.apache.cassandra.db.transform.BaseRows.hasNext()Z (215 bytes) 
> @ 0x78b3dcf1c454 [0x78b3dcf1c180+0x02d4]
> J 30775 c2 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext()Ljava/lang/Object;
>  (49 bytes) @ 0x78b3dc789020 [0x78b3dc788fc0+0x0060]
> J 9082 c2 org.apache.cassandra.utils.AbstractIterator.hasNext()Z (80 bytes) @ 
> 0x78b3dbb3c544 [0x78b3dbb3c440+0x0104]
> J 35593 c2 
> org.apache.cassandra.service.paxos.uncommitted.PaxosRows$PaxosMemtableToKeyStateIterator.computeNext()Lorg/apache/cassandra/service/paxos/uncommitted/PaxosKeyState;
>  (126 bytes) @ 0x78b3dc7ceeec [0x78b3dc7cee20+0x00cc]
> J 35591 c2 
> org.apache.cassandra.service.paxos.uncommitted.PaxosRows$PaxosMemtableToKeyStateIterator.computeNext()Ljava/lang/Object;
>  (5 bytes) @ 0x78b3dc7d09e4 [0x78b3dc7d09a0+0x0044]
> J 9082 c2 org.apache.cassandra.utils.AbstractIterator.hasNext()Z (80 bytes) @ 
> 0x78b3dbb3c544 [0x78b3dbb3c440+0x0104]
> J 34146 c2 
> com.google.common.collect.Iterators.addAll(Ljava/util/Collection;Ljava/util/Iterator;)Z
>  (41 bytes) @ 0x78b3dd9197e8 [0x78b3dd919680+0x0168]
> J 38256 c1 
> org.apache.cassandra.service.paxos.uncommitted.PaxosRows.toIterator(Lorg/apache/cassandra/db/partitions/UnfilteredPartitionIterator;Lorg/apache/cassandra/schema/TableId;Z)Lorg/apache/cassandra/utils/CloseableIterator;
>  (49 bytes) @ 0x78b3d6b677ac [0x78b3d6b672e0+0x04cc]
> J 34823 c1 
> org.apache.cassandra.service.paxos.uncommitted.PaxosUncommittedIndex.repair

[jira] [Assigned] (CASSANDRA-19668) SIGSEGV originating in Paxos Scheduled Task

2024-06-27 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan reassigned CASSANDRA-19668:
---

Assignee: Jon Haddad

> SIGSEGV originating in Paxos Scheduled Task
> 
>
> Key: CASSANDRA-19668
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19668
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Lightweight Transactions
>Reporter: Jon Haddad
>Assignee: Jon Haddad
>Priority: Urgent
>
> I haven't gotten to the root cause of this yet. Several 4.1 nodes have 
> crashed in production.  I'm not sure if this is related to Paxos v2 or 
> not, but it is enabled.  offheap_objects is also enabled. 
> I'm not sure if this affects 5.0, yet.
> Most of the crashes don't have a stacktrace - they only reference this
> {noformat}
> Stack: [0x7fabf4c34000,0x7fabf4d34000],  sp=0x7fabf4d31f00,  free 
> space=1015k
> Native frames: (J=compiled Java code, A=aot compiled Java code, 
> j=interpreted, Vv=VM code, C=native code)
> v  ~StubRoutines::jint_disjoint_arraycopy
> {noformat}
> They all are in the {{ScheduledTasks}} thread.
> However, one node does have this in the crash log:
> {noformat}
> ---  T H R E A D  ---
> Current thread (0x78b375eac800):  JavaThread "ScheduledTasks:1" daemon 
> [_thread_in_Java, id=151791, stack(0x78b34b78,0x78b34b88)]
> Stack: [0x78b34b78,0x78b34b88],  sp=0x78b34b87c350,  free 
> space=1008k
> Native frames: (J=compiled Java code, A=aot compiled Java code, 
> j=interpreted, Vv=VM code, C=native code)
> J 29467 c2 
> org.apache.cassandra.db.rows.AbstractCell.clone(Lorg/apache/cassandra/utils/memory/ByteBufferCloner;)Lorg/apache/cassandra/db/rows/Cell;
>  (50 bytes) @ 0x78b3dd40a42f [0x78b3dd409de0+0x064f]
> J 17669 c2 
> org.apache.cassandra.db.rows.Cell.clone(Lorg/apache/cassandra/utils/memory/Cloner;)Lorg/apache/cassandra/db/rows/ColumnData;
>  (6 bytes) @ 0x78b3dc54edc0 [0x78b3dc54ed40+0x0080]
> J 17816 c2 
> org.apache.cassandra.db.rows.BTreeRow$$Lambda$845.apply(Ljava/lang/Object;)Ljava/lang/Object;
>  (12 bytes) @ 0x78b3dbed01a4 [0x78b3dbed0120+0x0084]
> J 17828 c2 
> org.apache.cassandra.utils.btree.BTree.transform([Ljava/lang/Object;Ljava/util/function/Function;)[Ljava/lang/Object;
>  (194 bytes) @ 0x78b3dc5f35f0 [0x78b3dc5f34a0+0x0150]
> J 35096 c2 
> org.apache.cassandra.db.rows.BTreeRow.clone(Lorg/apache/cassandra/utils/memory/Cloner;)Lorg/apache/cassandra/db/rows/Row;
>  (37 bytes) @ 0x78b3dda9111c [0x78b3dda90fe0+0x013c]
> J 30500 c2 
> org.apache.cassandra.utils.memory.EnsureOnHeap$CloneToHeap.applyToRow(Lorg/apache/cassandra/db/rows/Row;)Lorg/apache/cassandra/db/rows/Row;
>  (16 bytes) @ 0x78b3dd59b91c [0x78b3dd59b8c0+0x005c]
> J 26498 c2 org.apache.cassandra.db.transform.BaseRows.hasNext()Z (215 bytes) 
> @ 0x78b3dcf1c454 [0x78b3dcf1c180+0x02d4]
> J 30775 c2 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext()Ljava/lang/Object;
>  (49 bytes) @ 0x78b3dc789020 [0x78b3dc788fc0+0x0060]
> J 9082 c2 org.apache.cassandra.utils.AbstractIterator.hasNext()Z (80 bytes) @ 
> 0x78b3dbb3c544 [0x78b3dbb3c440+0x0104]
> J 35593 c2 
> org.apache.cassandra.service.paxos.uncommitted.PaxosRows$PaxosMemtableToKeyStateIterator.computeNext()Lorg/apache/cassandra/service/paxos/uncommitted/PaxosKeyState;
>  (126 bytes) @ 0x78b3dc7ceeec [0x78b3dc7cee20+0x00cc]
> J 35591 c2 
> org.apache.cassandra.service.paxos.uncommitted.PaxosRows$PaxosMemtableToKeyStateIterator.computeNext()Ljava/lang/Object;
>  (5 bytes) @ 0x78b3dc7d09e4 [0x78b3dc7d09a0+0x0044]
> J 9082 c2 org.apache.cassandra.utils.AbstractIterator.hasNext()Z (80 bytes) @ 
> 0x78b3dbb3c544 [0x78b3dbb3c440+0x0104]
> J 34146 c2 
> com.google.common.collect.Iterators.addAll(Ljava/util/Collection;Ljava/util/Iterator;)Z
>  (41 bytes) @ 0x78b3dd9197e8 [0x78b3dd919680+0x0168]
> J 38256 c1 
> org.apache.cassandra.service.paxos.uncommitted.PaxosRows.toIterator(Lorg/apache/cassandra/db/partitions/UnfilteredPartitionIterator;Lorg/apache/cassandra/schema/TableId;Z)Lorg/apache/cassandra/utils/CloseableIterator;
>  (49 bytes) @ 0x78b3d6b677ac [0x78b3d6b672e0+0x04cc]
> J 34823 c1 
> org.apache.cassandra.service.paxos.uncommitted.PaxosUncommittedIndex.repairIterator(Lorg/apache/cassandra/schema/TableId;Ljava/util/Collection;)Lorg/apache/cassandra/utils/CloseableIterator;
>  (212 bytes) @ 0x78b3d5675e0c [0x78b3d5673be0+0x222c]
> J 38259 c1 
> org.apache.cassandra.service.paxos.uncommitted.PaxosUncommitte

[jira] [Updated] (CASSANDRA-19668) SIGSEGV originating in Paxos Scheduled Task

2024-06-27 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-19668:

Complexity: Normal
Status: Open  (was: Triage Needed)

> SIGSEGV originating in Paxos Scheduled Task
> 
>
> Key: CASSANDRA-19668
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19668
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Lightweight Transactions
>Reporter: Jon Haddad
>Priority: Urgent
>
> I haven't gotten to the root cause of this yet. Several 4.1 nodes have 
> crashed in production.  I'm not sure if this is related to Paxos v2 or 
> not, but it is enabled.  offheap_objects is also enabled. 
> I'm not sure if this affects 5.0, yet.
> Most of the crashes don't have a stacktrace - they only reference this
> {noformat}
> Stack: [0x7fabf4c34000,0x7fabf4d34000],  sp=0x7fabf4d31f00,  free 
> space=1015k
> Native frames: (J=compiled Java code, A=aot compiled Java code, 
> j=interpreted, Vv=VM code, C=native code)
> v  ~StubRoutines::jint_disjoint_arraycopy
> {noformat}
> They all are in the {{ScheduledTasks}} thread.
> However, one node does have this in the crash log:
> {noformat}
> ---  T H R E A D  ---
> Current thread (0x78b375eac800):  JavaThread "ScheduledTasks:1" daemon 
> [_thread_in_Java, id=151791, stack(0x78b34b78,0x78b34b88)]
> Stack: [0x78b34b78,0x78b34b88],  sp=0x78b34b87c350,  free 
> space=1008k
> Native frames: (J=compiled Java code, A=aot compiled Java code, 
> j=interpreted, Vv=VM code, C=native code)
> J 29467 c2 
> org.apache.cassandra.db.rows.AbstractCell.clone(Lorg/apache/cassandra/utils/memory/ByteBufferCloner;)Lorg/apache/cassandra/db/rows/Cell;
>  (50 bytes) @ 0x78b3dd40a42f [0x78b3dd409de0+0x064f]
> J 17669 c2 
> org.apache.cassandra.db.rows.Cell.clone(Lorg/apache/cassandra/utils/memory/Cloner;)Lorg/apache/cassandra/db/rows/ColumnData;
>  (6 bytes) @ 0x78b3dc54edc0 [0x78b3dc54ed40+0x0080]
> J 17816 c2 
> org.apache.cassandra.db.rows.BTreeRow$$Lambda$845.apply(Ljava/lang/Object;)Ljava/lang/Object;
>  (12 bytes) @ 0x78b3dbed01a4 [0x78b3dbed0120+0x0084]
> J 17828 c2 
> org.apache.cassandra.utils.btree.BTree.transform([Ljava/lang/Object;Ljava/util/function/Function;)[Ljava/lang/Object;
>  (194 bytes) @ 0x78b3dc5f35f0 [0x78b3dc5f34a0+0x0150]
> J 35096 c2 
> org.apache.cassandra.db.rows.BTreeRow.clone(Lorg/apache/cassandra/utils/memory/Cloner;)Lorg/apache/cassandra/db/rows/Row;
>  (37 bytes) @ 0x78b3dda9111c [0x78b3dda90fe0+0x013c]
> J 30500 c2 
> org.apache.cassandra.utils.memory.EnsureOnHeap$CloneToHeap.applyToRow(Lorg/apache/cassandra/db/rows/Row;)Lorg/apache/cassandra/db/rows/Row;
>  (16 bytes) @ 0x78b3dd59b91c [0x78b3dd59b8c0+0x005c]
> J 26498 c2 org.apache.cassandra.db.transform.BaseRows.hasNext()Z (215 bytes) 
> @ 0x78b3dcf1c454 [0x78b3dcf1c180+0x02d4]
> J 30775 c2 
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext()Ljava/lang/Object;
>  (49 bytes) @ 0x78b3dc789020 [0x78b3dc788fc0+0x0060]
> J 9082 c2 org.apache.cassandra.utils.AbstractIterator.hasNext()Z (80 bytes) @ 
> 0x78b3dbb3c544 [0x78b3dbb3c440+0x0104]
> J 35593 c2 
> org.apache.cassandra.service.paxos.uncommitted.PaxosRows$PaxosMemtableToKeyStateIterator.computeNext()Lorg/apache/cassandra/service/paxos/uncommitted/PaxosKeyState;
>  (126 bytes) @ 0x78b3dc7ceeec [0x78b3dc7cee20+0x00cc]
> J 35591 c2 
> org.apache.cassandra.service.paxos.uncommitted.PaxosRows$PaxosMemtableToKeyStateIterator.computeNext()Ljava/lang/Object;
>  (5 bytes) @ 0x78b3dc7d09e4 [0x78b3dc7d09a0+0x0044]
> J 9082 c2 org.apache.cassandra.utils.AbstractIterator.hasNext()Z (80 bytes) @ 
> 0x78b3dbb3c544 [0x78b3dbb3c440+0x0104]
> J 34146 c2 
> com.google.common.collect.Iterators.addAll(Ljava/util/Collection;Ljava/util/Iterator;)Z
>  (41 bytes) @ 0x78b3dd9197e8 [0x78b3dd919680+0x0168]
> J 38256 c1 
> org.apache.cassandra.service.paxos.uncommitted.PaxosRows.toIterator(Lorg/apache/cassandra/db/partitions/UnfilteredPartitionIterator;Lorg/apache/cassandra/schema/TableId;Z)Lorg/apache/cassandra/utils/CloseableIterator;
>  (49 bytes) @ 0x78b3d6b677ac [0x78b3d6b672e0+0x04cc]
> J 34823 c1 
> org.apache.cassandra.service.paxos.uncommitted.PaxosUncommittedIndex.repairIterator(Lorg/apache/cassandra/schema/TableId;Ljava/util/Collection;)Lorg/apache/cassandra/utils/CloseableIterator;
>  (212 bytes) @ 0x78b3d5675e0c [0x78b3d5673be0+0x222c]
> J 38259 c1 
> org.apache.cassandra.service.paxos.uncommitted.PaxosUncommitte

[jira] [Commented] (CASSANDRA-19675) Avoid streams in the common case for UpdateTransaction creation

2024-06-14 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17855085#comment-17855085
 ] 

Jeremiah Jordan commented on CASSANDRA-19675:
-

LGTM. +1

> Avoid streams in the common case for UpdateTransaction creation
> ---
>
> Key: CASSANDRA-19675
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19675
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/SAI
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
> Attachments: ci_summary.html, new_update_txn_streams.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Some recent Accord profiling highlighted some easily addressable inefficiencies 
> in the way we create new {{UpdateTransaction}} objects in 
> {{SecondaryIndexManager}} that have existed since the introduction of index 
> groups for SAI. We should be able to clean this up by avoiding stream 
> creation or even iteration over the groups when there is a single index 
> group, which is going to be the most common case with SAI anyway. If we do 
> have to iterate, there should also be no reason to copy the collection of 
> index groups via {{listIndexGroups()}}, although that copying can remain in 
> the method itself for external callers.
>  !new_update_txn_streams.png! 
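Generically, the fix amounts to a singleton fast path (hypothetical names, not 
the actual {{SecondaryIndexManager}} code):

{code:java}
// Sketch: avoid stream machinery (and collection copies) for the common
// single-index-group case, falling back to a stream only when needed.
import java.util.Collection;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

final class FastPath
{
    static <G, T> List<T> transactionsFor(Collection<G> groups, Function<G, T> create)
    {
        if (groups.size() == 1) // common case with SAI: exactly one group
            return List.of(create.apply(groups.iterator().next()));
        return groups.stream().map(create).collect(Collectors.toList());
    }
}
{code}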



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16364) Joining nodes simultaneously with auto_bootstrap:false can cause token collision

2024-05-17 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847464#comment-17847464
 ] 

Jeremiah Jordan commented on CASSANDRA-16364:
-

The deterministic nature is a feature that I know many people rely on for 
token allocation. It lets you easily set up duplicate clusters to restore 
backups to, among other things.

Any change to that should be behind a flag, and the default should only change 
in a new major.

> Joining nodes simultaneously with auto_bootstrap:false can cause token 
> collision
> 
>
> Key: CASSANDRA-16364
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16364
> Project: Cassandra
>  Issue Type: Bug
>  Components: Cluster/Membership
>Reporter: Paulo Motta
>Priority: Normal
> Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x
>
>
> While raising a 6-node ccm cluster to test 4.0-beta4, 2 nodes chose the same 
> tokens using the default {{allocate_tokens_for_local_rf}}. However, they both 
> succeeded in bootstrapping with colliding tokens.
> We were familiar with this issue from CASSANDRA-13701 and CASSANDRA-16079, 
> and the workaround to fix this is to avoid parallel bootstrap when using 
> {{allocate_tokens_for_local_rf}}.
> However, since this is the default behavior, we should try to detect and 
> prevent this situation when possible, since it can break users relying on 
> parallel bootstrap behavior.
> I think we could prevent this as follows:
> 1. announce intent to bootstrap via gossip (ie. add node on gossip without 
> token information)
> 2. wait for gossip to settle for a longer period (ie. ring delay)
> 3. allocate tokens (if multiple bootstrap attempts are detected, tie break 
> via node-id)
> 4. broadcast tokens and move on with bootstrap



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19617) Paxos may re-distribute stale commits that predate a collectable tombstone

2024-05-16 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847118#comment-17847118
 ] 

Jeremiah Jordan commented on CASSANDRA-19617:
-

Where are we at with this?  It is the final ticket blocking releases at the 
moment.

> Paxos may re-distribute stale commits that predate a collectable tombstone
> --
>
> Key: CASSANDRA-19617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19617
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Benedict Elliott Smith
>Assignee: Benedict Elliott Smith
>Priority: Normal
> Fix For: 4.1.x, 5.0-rc, 5.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Note: this bug only affects {{paxos_state_purging: {gc_grace, repaired}}}, 
> i.e. those introduced alongside Paxos v2.
> There are two problems:
> 1) Purging is applied only on compaction, not on load, which can lead to very 
> old commits being resurfaced in certain circumstances
> 2) PaxosPrepare does not filter commits based on paxos repair low bound
> This permits surprising situations to arise, where some replicas purge a 
> stale commit _and all newer commits_, but due to compaction peculiarities 
> some other replica may purge only the newer commits, leaving a stale commit 
> in some compaction "purgatory"\[1] to be returned to reads indefinitely. 
> So long as there are no newer commits, the paxos coordinator will see this 
> commit is not universally known and redistribute it - no matter how old it 
> is. This can permit an insert to be reapplied after GC grace has elapsed and 
> the tombstone has been collected.
> For proposals this is not a problem, as we correctly filter proposals based 
> on the last paxos repair time. This also does not affect clusters with the 
> legacy (and default) paxos state purging using TTL. Problem (1) also applies 
> to the new {{gc_grace}} compatibility mode for purging.
> \[1] Compaction purgatory can arise for instance because paxos purging allows 
> whole sstables to be erased quite effectively, and if this is able to 
> ordinarily prevent sstables being promoted to L1, then if for some abnormal 
> reason sstables reach L1 (e.g. repairs being disabled for some time), those 
> that collect may remain uncompacted for an extended period without purging 
> being applied.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-19617) Paxos may re-distribute stale commits that predate a collectable tombstone

2024-05-16 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-19617:

Test and Documentation Plan: Patch includes tests
 Status: Patch Available  (was: Open)

> Paxos may re-distribute stale commits that predate a collectable tombstone
> --
>
> Key: CASSANDRA-19617
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19617
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Benedict Elliott Smith
>Assignee: Benedict Elliott Smith
>Priority: Normal
> Fix For: 4.1.x, 5.0-rc, 5.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Note: this bug only affects {{paxos_state_purging: {gc_grace, repaired}}}, 
> i.e. those introduced alongside Paxos v2.
> There are two problems:
> 1) Purging is applied only on compaction, not on load, which can lead to very 
> old commits being resurfaced in certain circumstances
> 2) PaxosPrepare does not filter commits based on paxos repair low bound
> This permits surprising situations to arise, where some replicas purge a 
> stale commit _and all newer commits_, but due to compaction peculiarities 
> some other replica may purge only the newer commits, leaving a stale commit 
> in some compaction "purgatory"\[1] to be returned to reads indefinitely. 
> So long as there are no newer commits, the paxos coordinator will see this 
> commit is not universally known and redistribute it - no matter how old it 
> is. This can permit an insert to be reapplied after GC grace has elapsed and 
> the tombstone has been collected.
> For proposals this is not a problem, as we correctly filter proposals based 
> on the last paxos repair time. This also does not affect clusters with the 
> legacy (and default) paxos state purging using TTL. Problem (1) also applies 
> to the new {{gc_grace}} compatibility mode for purging.
> \[1] Compaction purgatory can arise for instance because paxos purging allows 
> whole sstables to be erased quite effectively, and if this is able to 
> ordinarily prevent sstables being promoted to L1, then if for some abnormal 
> reason sstables reach L1 (e.g. repairs being disabled for some time), those 
> that collect may remain uncompacted for an extended period without purging 
> being applied.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16717) The fetching strategy can be optimized for CL.ONE and CL.LOCAL_ONE

2024-05-13 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17846006#comment-17846006
 ] 

Jeremiah Jordan commented on CASSANDRA-16717:
-

It’s not just about resolving multiple nodes, it is also about knowing if the 
row exists on the current node such that you are returning null (row exists, 
column does not) or returning nothing (row does not exist).

There should be ways to better optimize the ONE case that only queries some 
columns, but it’s definitely not as simple as just setting the column filter.

> The fetching strategy can be optimized for CL.ONE and CL.LOCAL_ONE
> --
>
> Key: CASSANDRA-16717
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16717
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Lerer
>Assignee: Benjamin Lerer
>Priority: Normal
>
> The current {{ColumnFilter}} fetching strategy has been implemented in order 
> to guarantee the CQL semantics around empty vs non-existing rows. It also has 
> some importance regarding read-repair (CASSANDRA-16710). Nevertheless, reads 
> at {{CL.ONE}} and at {{CL.LOCAL_ONE}} do not use read-repair and do not 
> require more columns than the queried ones, as those cannot be deleted by 
> column deletions present on other nodes. As a consequence, having 
> {{ColumnFilter}} fetch only the queried columns should improve query 
> speed by reducing the number of SSTable reads for queries fetching specific 
> rows and reducing the amount of data serialized and transferred between nodes 
> (if the data is not local). 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19493) Optionally fail writes when SAI refuses to index a term value exceeding configured term max size

2024-04-09 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835563#comment-17835563
 ] 

Jeremiah Jordan commented on CASSANDRA-19493:
-

I would suggest we have warn and fail thresholds for the sizes: warn but accept 
the write at some threshold, and fail the write completely at the failure 
threshold.
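The suggested pattern, as a self-contained sketch (hypothetical names; the real 
guardrail plumbing and config keys would differ):

{code:java}
// Sketch: warn but accept below the fail threshold, reject outright above it.
final class TermSizeGuardrail
{
    final int warnBytes;
    final int failBytes;

    TermSizeGuardrail(int warnBytes, int failBytes)
    {
        this.warnBytes = warnBytes;
        this.failBytes = failBytes;
    }

    /** @return a client warning to emit, or null; throws if the write must fail */
    String check(int termSize)
    {
        if (termSize > failBytes)
            throw new IllegalArgumentException(
                "SAI term of " + termSize + " bytes exceeds fail threshold " + failBytes);
        if (termSize > warnBytes)
            return "SAI term of " + termSize + " bytes exceeds warn threshold " + warnBytes;
        return null;
    }
}
{code}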

> Optionally fail writes when SAI refuses to index a term value exceeding 
> configured term max size
> 
>
> Key: CASSANDRA-19493
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19493
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/SAI
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0-rc, 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> SAI currently emits a client warning when we try to index a text value larger 
> than {{{}cassandra.sai.max_string_term_size{}}}. It might be nice to have a 
> hard limit as well, above which we can reject the mutation entirely.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-19007) Queries with multi-column replica-side filtering can miss rows

2024-04-05 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-19007:

Severity: Normal  (was: Critical)

> Queries with multi-column replica-side filtering can miss rows
> --
>
> Key: CASSANDRA-19007
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19007
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Andres de la Peña
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{SELECT}} queries with multi-column replica-side filtering can miss rows if 
> the filtered columns are spread across out-of-sync replicas. This dtest 
> reproduces the issue:
> {code:java}
> @Test
> public void testMultiColumnReplicaSideFiltering() throws IOException
> {
> try (Cluster cluster = init(Cluster.build().withNodes(2).start()))
> {
> cluster.schemaChange(withKeyspace("CREATE TABLE %s.t (k int PRIMARY 
> KEY, a int, b int)"));
> // insert a split row
> cluster.get(1).executeInternal(withKeyspace("INSERT INTO %s.t(k, a) 
> VALUES (0, 1)"));
> cluster.get(2).executeInternal(withKeyspace("INSERT INTO %s.t(k, b) 
> VALUES (0, 2)"));
> String select = withKeyspace("SELECT * FROM %s.t WHERE a = 1 AND b = 
> 2 ALLOW FILTERING");
> Object[][] initialRows = cluster.coordinator(1).execute(select, ALL);
> assertRows(initialRows, row(0, 1, 2)); // not found!!
> }
> }
> {code}
> This edge case affects queries using {{ALLOW FILTERING}} or any index 
> implementation.
> It affects all branches since multi-column replica-side filtering queries 
> were introduced, long before 3.0.
> The protection mechanism added by CASSANDRA-8272/8273 won't deal with this 
> case, since it only solves single-column conflicts where stale rows could 
> resurrect. This bug, however, doesn't resurrect data; it can only miss rows 
> while the replicas are out of sync.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17044) Refactor schema management to allow for schema source pluggability

2024-04-03 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833527#comment-17833527
 ] 

Jeremiah Jordan commented on CASSANDRA-17044:
-

Yes, it was part of CEP-18: 
https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=95652201#content/view/191335397

> Refactor schema management to allow for schema source pluggability
> --
>
> Key: CASSANDRA-17044
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17044
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Cluster/Schema
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
>Priority: Normal
> Fix For: 4.1-alpha1, 4.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The idea is to decompose `Schema` into separate entities responsible for 
> different things. In particular, extract what is related to schema storage and 
> synchronization into a separate class, so that it is possible to create an 
> extension point there and store schema in a different way than the 
> `system_schema` keyspace, for example in etcd. 
> This would also simplify the logic, reduce the number of special cases, make 
> everything more testable, and keep the logic of internal classes encapsulated.
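
As a purely illustrative sketch of such an extension point (the interface and 
class names below are hypothetical and do not match the actual refactoring):

{code:java}
// Hypothetical sketch: schema storage behind an interface so the default
// system_schema-backed implementation can be swapped out, e.g. for etcd.
import java.util.Map;

interface SchemaStorage
{
    /** Load all keyspace definitions, keyed by keyspace name. */
    Map<String, String> loadAll();

    /** Persist the definition of a single keyspace. */
    void save(String keyspace, String definition);
}

final class SystemKeyspaceSchemaStorage implements SchemaStorage
{
    @Override
    public Map<String, String> loadAll()
    {
        return Map.of(); // a real implementation would read the system_schema keyspace
    }

    @Override
    public void save(String keyspace, String definition)
    {
        // a real implementation would write to the system_schema keyspace
    }
}
{code}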



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19489) Guardrail to warn clients about possible transient incorrect responses for filtering queries against multiple mutable columns

2024-04-01 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17832864#comment-17832864
 ] 

Jeremiah Jordan commented on CASSANDRA-19489:
-

This is a long-standing issue in C*; not sure we need to block the 5.0 release 
for adding a warning? I think we can add a warning whenever.

> Guardrail to warn clients about possible transient incorrect responses for 
> filtering queries against multiple mutable columns
> -
>
> Key: CASSANDRA-19489
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19489
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Consistency/Coordination, CQL/Semantics, Messaging/Client
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0-rc, 5.1
>
>
> Given we may not have time to fully resolve CASSANDRA-19007 before we release 
> 5.0, it would still be helpful to have, at the very minimum, a client warning 
> for cases where a user filters on two or more mutable (static or regular) 
> columns at consistency levels that require coordinator reconciliation. We may 
> also want the option to fail these queries outright, although that need not 
> be the default.
> The only art involved in this is deciding what we want to say in the 
> warning/error message. It's probably reasonable to mention there that this 
> only happens when we have unrepaired data. It's also worth noting that SAI 
> queries are no longer vulnerable to this after the resolution of 
> CASSANDRA-19018.
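
For illustration only, the proposed check might have roughly this shape (the 
names are made up; this is not the actual guardrail implementation):

{code:java}
// Count mutable (regular or static) columns being filtered on, and produce a
// client warning once there are two or more and the consistency level
// requires coordinator reconciliation.
import java.util.List;

final class MultiColumnFilteringCheck
{
    record FilteredColumn(String name, boolean mutable) {}

    static String warningFor(List<FilteredColumn> filtered, boolean needsReconciliation)
    {
        long mutableCount = filtered.stream().filter(FilteredColumn::mutable).count();
        if (mutableCount >= 2 && needsReconciliation)
            return "Filtering on " + mutableCount + " mutable columns at this consistency level "
                 + "may transiently miss rows while replicas hold unrepaired data";
        return null; // no warning needed
    }
}
{code}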



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-19327) Test Failure: org.apache.cassandra.index.sai.cql.RandomIntersectionTest.randomIntersectionTest[Small partition restricted high high]-system_keyspace_directory_jdk17

2024-03-04 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-19327:

Resolution: Not A Problem
Status: Resolved  (was: Open)

> Test Failure: 
> org.apache.cassandra.index.sai.cql.RandomIntersectionTest.randomIntersectionTest[Small
>  partition restricted high high]-system_keyspace_directory_jdk17
> 
>
> Key: CASSANDRA-19327
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19327
> Project: Cassandra
>  Issue Type: Bug
>  Components: CI
>Reporter: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 5.0-rc, 5.0.x, 5.x
>
>
> Seen here: 
> [https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2629/workflows/75f57272-f299-40e1-8e4f-fdb75bca2f7c/jobs/53821/tests]
> The tests were run with _memtable_allocation_type: heap_buffers_ (By default 
> we run them now with offheap_objects, to be changed in CASSANDRA-19326)
> {code:java}
> junit.framework.AssertionFailedError: Got less rows than expected. Expected 16 but got 0
>     at org.apache.cassandra.cql3.CQLTester.assertRows(CQLTester.java:1880)
>     at org.apache.cassandra.index.sai.cql.RandomIntersectionTest.lambda$runRestrictedQueries$3(RandomIntersectionTest.java:118)
>     at org.apache.cassandra.cql3.CQLTester.beforeAndAfterFlush(CQLTester.java:2269)
>     at org.apache.cassandra.index.sai.cql.RandomIntersectionTest.runRestrictedQueries(RandomIntersectionTest.java:104)
>     at org.apache.cassandra.index.sai.cql.RandomIntersectionTest.randomIntersectionTest(RandomIntersectionTest.java:95)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>     at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> {code}
> CC [~maedhroz] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-19336) Repair causes out of memory

2024-02-26 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-19336:

Fix Version/s: 4.0.13
   (was: 4.0.x)

> Repair causes out of memory
> ---
>
> Key: CASSANDRA-19336
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19336
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Repair
>Reporter: Andres de la Peña
>Assignee: Andres de la Peña
>Priority: Normal
> Fix For: 4.0.13, 4.1.5, 5.0-beta2, 5.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> CASSANDRA-14096 introduced {{repair_session_space}} as a limit for the memory 
> usage for Merkle tree calculations during repairs. This limit is applied to 
> the set of Merkle trees built for a received validation request 
> ({{{}VALIDATION_REQ{}}}), divided by the replication factor so as not to 
> overwhelm the repair coordinator, who will have requested RF sets of Merkle 
> trees. That way the repair coordinator should only use 
> {{repair_session_space}} for the RF Merkle trees.
> However, a repair session without {{{}-pr-{}}}/{{{}-partitioner-range{}}} 
> will send RF*RF validation requests, because the repair coordinator node has 
> RF-1 replicas and is also the replica of RF-1 nodes. Since all the requests 
> are sent at the same time, at some point the repair coordinator can have up 
> to RF*{{{}repair_session_space{}}} worth of Merkle trees if none of the 
> validation responses is fully processed before the last response arrives.
> Even worse, if the cluster uses virtual nodes, many nodes can be replicas of 
> the repair coordinator, and some nodes can be replicas of multiple token 
> ranges. It would mean that the repair coordinator can send more than RF or 
> RF*RF simultaneous validation requests.
> For example, in an 11-node cluster with RF=3 and 256 tokens, we have seen a 
> repair session involving 44 groups of ranges to be repaired. This produces 
> 44*3=132 validation requests contacting all the nodes in the cluster. When 
> the responses for all these requests start to arrive to the coordinator, each 
> containing up to {{repair_session_space}}/3 of Merkle trees, they accumulate 
> quicker than they are consumed, greatly exceeding {{repair_session_space}} 
> and OOMing the node.
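
For a rough sense of scale (a back-of-the-envelope computation, not project 
code, assuming an illustrative {{repair_session_space}} of 256 MiB):

{code:java}
// 44 range groups x RF=3 = 132 validation requests, each response carrying up
// to repair_session_space / RF of Merkle trees.
public class RepairMemoryBound
{
    public static void main(String[] args)
    {
        int rangeGroups = 44;
        int rf = 3;
        double sessionSpaceMiB = 256.0; // illustrative repair_session_space
        double worstCaseMiB = rangeGroups * rf * (sessionSpaceMiB / rf);
        // 44 * 3 * (256 / 3) = 11264 MiB, i.e. 44x the configured limit
        System.out.printf("worst case: %.0f MiB (%.0fx repair_session_space)%n",
                          worstCaseMiB, worstCaseMiB / sessionSpaceMiB);
    }
}
{code}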



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-19336) Repair causes out of memory

2024-02-26 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820818#comment-17820818
 ] 

Jeremiah Jordan edited comment on CASSANDRA-19336 at 2/26/24 6:58 PM:
--

You are correct. This was not included in 4.0.12/4.1.5.  Looks like the "latest 
versions" in JIRA were not updated after the last release.


was (Author: jjordan):
You are correct. This was not included in 4.0.12/4.1.5.  Looks like the "latest 
versions" were not updated after the last release.

> Repair causes out of memory
> ---
>
> Key: CASSANDRA-19336
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19336
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Repair
>Reporter: Andres de la Peña
>Assignee: Andres de la Peña
>Priority: Normal
> Fix For: 4.0.x, 4.1.5, 5.0-beta2, 5.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> CASSANDRA-14096 introduced {{repair_session_space}} as a limit for the memory 
> usage for Merkle tree calculations during repairs. This limit is applied to 
> the set of Merkle trees built for a received validation request 
> ({{{}VALIDATION_REQ{}}}), divided by the replication factor so as not to 
> overwhelm the repair coordinator, who will have requested RF sets of Merkle 
> trees. That way the repair coordinator should only use 
> {{repair_session_space}} for the RF Merkle trees.
> However, a repair session without {{{}-pr-{}}}/{{{}-partitioner-range{}}} 
> will send RF*RF validation requests, because the repair coordinator node has 
> RF-1 replicas and is also the replica of RF-1 nodes. Since all the requests 
> are sent at the same time, at some point the repair coordinator can have up 
> to RF*{{{}repair_session_space{}}} worth of Merkle trees if none of the 
> validation responses is fully processed before the last response arrives.
> Even worse, if the cluster uses virtual nodes, many nodes can be replicas of 
> the repair coordinator, and some nodes can be replicas of multiple token 
> ranges. It would mean that the repair coordinator can send more than RF or 
> RF*RF simultaneous validation requests.
> For example, in an 11-node cluster with RF=3 and 256 tokens, we have seen a 
> repair session involving 44 groups of ranges to be repaired. This produces 
> 44*3=132 validation requests contacting all the nodes in the cluster. When 
> the responses for all these requests start to arrive to the coordinator, each 
> containing up to {{repair_session_space}}/3 of Merkle trees, they accumulate 
> quicker than they are consumed, greatly exceeding {{repair_session_space}} 
> and OOMing the node.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-19336) Repair causes out of memory

2024-02-26 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-19336:

Fix Version/s: 4.0.x
   4.1.5
   (was: 4.0.1)

> Repair causes out of memory
> ---
>
> Key: CASSANDRA-19336
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19336
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Repair
>Reporter: Andres de la Peña
>Assignee: Andres de la Peña
>Priority: Normal
> Fix For: 4.0.x, 4.1.5, 5.0-beta2, 5.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> CASSANDRA-14096 introduced {{repair_session_space}} as a limit for the memory 
> usage for Merkle tree calculations during repairs. This limit is applied to 
> the set of Merkle trees built for a received validation request 
> ({{{}VALIDATION_REQ{}}}), divided by the replication factor so as not to 
> overwhelm the repair coordinator, who will have requested RF sets of Merkle 
> trees. That way the repair coordinator should only use 
> {{repair_session_space}} for the RF Merkle trees.
> However, a repair session without {{{}-pr-{}}}/{{{}-partitioner-range{}}} 
> will send RF*RF validation requests, because the repair coordinator node has 
> RF-1 replicas and is also the replica of RF-1 nodes. Since all the requests 
> are sent at the same time, at some point the repair coordinator can have up 
> to RF*{{{}repair_session_space{}}} worth of Merkle trees if none of the 
> validation responses is fully processed before the last response arrives.
> Even worse, if the cluster uses virtual nodes, many nodes can be replicas of 
> the repair coordinator, and some nodes can be replicas of multiple token 
> ranges. It would mean that the repair coordinator can send more than RF or 
> RF*RF simultaneous validation requests.
> For example, in an 11-node cluster with RF=3 and 256 tokens, we have seen a 
> repair session involving 44 groups of ranges to be repaired. This produces 
> 44*3=132 validation requests contacting all the nodes in the cluster. When 
> the responses for all these requests start to arrive to the coordinator, each 
> containing up to {{repair_session_space}}/3 of Merkle trees, they accumulate 
> quicker than they are consumed, greatly exceeding {{repair_session_space}} 
> and OOMing the node.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19336) Repair causes out of memory

2024-02-26 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820818#comment-17820818
 ] 

Jeremiah Jordan commented on CASSANDRA-19336:
-

You are correct. This was not included in 4.0.12/4.1.5.  Looks like the "latest 
versions" were not updated after the last release.

> Repair causes out of memory
> ---
>
> Key: CASSANDRA-19336
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19336
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Repair
>Reporter: Andres de la Peña
>Assignee: Andres de la Peña
>Priority: Normal
> Fix For: 4.0.x, 4.1.5, 5.0-beta2, 5.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> CASSANDRA-14096 introduced {{repair_session_space}} as a limit for the memory 
> usage for Merkle tree calculations during repairs. This limit is applied to 
> the set of Merkle trees built for a received validation request 
> ({{{}VALIDATION_REQ{}}}), divided by the replication factor so as not to 
> overwhelm the repair coordinator, who will have requested RF sets of Merkle 
> trees. That way the repair coordinator should only use 
> {{repair_session_space}} for the RF Merkle trees.
> However, a repair session without {{{}-pr-{}}}/{{{}-partitioner-range{}}} 
> will send RF*RF validation requests, because the repair coordinator node has 
> RF-1 replicas and is also the replica of RF-1 nodes. Since all the requests 
> are sent at the same time, at some point the repair coordinator can have up 
> to RF*{{{}repair_session_space{}}} worth of Merkle trees if none of the 
> validation responses is fully processed before the last response arrives.
> Even worse, if the cluster uses virtual nodes, many nodes can be replicas of 
> the repair coordinator, and some nodes can be replicas of multiple token 
> ranges. It would mean that the repair coordinator can send more than RF or 
> RF*RF simultaneous validation requests.
> For example, in an 11-node cluster with RF=3 and 256 tokens, we have seen a 
> repair session involving 44 groups of ranges to be repaired. This produces 
> 44*3=132 validation requests contacting all the nodes in the cluster. When 
> the responses for all these requests start to arrive to the coordinator, each 
> containing up to {{repair_session_space}}/3 of Merkle trees, they accumulate 
> quicker than they are consumed, greatly exceeding {{repair_session_space}} 
> and OOMing the node.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-19336) Repair causes out of memory

2024-02-26 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-19336:

Fix Version/s: 4.0.1
   (was: 4.0.12)
   (was: 4.1.4)

> Repair causes out of memory
> ---
>
> Key: CASSANDRA-19336
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19336
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Repair
>Reporter: Andres de la Peña
>Assignee: Andres de la Peña
>Priority: Normal
> Fix For: 4.0.1, 5.0-beta2, 5.1
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> CASSANDRA-14096 introduced {{repair_session_space}} as a limit for the memory 
> usage for Merkle tree calculations during repairs. This limit is applied to 
> the set of Merkle trees built for a received validation request 
> ({{{}VALIDATION_REQ{}}}), divided by the replication factor so as not to 
> overwhelm the repair coordinator, who will have requested RF sets of Merkle 
> trees. That way the repair coordinator should only use 
> {{repair_session_space}} for the RF Merkle trees.
> However, a repair session without {{{}-pr-{}}}/{{{}-partitioner-range{}}} 
> will send RF*RF validation requests, because the repair coordinator node has 
> RF-1 replicas and is also the replica of RF-1 nodes. Since all the requests 
> are sent at the same time, at some point the repair coordinator can have up 
> to RF*{{{}repair_session_space{}}} worth of Merkle trees if none of the 
> validation responses is fully processed before the last response arrives.
> Even worse, if the cluster uses virtual nodes, many nodes can be replicas of 
> the repair coordinator, and some nodes can be replicas of multiple token 
> ranges. It would mean that the repair coordinator can send more than RF or 
> RF*RF simultaneous validation requests.
> For example, in an 11-node cluster with RF=3 and 256 tokens, we have seen a 
> repair session involving 44 groups of ranges to be repaired. This produces 
> 44*3=132 validation requests contacting all the nodes in the cluster. When 
> the responses for all these requests start to arrive to the coordinator, each 
> containing up to {{repair_session_space}}/3 of Merkle trees, they accumulate 
> quicker than they are consumed, greatly exceeding {{repair_session_space}} 
> and OOMing the node.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-19046) Paxos V2 does not update individual fields of readMetrics

2023-11-27 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-19046:

Fix Version/s: 5.0-rc

> Paxos V2 does not update individual fields of readMetrics
> -
>
> Key: CASSANDRA-19046
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19046
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Lightweight Transactions, Observability/Metrics
>Reporter: Branimir Lambov
>Priority: Normal
> Fix For: 5.0-rc
>
>
> As a result, {{ClientMetricsTest.testPaxosStatement}} is failing with 
> {{paxos_variant: v2}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19007) Queries with multi-column replica-side filtering can miss rows

2023-11-09 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17784461#comment-17784461
 ] 

Jeremiah Jordan commented on CASSANDRA-19007:
-

+1 for not actually trying to implement 7168 here. There are a bunch of edge 
cases there that I don’t think we want to try and solve to fix this issue; that 
is the reason we still haven’t implemented 7168.

> Queries with multi-column replica-side filtering can miss rows
> --
>
> Key: CASSANDRA-19007
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19007
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Andres de la Peña
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>
> {{SELECT}} queries with multi-column replica-side filtering can miss rows if 
> the filtered columns are spread across out-of-sync replicas. This dtest 
> reproduces the issue:
> {code:java}
> @Test
> public void testMultiColumnReplicaSideFiltering() throws IOException
> {
>     try (Cluster cluster = init(Cluster.build().withNodes(2).start()))
>     {
>         cluster.schemaChange(withKeyspace("CREATE TABLE %s.t (k int PRIMARY KEY, a int, b int)"));
>         // insert a split row
>         cluster.get(1).executeInternal(withKeyspace("INSERT INTO %s.t(k, a) VALUES (0, 1)"));
>         cluster.get(2).executeInternal(withKeyspace("INSERT INTO %s.t(k, b) VALUES (0, 2)"));
>         String select = withKeyspace("SELECT * FROM %s.t WHERE a = 1 AND b = 2 ALLOW FILTERING");
>         Object[][] initialRows = cluster.coordinator(1).execute(select, ALL);
>         assertRows(initialRows, row(0, 1, 2)); // not found!!
>     }
> }
> {code}
> This edge case affects queries using {{ALLOW FILTERING}} or any index 
> implementation.
> It affects all branches since multi-column replica-side filtering queries 
> were introduced, long before 3.0.
> The protection mechanism added by CASSANDRA-8272/8273 won't deal with this 
> case, since it only solves single-column conflicts where stale rows could 
> resurrect. This bug however doesn't resurrect data, it can only miss rows 
> while the replicas are out-of-sync.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] (CASSANDRA-19007) Queries with multi-column replica-side filtering can miss rows

2023-11-08 Thread Jeremiah Jordan (Jira)


[ https://issues.apache.org/jira/browse/CASSANDRA-19007 ]


Jeremiah Jordan deleted comment on CASSANDRA-19007:
-

was (Author: jjordan):
After digging into this issue more, with discussion on slack, I do not see a 
way we could performantly resolve this. It's not possible for any of the nodes 
to have answered the query: this case only happens if the data is completely 
disjoint across all nodes in the cluster. The only way to resolve it would be 
to send the full data sets back to the coordinator for all queries at ALL, 
which seems unreasonable. I also don't think there are any guardrails or 
anything we could put in place that would help here. I think this is just a 
query that is going to fail until consistency is resolved.

> Queries with multi-column replica-side filtering can miss rows
> --
>
> Key: CASSANDRA-19007
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19007
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Andres de la Peña
>Assignee: Caleb Rackliffe
>Priority: Normal
>
> {{SELECT}} queries with multi-column replica-side filtering can miss rows if 
> the filtered columns are spread across out-of-sync replicas. This dtest 
> reproduces the issue:
> {code:java}
> @Test
> public void testMultiColumnReplicaSideFiltering() throws IOException
> {
>     try (Cluster cluster = init(Cluster.build().withNodes(2).start()))
>     {
>         cluster.schemaChange(withKeyspace("CREATE TABLE %s.t (k int PRIMARY KEY, a int, b int)"));
>         // insert a split row
>         cluster.get(1).executeInternal(withKeyspace("INSERT INTO %s.t(k, a) VALUES (0, 1)"));
>         cluster.get(2).executeInternal(withKeyspace("INSERT INTO %s.t(k, b) VALUES (0, 2)"));
>         String select = withKeyspace("SELECT * FROM %s.t WHERE a = 1 AND b = 2 ALLOW FILTERING");
>         Object[][] initialRows = cluster.coordinator(1).execute(select, ALL);
>         assertRows(initialRows, row(0, 1, 2)); // not found!!
>     }
> }
> {code}
> This edge case affects queries using {{ALLOW FILTERING}} or any index 
> implementation.
> It affects all branches since multi-column replica-side filtering queries 
> were introduced, long before 3.0.
> The protection mechanism added by CASSANDRA-8272/8273 won't deal with this 
> case, since it only solves single-column conflicts where stale rows could 
> resurrect. This bug however doesn't resurrect data, it can only miss rows 
> while the replicas are out-of-sync.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-19007) Queries with multi-column replica-side filtering can miss rows

2023-11-08 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17784122#comment-17784122
 ] 

Jeremiah Jordan edited comment on CASSANDRA-19007 at 11/8/23 7:13 PM:
--

After digging into this issue more, with discussion on slack, I do not see a 
way we could performantly resolve this. It's not possible for any of the nodes 
to have answered the query: this case only happens if the data is completely 
disjoint across all nodes in the cluster. The only way to resolve it would be 
to send the full data sets back to the coordinator for all queries at ALL, 
which seems unreasonable. I also don't think there are any guardrails or 
anything we could put in place that would help here. I think this is just a 
query that is going to fail until consistency is resolved.


was (Author: jjordan):
After digging into this issue more, with discussion on slack, I do not see a 
way we could possibly resolve this. It's not possible for any of the nodes to 
have answered the query: this case only happens if the data is completely 
disjoint across all nodes in the cluster. The only way to resolve it would be 
to send the full data sets back to the coordinator for all queries at ALL, 
which seems unreasonable. I also don't think there are any guardrails or 
anything we could put in place that would help here. I think this is just a 
query that is going to fail until consistency is resolved.

> Queries with multi-column replica-side filtering can miss rows
> --
>
> Key: CASSANDRA-19007
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19007
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Andres de la Peña
>Assignee: Caleb Rackliffe
>Priority: Normal
>
> {{SELECT}} queries with multi-column replica-side filtering can miss rows if 
> the filtered columns are spread across out-of-sync replicas. This dtest 
> reproduces the issue:
> {code:java}
> @Test
> public void testMultiColumnReplicaSideFiltering() throws IOException
> {
>     try (Cluster cluster = init(Cluster.build().withNodes(2).start()))
>     {
>         cluster.schemaChange(withKeyspace("CREATE TABLE %s.t (k int PRIMARY KEY, a int, b int)"));
>         // insert a split row
>         cluster.get(1).executeInternal(withKeyspace("INSERT INTO %s.t(k, a) VALUES (0, 1)"));
>         cluster.get(2).executeInternal(withKeyspace("INSERT INTO %s.t(k, b) VALUES (0, 2)"));
>         String select = withKeyspace("SELECT * FROM %s.t WHERE a = 1 AND b = 2 ALLOW FILTERING");
>         Object[][] initialRows = cluster.coordinator(1).execute(select, ALL);
>         assertRows(initialRows, row(0, 1, 2)); // not found!!
>     }
> }
> {code}
> This edge case affects queries using {{ALLOW FILTERING}} or any index 
> implementation.
> It affects all branches since multi-column replica-side filtering queries 
> were introduced, long before 3.0.
> The protection mechanism added by CASSANDRA-8272/8273 won't deal with this 
> case, since it only solves single-column conflicts where stale rows could 
> resurrect. This bug however doesn't resurrect data, it can only miss rows 
> while the replicas are out-of-sync.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-19007) Queries with multi-column replica-side filtering can miss rows

2023-11-08 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17784122#comment-17784122
 ] 

Jeremiah Jordan commented on CASSANDRA-19007:
-

After digging into this issue more, with discussion on slack, I do not see a 
way we could possibly resolve this. It's not possible for any of the nodes to 
have answered the query: this case only happens if the data is completely 
disjoint across all nodes in the cluster. The only way to resolve it would be 
to send the full data sets back to the coordinator for all queries at ALL, 
which seems unreasonable. I also don't think there are any guardrails or 
anything we could put in place that would help here. I think this is just a 
query that is going to fail until consistency is resolved.
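
To make the failure mode concrete, here is a toy model of the split row from 
the description (plain Java, not project code):

{code:java}
// Each replica filters on a = 1 AND b = 2 before replying, and neither
// replica's partial copy of row k=0 matches, so the coordinator receives
// nothing to reconcile even though the merged row would match.
import java.util.Objects;

public class SplitRowDemo
{
    record Row(Integer a, Integer b) {}

    static boolean matches(Row r)
    {
        return Objects.equals(r.a(), 1) && Objects.equals(r.b(), 2);
    }

    public static void main(String[] args)
    {
        Row replica1 = new Row(1, null); // INSERT INTO t(k, a) VALUES (0, 1) landed here
        Row replica2 = new Row(null, 2); // INSERT INTO t(k, b) VALUES (0, 2) landed here

        System.out.println("replica1 replies with the row: " + matches(replica1)); // false
        System.out.println("replica2 replies with the row: " + matches(replica2)); // false

        // What the coordinator would see after merging both copies: (1, 2) matches.
        Row merged = new Row(replica1.a(), replica2.b());
        System.out.println("merged row matches: " + matches(merged)); // true
    }
}
{code}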

> Queries with multi-column replica-side filtering can miss rows
> --
>
> Key: CASSANDRA-19007
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19007
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Andres de la Peña
>Assignee: Caleb Rackliffe
>Priority: Normal
>
> {{SELECT}} queries with multi-column replica-side filtering can miss rows if 
> the filtered columns are spread across out-of-sync replicas. This dtest 
> reproduces the issue:
> {code:java}
> @Test
> public void testMultiColumnReplicaSideFiltering() throws IOException
> {
>     try (Cluster cluster = init(Cluster.build().withNodes(2).start()))
>     {
>         cluster.schemaChange(withKeyspace("CREATE TABLE %s.t (k int PRIMARY KEY, a int, b int)"));
>         // insert a split row
>         cluster.get(1).executeInternal(withKeyspace("INSERT INTO %s.t(k, a) VALUES (0, 1)"));
>         cluster.get(2).executeInternal(withKeyspace("INSERT INTO %s.t(k, b) VALUES (0, 2)"));
>         String select = withKeyspace("SELECT * FROM %s.t WHERE a = 1 AND b = 2 ALLOW FILTERING");
>         Object[][] initialRows = cluster.coordinator(1).execute(select, ALL);
>         assertRows(initialRows, row(0, 1, 2)); // not found!!
>     }
> }
> {code}
> This edge case affects queries using {{ALLOW FILTERING}} or any index 
> implementation.
> It affects all branches since multi-column replica-side filtering queries 
> were introduced, long before 3.0.
> The protection mechanism added by CASSANDRA-8272/8273 won't deal with this 
> case, since it only solves single-column conflicts where stale rows could 
> resurrect. This bug however doesn't resurrect data, it can only miss rows 
> while the replicas are out-of-sync.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18993) Harry-found silent data loss issue

2023-11-08 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-18993:

Description: 
Harry has discovered a silent data loss bug in trunk, but it goes all the way 
back to early 5.0.

Some rows are not visible after flush. Compared Harry repro running a memtable 
instead of a regular Harry model, and it still reproduces (in other words, it 
is almost certainly not a Harry issue).

Simple schema, and only one flush is required for repro, so not an unlikely 
bug. Max partition size is 1k rows, so not an unlikely setup here, either. No 
concurrency involved; reproduces stably. Good chance this is related to the 
fact the schema has DESC clusterings.

I am working on posting a Harry branch that stably reproduces it.

 

  was:
Harry has discovered a silent data loss bug in trunk, but it goes all the way 
back to appx 4.1.

Some rows are not visible after flush. Compared Harry repro running a memtable 
instead of a regular Harry model, and it still reproduces (in other words, it 
is almost certainly not a Harry issue).

Simple schema, and only one flush is required for repro, so not an unlikely 
bug. Max partition size is 1k rows, so not an unlikely setup here, either. No 
concurrency involved; reproduces stably. Good chance this is related to the 
fact the schema has DESC clusterings.

I am working on posting a Harry branch that stably reproduces it.

 


> Harry-found silent data loss issue
> --
>
> Key: CASSANDRA-18993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18993
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Core
>Reporter: Alex Petrov
>Assignee: Jacek Lewandowski
>Priority: Normal
> Fix For: 5.0-beta, 5.x
>
>
> Harry has discovered a silent data loss bug in trunk, but it goes all the way 
> back to early 5.0.
> Some rows are not visible after flush. Compared Harry repro running a memtable 
> instead of a regular Harry model, and it still reproduces (in other words, it 
> is almost certainly not a Harry issue).
> Simple schema, and only one flush is required for repro, so not an unlikely 
> bug. Max partition size is 1k rows, so not an unlikely setup here, either. No 
> concurrency involved; reproduces stably. Good chance this is related to the 
> fact the schema has DESC clusterings.
> I am working on posting a Harry branch that stably reproduces it.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18993) Harry-found silent data loss issue

2023-11-08 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-18993:

Fix Version/s: (was: 4.1.x)

> Harry-found silent data loss issue
> --
>
> Key: CASSANDRA-18993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18993
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Core
>Reporter: Alex Petrov
>Assignee: Jacek Lewandowski
>Priority: Normal
> Fix For: 5.0-beta, 5.x
>
>
> Harry has discovered a silent data loss bug in trunk, but it goes all the way 
> back to appx 4.1.
> Some rows are not visible after flush. Compared Harry repro running a memtable 
> instead of a regular Harry model, and it still reproduces (in other words, it 
> is almost certainly not a Harry issue).
> Simple schema, and only one flush is required for repro, so not an unlikely 
> bug. Max partition size is 1k rows, so not an unlikely setup here, either. No 
> concurrency involved; reproduces stably. Good chance this is related to the 
> fact the schema has DESC clusterings.
> I am working on posting a Harry branch that stably reproduces it.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18993) Harry-found silent data loss issue

2023-11-08 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17784097#comment-17784097
 ] 

Jeremiah Jordan commented on CASSANDRA-18993:
-

{quote}it says "approximately 4.1": at that moment I knew only a lower bound 
for when the issue was not yet present, and I did not know specific version 
where it was present
{quote}
(y)

> Harry-found silent data loss issue
> --
>
> Key: CASSANDRA-18993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18993
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Core
>Reporter: Alex Petrov
>Assignee: Jacek Lewandowski
>Priority: Normal
> Fix For: 4.1.x, 5.0-beta, 5.x
>
>
> Harry has discovered a silent data loss bug in trunk, but it goes all the way 
> back to appx 4.1.
> Some rows are not visible after flush. Compared Harry repro running a memtable 
> instead of a regular Harry model, and it still reproduces (in other words, it 
> is almost certainly not a Harry issue).
> Simple schema, and only one flush is required for repro, so not an unlikely 
> bug. Max partition size is 1k rows, so not an unlikely setup here, either. No 
> concurrency involved; reproduces stably. Good chance this is related to the 
> fact the schema has DESC clusterings.
> I am working on posting a Harry branch that stably reproduces it.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18993) Harry-found silent data loss issue

2023-11-08 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17784062#comment-17784062
 ] 

Jeremiah Jordan commented on CASSANDRA-18993:
-

The description here says you reproduced this issue on 4.1. So do we have a 
similar but different issue in 4.1? Or was that reproduction not correct for 
some reason?

> Harry-found silent data loss issue
> --
>
> Key: CASSANDRA-18993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18993
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Core
>Reporter: Alex Petrov
>Assignee: Jacek Lewandowski
>Priority: Normal
> Fix For: 4.1.x, 5.0-beta, 5.x
>
>
> Harry has discovered a silent data loss bug in trunk, but it goes all the way 
> back to appx 4.1.
> Some rows are not visible after flush. Compared Harry repro running a memtable 
> instead of a regular Harry model, and it still reproduces (in other words, it 
> is almost certainly not a Harry issue).
> Simple schema, and only one flush is required for repro, so not an unlikely 
> bug. Max partition size is 1k rows, so not an unlikely setup here, either. No 
> concurrency involved; reproduces stably. Good chance this is related to the 
> fact the schema has DESC clusterings.
> I am working on posting a Harry branch that stably reproduces it.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18934) Downgrade to 4.1 fails due to schema changes

2023-11-06 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783480#comment-17783480
 ] 

Jeremiah Jordan commented on CASSANDRA-18934:
-

By clean commitlog I mean no commitlog: you shut down using {{nodetool drain}} 
so that the commitlog is cleared out on shutdown.

> Downgrade to 4.1 fails due to schema changes
> 
>
> Key: CASSANDRA-18934
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18934
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Startup and Shutdown
>Reporter: David Capwell
>Assignee: Maxwell Guo
>Priority: Normal
> Fix For: 5.0-beta, 5.0.x, 5.x
>
>
> We are required to support 5.0 downgrading to 4.1 as a migration step, but we 
> don’t have tests to show this is working… I wrote a quick test to make sure a 
> change we needed in Accord wouldn’t block the downgrade and see that we fail 
> right now.
> {code}
> ERROR 20:56:39 Exiting due to error while processing commit log during initialization.
> org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: Unexpected error deserializing mutation; saved to /var/folders/h1/s_3p1x3s3hl0hltbpck67m0hgn/T/mutation418421767150092dat.  This may be caused by replaying a mutation against a table with the same name but incompatible schema.  Exception follows: java.lang.RuntimeException: Unknown column compaction_properties during deserialization
>     at org.apache.cassandra.db.commitlog.CommitLogReader.readMutation(CommitLogReader.java:464)
>     at org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:397)
>     at org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:244)
>     at org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:147)
>     at org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:191)
>     at org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:223)
>     at org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:204)
> {code}
> This was caused by a schema change in CASSANDRA-18061
> {code}
> /*
>  * Licensed to the Apache Software Foundation (ASF) under one
>  * or more contributor license agreements.  See the NOTICE file
>  * distributed with this work for additional information
>  * regarding copyright ownership.  The ASF licenses this file
>  * to you under the Apache License, Version 2.0 (the
>  * "License"); you may not use this file except in compliance
>  * with the License.  You may obtain a copy of the License at
>  *
>  * http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>  * See the License for the specific language governing permissions and
>  * limitations under the License.
>  */
> package org.apache.cassandra.distributed.upgrade;
> import java.io.IOException;
> import java.io.File;
> import java.util.concurrent.atomic.AtomicBoolean;
> import org.junit.Test;
> import org.apache.cassandra.distributed.api.IUpgradeableInstance;
> public class DowngradeTest extends UpgradeTestBase
> {
>     @Test
>     public void test() throws Throwable
>     {
>         AtomicBoolean first = new AtomicBoolean(true);
>         new TestCase()
>             .nodes(1)
>             .withConfig(c -> {
>                 if (first.compareAndSet(true, false))
>                     c.set("storage_compatibility_mode", "CASSANDRA_4");
>             })
>             .downgradeTo(v41)
>             .setup(cluster -> {})
>             // Uncomment if you want to test what happens after reading the commit log, which fails right now
>             //.runBeforeNodeRestart((cluster, nodeId) -> {
>             //    IUpgradeableInstance inst = cluster.get(nodeId);
>             //    File f = new File((String) inst.config().get("commitlog_directory"));
>             //    deleteRecursive(f);
>             //})
>             .runAfterClusterUpgrade(cluster -> {})
>             .run();
>     }
>
>     private void deleteRecursive(File f)
>     {
>         if (f.isDirectory())
>         {
>             File[] children = f.listFiles();
>             if (children != null)
>             {
>                 for (File c : children)
>                     deleteRecursive(c);
>             }
>         }
>         f.delete();
>     }
> }
> {code}
> {code}
> diff --git 
> a/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
>  
> b/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
> index 5ee8780204..b

[jira] [Commented] (CASSANDRA-18934) Downgrade to 4.1 fails due to schema changes

2023-11-06 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783259#comment-17783259
 ] 

Jeremiah Jordan commented on CASSANDRA-18934:
-

For 5.0 I think we have said we will support writing out the 4.1 sstable 
format, but not direct downgrade?  So you will need to have a clean commit log 
and remove system tables to downgrade back to 4.1.

> Downgrade to 4.1 fails due to schema changes
> 
>
> Key: CASSANDRA-18934
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18934
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Startup and Shutdown
>Reporter: David Capwell
>Assignee: Maxwell Guo
>Priority: Normal
> Fix For: 5.0-beta, 5.0.x, 5.x
>
>
> We are required to support 5.0 downgrading to 4.1 as a migration step, but we 
> don’t have tests to show this is working… I wrote a quick test to make sure a 
> change we needed in Accord wouldn’t block the downgrade and see that we fail 
> right now.
> {code}
> ERROR 20:56:39 Exiting due to error while processing commit log during initialization.
> org.apache.cassandra.db.commitlog.CommitLogReadHandler$CommitLogReadException: Unexpected error deserializing mutation; saved to /var/folders/h1/s_3p1x3s3hl0hltbpck67m0hgn/T/mutation418421767150092dat.  This may be caused by replaying a mutation against a table with the same name but incompatible schema.  Exception follows: java.lang.RuntimeException: Unknown column compaction_properties during deserialization
>     at org.apache.cassandra.db.commitlog.CommitLogReader.readMutation(CommitLogReader.java:464)
>     at org.apache.cassandra.db.commitlog.CommitLogReader.readSection(CommitLogReader.java:397)
>     at org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:244)
>     at org.apache.cassandra.db.commitlog.CommitLogReader.readCommitLogSegment(CommitLogReader.java:147)
>     at org.apache.cassandra.db.commitlog.CommitLogReplayer.replayFiles(CommitLogReplayer.java:191)
>     at org.apache.cassandra.db.commitlog.CommitLog.recoverFiles(CommitLog.java:223)
>     at org.apache.cassandra.db.commitlog.CommitLog.recoverSegmentsOnDisk(CommitLog.java:204)
> {code}
> This was caused by a schema change in CASSANDRA-18061
> {code}
> /*
>  * Licensed to the Apache Software Foundation (ASF) under one
>  * or more contributor license agreements.  See the NOTICE file
>  * distributed with this work for additional information
>  * regarding copyright ownership.  The ASF licenses this file
>  * to you under the Apache License, Version 2.0 (the
>  * "License"); you may not use this file except in compliance
>  * with the License.  You may obtain a copy of the License at
>  *
>  * http://www.apache.org/licenses/LICENSE-2.0
>  *
>  * Unless required by applicable law or agreed to in writing, software
>  * distributed under the License is distributed on an "AS IS" BASIS,
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>  * See the License for the specific language governing permissions and
>  * limitations under the License.
>  */
> package org.apache.cassandra.distributed.upgrade;
> import java.io.IOException;
> import java.io.File;
> import java.util.concurrent.atomic.AtomicBoolean;
> import org.junit.Test;
> import org.apache.cassandra.distributed.api.IUpgradeableInstance;
> public class DowngradeTest extends UpgradeTestBase
> {
>     @Test
>     public void test() throws Throwable
>     {
>         AtomicBoolean first = new AtomicBoolean(true);
>         new TestCase()
>             .nodes(1)
>             .withConfig(c -> {
>                 if (first.compareAndSet(true, false))
>                     c.set("storage_compatibility_mode", "CASSANDRA_4");
>             })
>             .downgradeTo(v41)
>             .setup(cluster -> {})
>             // Uncomment if you want to test what happens after reading the commit log, which fails right now
>             //.runBeforeNodeRestart((cluster, nodeId) -> {
>             //    IUpgradeableInstance inst = cluster.get(nodeId);
>             //    File f = new File((String) inst.config().get("commitlog_directory"));
>             //    deleteRecursive(f);
>             //})
>             .runAfterClusterUpgrade(cluster -> {})
>             .run();
>     }
>
>     private void deleteRecursive(File f)
>     {
>         if (f.isDirectory())
>         {
>             File[] children = f.listFiles();
>             if (children != null)
>             {
>                 for (File c : children)
>                     deleteRecursive(c);
>             }
>         }
>         f.delete();
>     }
> }
> {code}
> {code}
> diff --git 
> a/test/distributed/org/apache/cassandra/distributed/upgrade/UpgradeTestBase.java
>  
> b/test/distributed/

[jira] [Commented] (CASSANDRA-18940) SAI post-filtering reads don't update local table latency metrics

2023-11-06 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783257#comment-17783257
 ] 

Jeremiah Jordan commented on CASSANDRA-18940:
-

Should this update the normal local table read metrics? I'm not sure it 
should; it should probably have its own metric for that.

> SAI post-filtering reads don't update local table latency metrics
> -
>
> Key: CASSANDRA-18940
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18940
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/2i Index, Feature/SAI, Observability/Metrics
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0-beta, 5.x
>
> Attachments: 
> draft_fix_for_SAI_post-filtering_reads_not_updating_local_table_metrics.patch
>
>
> Once an SAI index finds matches (primary keys), it reads the associated rows 
> and post-filters them to incorporate partial writes, tombstones, etc. 
> However, those reads are not currently updating the local table latency 
> metrics. It should be simple enough to attach a metrics recording 
> transformation to the iterator produced by querying local storage. (I've 
> attached a patch that should apply cleanly to trunk, but there may be a 
> better way...)
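
Generically, the shape of such a transformation looks like the wrapper below 
(this is not Cassandra's actual Transformation API, just an illustration of 
recording latency when an iterator is closed):

{code:java}
// Wrap an iterator and push its lifetime into a latency recorder on close,
// mirroring how a read's latency would be recorded once the post-filtering
// read completes.
import java.util.Iterator;
import java.util.function.LongConsumer;

final class LatencyRecordingIterator<T> implements Iterator<T>, AutoCloseable
{
    private final Iterator<T> delegate;
    private final LongConsumer latencyNanos;
    private final long startNanos = System.nanoTime();

    LatencyRecordingIterator(Iterator<T> delegate, LongConsumer latencyNanos)
    {
        this.delegate = delegate;
        this.latencyNanos = latencyNanos;
    }

    @Override
    public boolean hasNext()
    {
        return delegate.hasNext();
    }

    @Override
    public T next()
    {
        return delegate.next();
    }

    @Override
    public void close()
    {
        latencyNanos.accept(System.nanoTime() - startNanos);
    }
}
{code}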



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18710) Test failure: org.apache.cassandra.io.DiskSpaceMetricsTest.testFlushSize-.jdk17 (from org.apache.cassandra.io.DiskSpaceMetricsTest-.jdk17)

2023-11-06 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-18710:

Fix Version/s: 4.1.x

> Test failure: 
> org.apache.cassandra.io.DiskSpaceMetricsTest.testFlushSize-.jdk17 (from 
> org.apache.cassandra.io.DiskSpaceMetricsTest-.jdk17)
> --
>
> Key: CASSANDRA-18710
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18710
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Ekaterina Dimitrova
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1.x, 5.0-beta, 5.0.x, 5.x
>
> Attachments: org.apache.cassandra.io.DiskSpaceMetricsTest.txt
>
>
> Seen here:
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1644/testReport/org.apache.cassandra.io/DiskSpaceMetricsTest/testFlushSize__jdk17/]
> h3.  
> {code:java}
> Error Message
> expected:<7200.0> but was:<1367.83970468544>
> Stacktrace
> junit.framework.AssertionFailedError: expected:<7200.0> but was:<1367.83970468544>
>     at org.apache.cassandra.io.DiskSpaceMetricsTest.testFlushSize(DiskSpaceMetricsTest.java:119)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18993) Harry-found silent data loss issue

2023-11-04 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17782913#comment-17782913
 ] 

Jeremiah Jordan commented on CASSANDRA-18993:
-

What’s the schema?

> Harry-found silent data loss issue
> --
>
> Key: CASSANDRA-18993
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18993
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Core
>Reporter: Alex Petrov
>Priority: Normal
> Fix For: 4.1.x, 5.0-beta, 5.x
>
>
> Harry has discovered a silent data loss bug in trunk, but it goes all the way 
> back to appx 4.1.
> Some rows are not visible after flush. Compared Harry repro running a memtable 
> instead of a regular Harry model, and it still reproduces (in other words, it 
> is almost certainly not a Harry issue).
> Simple schema, and only one flush is required for repro, so not an unlikely 
> bug. Max partition size is 1k rows, so not an unlikely setup here, either. No 
> concurrency involved; reproduces stably. Good chance this is related to the 
> fact the schema has DESC clusterings.
> I am working on posting a Harry branch that stably reproduces it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18892) Distributed tests can return ordering columns that have not been selected

2023-09-29 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17770432#comment-17770432
 ] 

Jeremiah Jordan commented on CASSANDRA-18892:
-

Looks good to me. Nice catch. 

> Distributed tests can return ordering columns that have not been selected
> -
>
> Key: CASSANDRA-18892
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18892
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/java
>Reporter: Mike Adamson
>Assignee: Mike Adamson
>Priority: Normal
>
> The following test fails
> {code:java}
> @Test
> public void incorrectClusteringColumnTest() throws IOException
> {
>     try (Cluster cluster = init(Cluster.build(1).start()))
>     {
>         cluster.schemaChange(withKeyspace("CREATE TABLE %s.t (k int, c int, v int, primary key(k, c))"));
>         cluster.coordinator(1).execute(withKeyspace("INSERT INTO %s.t (k, c, v) VALUES (0, 1, 2)"), ConsistencyLevel.QUORUM);
>         String query = withKeyspace("SELECT v FROM %s.t WHERE k IN (0, 1) ORDER BY c LIMIT 10");
>         assertRows(cluster.coordinator(1).execute(query, ConsistencyLevel.ONE), row(2));
>     }
> }
> {code}
> The query is returning the clustering column c as well as the regular column 
> v.
> The reason for the extra column being returned is that the RowUtil is using 
> ResultMessage.Rows.result.metadata.names instead on 
> ResultMessage.Rows.result.metadata.requestNames(). This last method removes 
> columns that have not been requested.
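
A minimal illustration of the difference (plain Java, not the actual RowUtil 
code; the real fix is switching to {{requestNames()}}):

{code:java}
// Project result rows down to only the columns the query actually selected.
import java.util.List;
import java.util.Map;

public class ProjectionDemo
{
    public static void main(String[] args)
    {
        List<String> allColumns = List.of("c", "v"); // metadata.names: ordering column c leaks in
        List<String> requested = List.of("v");       // metadata.requestNames(): only what was selected
        Map<String, Object> row = Map.of("c", 1, "v", 2);

        List<Object> wrong = allColumns.stream().map(row::get).toList(); // [1, 2]
        List<Object> right = requested.stream().map(row::get).toList();  // [2]
        System.out.println("names: " + wrong + ", requestNames: " + right);
    }
}
{code}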



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18836) Replace CRC32 w/ CRC32C in IndexFileUtils.ChecksummingWriter

2023-09-12 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17764347#comment-17764347
 ] 

Jeremiah Jordan commented on CASSANDRA-18836:
-

Yeah. For SAI I think we just change it.

> Replace CRC32 w/ CRC32C in IndexFileUtils.ChecksummingWriter
> 
>
> Key: CASSANDRA-18836
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18836
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/2i Index, Feature/SAI
>Reporter: Caleb Rackliffe
>Priority: Normal
>
> It seems that now we're on Java 11 for 5.0, there isn't much reason not to 
> use CRC32C as a drop-in replacement for CRC32. SAI isn't even released, so 
> has no binary compatibility entanglements, and this should be pretty 
> straightforward.
> See https://github.com/apache/bookkeeper/pull/3309



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16360) CRC32 is inefficient on x86

2023-09-11 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17763835#comment-17763835
 ] 

Jeremiah Jordan commented on CASSANDRA-16360:
-

Now that we are on JDK 11 as the min, +1 for using JDK CRC32C where we can 
rather than CRC32.  For new things I think those should just use CRC32C from 
the start and we can look into a migration plan for existing uses of CRC32.

> CRC32 is inefficient on x86
> ---
>
> Key: CASSANDRA-16360
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16360
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Messaging/Client
>Reporter: Avi Kivity
>Assignee: Maxim Muzafarov
>Priority: Normal
>  Labels: protocolv6
> Fix For: 5.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The client/server protocol specifies CRC24 and CRC32 as the checksum 
> algorithms (cql_protocol_V5_framing.asc). Those, however, are expensive to 
> compute; this affects both the client and the server.
>  
> A better checksum algorithm is CRC32C, which has hardware support on x86 (as 
> well as other modern architectures).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18821) org.apache.cassandra.distributed.test.guardrails.GuardrailDiskUsageTest failed with Authentication error on host /127.0.0.1:9042: Provided username cassandra and/o

2023-09-05 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762104#comment-17762104
 ] 

Jeremiah Jordan commented on CASSANDRA-18821:
-

Yes. There appear to be many tests which do that already. For example: 
[https://github.com/apache/cassandra/blob/trunk/test/distributed/org/apache/cassandra/distributed/test/UpdateSystemAuthAfterDCExpansionTest.java#L118]

I wonder if dtests should just always set the delay to 0.
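
For reference, the pattern those tests use is to zero the asynchronous superuser-setup 
delay before the cluster starts. A sketch (the property name is the one 
CassandraRoleManager reads; the test scaffolding is illustrative):

{code:java}
import org.apache.cassandra.distributed.Cluster;
import org.apache.cassandra.distributed.api.Feature;

public class NoAuthDelaySketch
{
    public static void main(String[] args) throws Exception
    {
        // Illustrative: disable the default delayed superuser/role setup so
        // authentication works immediately after the cluster starts.
        System.setProperty("cassandra.superuser_setup_delay_ms", "0");

        try (Cluster cluster = Cluster.build(1)
                                      .withConfig(c -> c.with(Feature.GOSSIP, Feature.NATIVE_PROTOCOL))
                                      .start())
        {
            // auth-dependent queries here no longer race the async role setup
        }
    }
}
{code}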

> org.apache.cassandra.distributed.test.guardrails.GuardrailDiskUsageTest 
> failed with Authentication error on host /127.0.0.1:9042: Provided username 
> cassandra and/or password are incorrect
> ---
>
> Key: CASSANDRA-18821
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18821
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/java
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
>Priority: Normal
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-18796) Optionally fail when a non-partition-restricted query is issued against a storage-attached index with a backing table using LCS

2023-08-30 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17760523#comment-17760523
 ] 

Jeremiah Jordan edited comment on CASSANDRA-18796 at 8/30/23 4:35 PM:
--

1024 is not comically high.  You have that many with a 3-level LCS.  I am again 
-1 on having any kind of fail threshold by default.  Those slow queries are not 
causing any harm besides being slow, so I see no reason to fail them.  The WARN 
threshold is already there to tell the user why their queries are slow.  No 
need to start failing all their queries when one more sstable shows up to go 
from 1023 to 1024.  This is why I am against a fail threshold: someone who is 
doing this will have an app that is "working", possibly slowly, and then boom 
they flush that file to go over the limit and everything falls over from the 
failure threshold being violated.


was (Author: jjordan):
1024 is not comically high.  You have that many with a 3-level LCS.  I am again 
-1 on having any kind of fail threshold by default.  Those slow queries are not 
causing any harm besides being slow, so I see no reason to fail them.  The WARN 
threshold is already there to tell the user why their queries are slow.  No 
need to start failing all their queries when one more sstable shows up to go 
from 1023 to 1024.  This is why I am against a fail threshold: someone who is 
doing this will have an app that is "working", possibly slowly, and then boom 
they flush that file to go over the limit and everything falls over.

> Optionally fail when a non-partition-restricted query is issued against a 
> storage-attached index with a backing table using LCS
> ---
>
> Key: CASSANDRA-18796
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18796
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/2i Index, Feature/SAI, Local/Compaction/LCS
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>
> With LCS, we will have potentially thousands of SSTables for a given user 
> table. Storage-attached also means SSTable-attached, and searching thousands 
> of attached indexes is not going to scale well at all locally, due to the 
> sheer number of searches and amount of postings list merging involved. We 
> should have a guardrail to prohibit this by default.
> Partition-restricted queries, the use-case SAI is broadly designed for, 
> should be very efficient.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-18796) Optionally fail when a non-partition-restricted query is issued against a storage-attached index with a backing table using LCS

2023-08-30 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17760523#comment-17760523
 ] 

Jeremiah Jordan edited comment on CASSANDRA-18796 at 8/30/23 4:35 PM:
--

1024 is not comically high.  You have that many with a 3-level LCS.  I am again 
-1 on having any kind of fail threshold by default.  Those slow queries are not 
causing any harm besides being slow, so I see no reason to fail them.  The WARN 
threshold is already there to tell the user why their queries are slow.  No 
need to start failing all their queries when one more sstable shows up to go 
from 1023 to 1024.  This is why I am against a fail threshold: someone who is 
doing this will have an app that is "working", possibly slowly, and then boom 
they flush that file to go over the limit and everything falls over.


was (Author: jjordan):
1024 is not comically high.  You have that many with a 3-level LCS.  I am again 
-1 on having any kind of fail threshold by default.  Those slow queries are not 
causing any harm besides being slow, so I see no reason to fail them.  The WARN 
threshold is already there to tell the user why their queries are slow.  No 
need to start failing all their queries when one more sstable shows up to go 
from 1023 to 1024.  This is why I am against a fail threshold: someone who is 
doing this will have an app that is "working", possibly slowly, and then boom 
they flush that file to go over the limit and everything falls over.

> Optionally fail when a non-partition-restricted query is issued against a 
> storage-attached index with a backing table using LCS
> ---
>
> Key: CASSANDRA-18796
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18796
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/2i Index, Feature/SAI, Local/Compaction/LCS
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>
> With LCS, we will have potentially thousands of SSTables for a given user 
> table. Storage-attached also means SSTable-attached, and searching thousands 
> of attached indexes is not going to scale well at all locally, due to the 
> sheer number of searches and amount of postings list merging involved. We 
> should have a guardrail to prohibit this by default.
> Partition-restricted queries, the use-case SAI is broadly designed for, 
> should be very efficient.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18796) Optionally fail when a non-partition-restricted query is issued against a storage-attached index with a backing table using LCS

2023-08-30 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17760523#comment-17760523
 ] 

Jeremiah Jordan commented on CASSANDRA-18796:
-

1024 is not comically high.  You have that many with a 3-level LCS.  I am again 
-1 on having any kind of fail threshold by default.  Those slow queries are not 
causing any harm besides being slow, so I see no reason to fail them.  The WARN 
threshold is already there to tell the user why their queries are slow.  No 
need to start failing all their queries when one more sstable shows up to go 
from 1023 to 1024.  This is why I am against a fail threshold: someone who is 
doing this will have an app that is "working", possibly slowly, and then boom 
they flush that file to go over the limit and everything falls over.

> Optionally fail when a non-partition-restricted query is issued against a 
> storage-attached index with a backing table using LCS
> ---
>
> Key: CASSANDRA-18796
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18796
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/2i Index, Feature/SAI, Local/Compaction/LCS
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>
> With LCS, we will have potentially thousands of SSTables for a given user 
> table. Storage-attached also means SSTable-attached, and searching thousands 
> of attached indexes is not going to scale well at all locally, due to the 
> sheer number of searches and amount of postings list merging involved. We 
> should have a guardrail to prohibit this by default.
> Partition-restricted queries, the use-case SAI is broadly designed for, 
> should be very efficient.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18796) Optionally fail when a non-partition-restricted query is issued against a storage-attached index with a backing table using LCS

2023-08-29 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759953#comment-17759953
 ] 

Jeremiah Jordan commented on CASSANDRA-18796:
-

I still think something like warn at 32? 50? and never fail is the right 
default. And maybe allow someone to set fail to zero to prevent all 
non-partition-restricted queries.
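
Sketched as cassandra.yaml guardrails, that default could look like the following 
(property names and values here are illustrative, following the project's existing 
warn/fail threshold convention):

{code}
# Illustrative only: warn when a query touches many sstable indexes, and keep
# the fail threshold disabled by default (-1 = disabled).
sai_sstable_indexes_per_query_warn_threshold: 32
sai_sstable_indexes_per_query_fail_threshold: -1
{code}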

> Optionally fail when a non-partition-restricted query is issued against a 
> storage-attached index with a backing table using LCS
> ---
>
> Key: CASSANDRA-18796
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18796
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/2i Index, Feature/SAI, Local/Compaction/LCS
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>
> With LCS, we will have potentially thousands of SSTables for a given user 
> table. Storage-attached also means SSTable-attached, and searching thousands 
> of attached indexes is not going to scale well at all locally, due to the 
> sheer number of searches and amount of postings list merging involved. We 
> should have a guardrail to prohibit this by default.
> Partition-restricted queries, the use-case SAI is broadly designed for, 
> should be very efficient.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-18796) Optionally fail when a non-partition-restricted query is issued against a storage-attached index with a backing table using LCS

2023-08-25 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759101#comment-17759101
 ] 

Jeremiah Jordan edited comment on CASSANDRA-18796 at 8/25/23 4:22 PM:
--

16/128 seems very low to me.  Do you have benchmarks to prove out where the 
number of sstables searched starts being problematic?  Also, from my 
understanding, SAI segments the index data within the files.  So how is 
searching 128 small sstables that different from one giant sstable that has 
128 index segments?

Not saying this isn't something we want to warn people about.  But I really 
think the warning is the main thing, not failing queries necessarily.  The 
queries will timeout/be slow if the number is excessive for someone's database, 
and we should make sure to provide the user with the possible reasons why in a 
warning and in metrics.


was (Author: jjordan):
16/128 seems very low to me.  Do you have benchmarks to prove out where the 
number of sstables searched starts being problematic?  Also, from my 
understanding, SAI segments the index data within the files.  So how is 
searching 128 small sstables that different from one giant sstable that has 
128 index segments?

Not saying this isn't something we want to warn people about.  But I really 
think the warning is the main thing, not failing queries necessarily.  The 
queries will timeout/be slow if the number is excessive for someone's database, 
and we should make sure to provide the user with the possible reasons why in a 
warning and in metrics.

> Optionally fail when a non-partition-restricted query is issued against a 
> storage-attached index with a backing table using LCS
> ---
>
> Key: CASSANDRA-18796
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18796
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/2i Index, Feature/SAI, Local/Compaction/LCS
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>
> With LCS, we will have potentially thousands of SSTables for a given user 
> table. Storage-attached also means SSTable-attached, and searching thousands 
> of attached indexes is not going to scale well at all locally, due to the 
> sheer number of searches and amount of postings list merging involved. We 
> should have a guardrail to prohibit this by default.
> Partition-restricted queries, the use-case SAI is broadly designed for, 
> should be very efficient.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18796) Optionally fail when a non-partition-restricted query is issued against a storage-attached index with a backing table using LCS

2023-08-25 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759101#comment-17759101
 ] 

Jeremiah Jordan commented on CASSANDRA-18796:
-

16/128 seems very low to me.  Do you have benchmarks to prove out where the 
number of sstables searched starts being problematic?  Also, from my 
understanding, SAI segments the index data within the files.  So how is 
searching 128 small sstables that different from one giant sstable that has 
128 index segments?

Not saying this isn't something we want to warn people about.  But I really 
think the warning is the main thing, not failing queries necessarily.  The 
queries will timeout/be slow if the number is excessive for someone's database, 
and we should make sure to provide the user with the possible reasons why in a 
warning and in metrics.

> Optionally fail when a non-partition-restricted query is issued against a 
> storage-attached index with a backing table using LCS
> ---
>
> Key: CASSANDRA-18796
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18796
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/2i Index, Feature/SAI, Local/Compaction/LCS
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>
> With LCS, we will have potentially thousands of SSTables for a given user 
> table. Storage-attached also means SSTable-attached, and searching thousands 
> of attached indexes is not going to scale well at all locally, due to the 
> sheer number of searches and amount of postings list merging involved. We 
> should have a guardrail to prohibit this by default.
> Partition-restricted queries, the use-case SAI is broadly designed for, 
> should be very efficient.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-18796) Optionally fail when a non-partition-restricted query is issued against a storage-attached index with a backing table using LCS

2023-08-25 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759076#comment-17759076
 ] 

Jeremiah Jordan edited comment on CASSANDRA-18796 at 8/25/23 3:18 PM:
--

I really don't think we should fail queries just because someone is using LCS.  
-1 on that hard failure.  You can change your LCS file size and then you will 
not have thousands of sstables.  It all comes down to tuning.

I think it would be more productive to issue a WARN if the number of sstables 
searched is over some threshold, and then suggest to the user that they should 
look into tuning their compaction options differently.


was (Author: jjordan):
I really don't think we should fail queries just because someone is using LCS.  
You can change your LCS file size and then you will not have thousands of 
sstables.  It all comes down to tuning.  I think it would be more productive to 
issue a WARN if the number of sstables searched is over some threshold, and 
then suggest to the user that they should look into tuning their compaction 
options differently.

> Optionally fail when a non-partition-restricted query is issued against a 
> storage-attached index with a backing table using LCS
> ---
>
> Key: CASSANDRA-18796
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18796
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/2i Index, Feature/SAI, Local/Compaction/LCS
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>
> With LCS, we will have potentially thousands of SSTables for a given user 
> table. Storage-attached also means SSTable-attached, and searching thousands 
> of attached indexes is not going to scale well at all locally, due to the 
> sheer number of searches and amount of postings list merging involved. We 
> should have a guardrail to prohibit this by default.
> Partition-restricted queries, the use-case SAI is broadly designed for, 
> should be very efficient.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18796) Optionally fail when a non-partition-restricted query is issued against a storage-attached index with a backing table using LCS

2023-08-25 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759076#comment-17759076
 ] 

Jeremiah Jordan commented on CASSANDRA-18796:
-

I really don't think we should fail queries just because someone is using LCS.  
You can change your LCS file size and then you will not have thousands of 
sstables.  It all comes down to tuning.  I think it would be more productive to 
issue a WARN if the number of sstables searched is over some threshold, and 
then suggest to the user that they should look into tuning their compaction 
options differently.

> Optionally fail when a non-partition-restricted query is issued against a 
> storage-attached index with a backing table using LCS
> ---
>
> Key: CASSANDRA-18796
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18796
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/2i Index, Feature/SAI, Local/Compaction/LCS
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.0.x, 5.x
>
>
> With LCS, we will have potentially thousands of SSTables for a given user 
> table. Storage-attached also means SSTable-attached, and searching thousands 
> of attached indexes is not going to scale well at all locally, due to the 
> sheer number of searches and amount of postings list merging involved. We 
> should have a guardrail to prohibit this by default.
> Partition-restricted queries, the use-case SAI is broadly designed for, 
> should be very efficient.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18693) User credentials are validated with LOCAL_QUORUM instead of with LOCAL_ONE

2023-07-26 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-18693:

Bug Category: Parent values: Documentation(13562)
 Component/s: Documentation/Website

> User credentials are validated with LOCAL_QUORUM instead of with LOCAL_ONE
> --
>
> Key: CASSANDRA-18693
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18693
> Project: Cassandra
>  Issue Type: Bug
>  Components: Documentation/Website
>Reporter: Michiel Saelen
>Priority: Normal
> Attachments: image.png
>
>
> The documentation 
> ([https://cassandra.apache.org/doc/latest/cassandra/operating/security.html]) 
> tells us that users should be authenticated with LOCAL_ONE (except the 
> default user).
> From logging we see it is trying with LOCAL_QUORUM:
> INFO [Native-Transport-Requests-1] 2023-07-24 14:51:11,526 NoSpamLogger.java:105 - 
> "Cannot achieve consistency level LOCAL_QUORUM" while executing SELECT 
> salted_hash FROM system_auth.roles WHERE role = 'test' ALLOW FILTERING
> This is detected on C* 4.1.2 (RHEL)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18615) CREATE INDEX Modifications for Initial Release of SAI

2023-07-07 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17741208#comment-17741208
 ] 

Jeremiah Jordan commented on CASSANDRA-18615:
-

Listing some names that come to mind after reading these comments:
local_table_legacy

equality_only_local_table

hidden_local_table

I like the word local being in the name since they use the LocalPartitioner.

> CREATE INDEX Modifications for Initial Release of SAI
> -
>
> Key: CASSANDRA-18615
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18615
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Syntax, Feature/SAI
>Reporter: Caleb Rackliffe
>Assignee: Caleb Rackliffe
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> After a lengthy discussion on the dev list, the community seems to have 
> arrived at the following list of TODOs before we release SAI in 5.0:
> 1.) CREATE INDEX should be expanded to support {{USING … WITH OPTIONS…}}
> Essentially, we should be able to do something like {{CREATE INDEX ON tbl(v) 
> USING 'sai' WITH OPTIONS = ...}} and {{CREATE INDEX ON tbl(v) USING 
> 'cassandra'}} as a more specific/complete way to emulate the current behavior 
> of {{CREATE INDEX}}.
> 2.) Allow operators to configure, in the YAML, a.) whether an index 
> implementation must be specified w/ USING and {{CREATE INDEX}} and b.) what 
> the default implementation will be, if {{USING}} isn’t required.
> 3.) The defaults we ship w/ will avoid breaking existing {{CREATE INDEX}} 
> usage. (i.e. A default is allowed, and that default will remain 'cassandra', 
> or the legacy 2i)
> With all this in place, users should be able create SAI indexes w/ the 
> simplest possible syntax, no defaults will change, and operators will have 
> the ability to change defaults to favor SAI whenever they like.
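
For illustration, the syntax described in point 1 above (the index names and the 
option key are placeholders, not confirmed by this ticket):

{code}
-- Explicitly select the index implementation, as described in point 1:
CREATE INDEX v_sai_idx ON tbl(v) USING 'sai' WITH OPTIONS = {'case_sensitive': 'false'};

-- Emulate today's default (the legacy 2i) explicitly:
CREATE INDEX v_legacy_idx ON tbl(v) USING 'cassandra';
{code}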



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18554) mTLS based client and internode authenticators

2023-06-13 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17732122#comment-17732122
 ] 

Jeremiah Jordan commented on CASSANDRA-18554:
-

I was just looking over the JIRA and noticed this adds new CQL syntax.  I think 
we should at the least have a VOTE on dev@ approving new CQL syntax, and at the 
most there should be a CEP about it.

> mTLS based client and internode authenticators
> --
>
> Key: CASSANDRA-18554
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18554
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Authorization
>Reporter: Jyothsna Konisa
>Assignee: Jyothsna Konisa
>Priority: Normal
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Cassandra currently doesn't have any certificate-based authenticator for 
> client connections or internode connections. If one wants to use a 
> certificate-based authentication protocol like mTLS, in which clients send 
> their certificates for the TLS handshake, we can leverage the information 
> from the client certificate to identify a client. Using this authentication 
> mechanism one can avoid the pain of password generation, sharing, and 
> rotation.
> Introducing the following certificate-based mTLS authenticators for internode 
> and client connections:
> MutualTlsAuthenticator (client authentication)
> MutualTlsInternodeAuthenticator (internode authentication)
> MutualTlsWithPasswordFallbackAuthenticator (for optional-mode operation for 
> client authentication)
> Also included is an implementation of MutualTlsCertificateValidator called 
> SpiffeCertificateValidator, whose identity is the SPIFFE embedded in the SAN 
> of the client certificate. One can implement their own CertificateValidator 
> to match their needs and configure it in cassandra.yaml.
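
A sketch of the kind of cassandra.yaml wiring this describes; the key names below 
are assumptions inferred from the class names above, not confirmed configuration:

{code}
# Sketch only; key names are inferred from the class names above.
authenticator:
  class_name: org.apache.cassandra.auth.MutualTlsAuthenticator
  parameters:
    validator_class_name: org.apache.cassandra.auth.SpiffeCertificateValidator

client_encryption_options:
  enabled: true
  require_client_auth: true   # clients must present a certificate
{code}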



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18120) Single slow node dramatically reduces cluster write throughput regardless of CL

2023-06-12 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17731781#comment-17731781
 ] 

Jeremiah Jordan commented on CASSANDRA-18120:
-

{quote}The dynamic snitch measures reads, so it's only useful for determining 
read performance on other nodes, not whether they are likely to accept a write.
{quote}
We did a lot of testing around this a while back, and as long as you have a 
read workload going on it is a pretty good proxy for how fast writes will be.

[https://docs.datastax.com/en/dse/6.8/dse-dev/datastax_enterprise/config/configCassandra_yaml.html#configCassandra_yaml__batchlog_endpoint_strategy]

In DataStax Enterprise we let users pick from three strategies for picking 
batchlog endpoints.
 # completely random: try not to use the local rack (similar to what happens 
today)
 # dynamic remote first: use dynamic snitch data if it exists, don't use the 
local rack if possible
 # dynamic allow local: use the dynamic snitch data if it exists, local node 
allowed.

If there is an ongoing read workload then dynamic remote first does a pretty 
good job of avoiding overloaded nodes, while still keeping it in another rack 
if possible.
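
In yaml form that choice is a single setting; the name and values below are taken 
from the DataStax docs linked above:

{code}
# Values per the linked DSE docs:
#   random_remote | dynamic_remote | dynamic
batchlog_endpoint_strategy: dynamic_remote
{code}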

> Single slow node dramatically reduces cluster write throughput regardless of 
> CL
> ---
>
> Key: CASSANDRA-18120
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18120
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Dan Sarisky
>Assignee: Maxim Chanturiay
>Priority: Normal
>
> We issue writes to Cassandra as logged batches (RF=3, consistency levels TWO, 
> QUORUM, or LOCAL_QUORUM).
>  
> On clusters of any size - a single extremely slow node causes a ~90% loss of 
> cluster-wide throughput using batched writes.  We can replicate this in the 
> lab via CPU or disk throttling.  I observe this in 3.11, 4.0, and 4.1.
>  
> It appears the mechanism in play is:
> Those logged batches are immediately written to two replica nodes and the 
> actual mutations aren't processed until those two nodes acknowledge the batch 
> statements.  Those replica nodes are selected randomly from all nodes in the 
> local data center currently up in gossip.  If a single node is slow, but 
> still thought to be up in gossip, this eventually causes every other node to 
> have all of its MutationStages to be waiting while the slow replica accepts 
> batch writes.
>  
> The code in play appears to be:
> See
> [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/locator/ReplicaPlans.java#L245].
>   In the method filterBatchlogEndpoints() there is a
> Collections.shuffle() to order the endpoints and a
> FailureDetector.isEndpointAlive() to test if the endpoint is acceptable.
>  
> This behavior causes Cassandra to move from a multi-node fault-tolerant 
> system to a collection of single points of failure.
>  
> We try to take administrator actions to kill off the extremely slow nodes, 
> but it would be great to have some notion of "what node is a bad choice" when 
> writing log batches to replica nodes.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-18548) [Analytics] Add .asf.yaml file to the Cassandra Analytics repository

2023-05-24 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17726001#comment-17726001
 ] 

Jeremiah Jordan edited comment on CASSANDRA-18548 at 5/24/23 10:47 PM:
---

Probably copy what harry has?

[https://github.com/apache/cassandra-harry/blob/trunk/.asf.yaml#L1-L4]


notifications:
  commits:      commits@cassandra.apache.org
  issues:       commits@cassandra.apache.org
  pullrequests: p...@cassandra.apache.org


was (Author: jjordan):
My guess would be we want to set issues, commits, and pullrequests 
notifications to all go to commits@

> [Analytics] Add .asf.yaml file to the Cassandra Analytics repository
> 
>
> Key: CASSANDRA-18548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18548
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Analytics Library
>Reporter: Francisco Guerrero
>Assignee: Francisco Guerrero
>Priority: Normal
> Fix For: 1.0.0
>
>
> We need to add the {{.asf.yaml}} file to be able to control notifications and 
> the GitHub settings for the Cassandra Analytics project.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18548) [Analytics] Add .asf.yaml file to the Cassandra Analytics repository

2023-05-24 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17726001#comment-17726001
 ] 

Jeremiah Jordan commented on CASSANDRA-18548:
-

My guess would be we want to set issues, commits, and pullrequests 
notifications to all go to commits@

> [Analytics] Add .asf.yaml file to the Cassandra Analytics repository
> 
>
> Key: CASSANDRA-18548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18548
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Analytics Library
>Reporter: Francisco Guerrero
>Assignee: Francisco Guerrero
>Priority: Normal
> Fix For: 1.0.0
>
>
> We need to add the {{.asf.yaml}} file to be able to control notifications and 
> the GitHub settings for the Cassandra Analytics project.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-18548) [Analytics] Add .asf.yaml file to the Cassandra Analytics repository

2023-05-24 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17726000#comment-17726000
 ] 

Jeremiah Jordan edited comment on CASSANDRA-18548 at 5/24/23 10:43 PM:
---

The file was merged.  Supposedly there are instructions here: 
[https://s.apache.org/asfyaml] but I am not a PMC or committer and can't see 
that page (I assume that's why anyway, I opened an INFRA ticket asking and they 
pointed me to that link).

I see zookeeper specifically calls out commits@ in their file.
[https://github.com/apache/zookeeper/blob/master/.asf.yaml#L47]


was (Author: jjordan):
The file was merged.  Supposedly there are instructions here: 
[https://s.apache.org/asfyaml] but I am not a PMC or committer and can't see 
that page (I assume that's why anyway; I opened an INFRA ticket asking and they 
pointed me to that link).

I see zookeeper specifically calls out commits@ in their file.
[https://github.com/apache/zookeeper/blob/master/.asf.yaml#L47]

> [Analytics] Add .asf.yaml file to the Cassandra Analytics repository
> 
>
> Key: CASSANDRA-18548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18548
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Analytics Library
>Reporter: Francisco Guerrero
>Assignee: Francisco Guerrero
>Priority: Normal
> Fix For: 1.0.0
>
>
> We need to add the {{.asf.yaml}} file to be able to control notifications and 
> the GitHub settings for the Cassandra Analytics project.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18548) [Analytics] Add .asf.yaml file to the Cassandra Analytics repository

2023-05-24 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17726000#comment-17726000
 ] 

Jeremiah Jordan commented on CASSANDRA-18548:
-

The file was merged.  Supposedly there are instructions here: 
[https://s.apache.org/asfyaml] but I am not a PMC or committer and can't see 
that page (I assume that's why anyway; I opened an INFRA ticket asking and they 
pointed me to that link).

I see zookeeper specifically calls out commits@ in their file.
[https://github.com/apache/zookeeper/blob/master/.asf.yaml#L47]

> [Analytics] Add .asf.yaml file to the Cassandra Analytics repository
> 
>
> Key: CASSANDRA-18548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18548
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Analytics Library
>Reporter: Francisco Guerrero
>Assignee: Francisco Guerrero
>Priority: Normal
> Fix For: 1.0.0
>
>
> We need to add the {{.asf.yaml}} file to be able to control notifications and 
> the GitHub settings for the Cassandra Analytics project.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18548) [Analytics] Add .asf.yaml file to the Cassandra Analytics repository

2023-05-24 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725997#comment-17725997
 ] 

Jeremiah Jordan commented on CASSANDRA-18548:
-

Looks like the settings are not right.  GitHub messages are being sent to dev@ 
not commits@.

> [Analytics] Add .asf.yaml file to the Cassandra Analytics repository
> 
>
> Key: CASSANDRA-18548
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18548
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Analytics Library
>Reporter: Francisco Guerrero
>Assignee: Francisco Guerrero
>Priority: Normal
> Fix For: 1.0.0
>
>
> We need to add the {{.asf.yaml}} file to be able to control notifications and 
> the GitHub settings for the Cassandra Analytics project.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-18424) Implement graceful paging across tombstones with short-circuit on paging rather than throwing TombstoneOverwhelmingExceptions

2023-05-24 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725920#comment-17725920
 ] 

Jeremiah Jordan edited comment on CASSANDRA-18424 at 5/24/23 6:54 PM:
--

[~jmckenzie] I would talk with [~jlewandowski] and the work he is doing for 
CASSANDRA-11745.  There are some edge cases around stopping pagination early 
that had to be figured out, don't want you having to do that work twice.

Also, to me this work lines up nicely with "paging in bytes" limits: tombstones 
count towards your bytes...


was (Author: jjordan):
[~jmckenzie] I would talk with [~jlewandowski] and the work he is doing for 
CASSANDRA-11745.  There are some edge cases around stopping pagination early 
that had to be figured out, don't want you having to do that work twice.

> Implement graceful paging across tombstones with short-circuit on paging 
> rather than throwing TombstoneOverwhelmingExceptions
> -
>
> Key: CASSANDRA-18424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18424
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Messaging/Client, Messaging/Internode
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
>
> We implemented the hard stop with a {{TombstoneOverwhelmingException}} almost 
> a decade ago since paging across many tombstones was the most common way for 
> nodes to OOM as they iterated across all this data during queries and paging.
> With our current implementations and architecture / codebase, we should be 
> able to combine the {{StoppingTransformation}} and existing {{clustering}} 
> blob we pass back to clients to allow clients to optionally page across 
> tombstones when using the async api via the driver and short-circuit a page 
> when they hit the tombstone failure threshold rather than throwing a 
> {{{}TombstoneOverwhelmingException{}}}. This would allow for more flexible 
> data modeling on users' side as well as removing one of the fairly rough 
> edges of our API's we're currently constrained by.
> Making sure this is correct will require extensive fuzz-testing of 
> pagination; this should likely happen in the Harry project but we could also 
> have a bespoke model / implementation in the C* codebase we rely on in the 
> interim.
> Client warnings at the current default levels would remain; the gap between 
> warn and "short-circuit pages" (100x ratio currently, 1000 vs. 10) should 
>  be sufficient for clients to take action on their data models well before 
> they hit this limit.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18424) Implement graceful paging across tombstones with short-circuit on paging rather than throwing TombstoneOverwhelmingExceptions

2023-05-24 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725920#comment-17725920
 ] 

Jeremiah Jordan commented on CASSANDRA-18424:
-

[~jmckenzie] I would talk with [~jlewandowski] and the work he is doing for 
CASSANDRA-11745.  There are some edge cases around stopping pagination early 
that had to be figured out, don't want you having to do that work twice.

> Implement graceful paging across tombstones with short-circuit on paging 
> rather than throwing TombstoneOverwhelmingExceptions
> -
>
> Key: CASSANDRA-18424
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18424
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Messaging/Client, Messaging/Internode
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
>
> We implemented the hard stop with a {{TombstoneOverwhelmingException}} almost 
> a decade ago since paging across many tombstones was the most common way for 
> nodes to OOM as they iterated across all this data during queries and paging.
> With our current implementations and architecture / codebase, we should be 
> able to combine the {{StoppingTransformation}} and existing {{clustering}} 
> blob we pass back to clients to allow clients to optionally page across 
> tombstones when using the async api via the driver and short-circuit a page 
> when they hit the tombstone failure threshold rather than throwing a 
> {{{}TombstoneOverwhelmingException{}}}. This would allow for more flexible 
> data modeling on users' side as well as removing one of the fairly rough 
> edges of our API's we're currently constrained by.
> Making sure this is correct will require extensive fuzz-testing of 
> pagination; this should likely happen in the Harry project but we could also 
> have a bespoke model / implementation in the C* codebase we rely on in the 
> interim.
> Client warnings at the current default levels would remain; the gap between 
> warn and "short-circuit pages" (100x ratio currently, 1000 vs. 10) should 
>  be sufficient for clients to take action on their data models well before 
> they hit this limit.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18204) CEP-15: (C*) Add git submodule for Accord

2023-05-16 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17723273#comment-17723273
 ] 

Jeremiah Jordan commented on CASSANDRA-18204:
-

I do not think we had the concept of long-running feature branches the last 
time we discussed version number hygiene. I posted a DISCUSS thread.

> CEP-15: (C*) Add git submodule for Accord
> -
>
> Key: CASSANDRA-18204
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18204
> Project: Cassandra
>  Issue Type: Task
>  Components: Accord
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
> Fix For: 5.0
>
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> As talked about in dev@ thread "Intra-project dependencies”, we talked about 
> adding git submodules but before doing this had to work out a few issues 
> first; this ticket is to track this work.
> Goals
> * when checking out an older commit, or pulling in newer commits, the 
> submodule should also be updated automatically
> * release artifact must include the submodule and must be able to build 
> without issue
> * build.xml must be updated to build the submodule
> * build.xml must be updated to release the submodule jar



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18204) CEP-15: (C*) Add git submodule for Accord

2023-05-16 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17723265#comment-17723265
 ] 

Jeremiah Jordan commented on CASSANDRA-18204:
-

Process question: should this have a fixver of 5.0 on it already? I do not see 
it merged to trunk yet, just the accord feature branch.

For SAI, which is also using this feature-branch method, the “reviewed and 
merged to feature branch” tickets are given a version of NA.

Not sure that’s the best way, but I don’t think 5.0 is right to have on a 
ticket that’s not in trunk: if we cut the release today, this ticket would 
not be there.

I’ll also post something to dev@; we should get everyone on the same page.

> CEP-15: (C*) Add git submodule for Accord
> -
>
> Key: CASSANDRA-18204
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18204
> Project: Cassandra
>  Issue Type: Task
>  Components: Accord
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
> Fix For: 5.0
>
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> As talked about in dev@ thread "Intra-project dependencies”, we talked about 
> adding git submodules but before doing this had to work out a few issues 
> first; this ticket is to track this work.
> Goals
> * when checking out an older commit, or pulling in newer commits, the 
> submodule should also be updated automatically
> * release artifact must include the submodule and must be able to build 
> without issue
> * build.xml must be updated to build the submodule
> * build.xml must be updated to release the submodule jar



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18507) Partial compaction can resurrect deleted data

2023-05-09 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-18507:

Test and Documentation Plan: Unit Tests
 Status: Patch Available  (was: Open)

> Partial compaction can resurrect deleted data
> -
>
> Key: CASSANDRA-18507
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18507
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction
>Reporter: Tobias Lindaaker
>Assignee: Tobias Lindaaker
>Priority: Normal
>
> If there isn't enough disk space available to compact all existing sstables, 
> Cassandra will attempt to perform a partial compaction by removing sstables 
> from the set of candidate sstables to be compacted, starting with the largest 
> one. It is possible that the sstable removed from the set of sstables to 
> compact contains data for which there are tombstones in another (more recent) 
> sstable. Since the overlaps between sstables are computed when the 
> {{CompactionController}} is created, and the {{CompactionController}} is 
> created before the removal of any sstables from the set of sstables to be 
> compacted, this computed overlap will be outdated when checking which sstables 
> are covered by certain tombstones. This leads to the faulty conclusion that 
> the tombstones can be pruned during the compaction, causing the data to be 
> resurrected.
> The issue is present in Cassandra 4.0 and 4.1. Cassandra 3.11 creates the 
> {{CompactionController}} after the set of sstables to compact has been 
> reduced, and is thus not affected. {{trunk}} does not appear to support 
> partial compactions at all, but instead refuses to compact when the disk is 
> full.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-18507) Partial compaction can resurrect deleted data

2023-05-09 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-18507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-18507:

 Bug Category: Parent values: Correctness(12982), Level 1 values: 
Unrecoverable Corruption / Loss(13161)
   Complexity: Normal
  Component/s: Local/Compaction
Discovered By: User Report
 Severity: Critical
 Assignee: Tobias Lindaaker
   Status: Open  (was: Triage Needed)

> Partial compaction can resurrect deleted data
> -
>
> Key: CASSANDRA-18507
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18507
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Compaction
>Reporter: Tobias Lindaaker
>Assignee: Tobias Lindaaker
>Priority: Normal
>
> If there isn't enough disk space available to compact all existing sstables, 
> Cassandra will attempt to perform a partial compaction by removing sstables 
> from the set of candidate sstables to be compacted, starting with the largest 
> one. It is possible that the sstable removed from the set of sstables to 
> compact contains data for which there are tombstones in another (more recent) 
> sstable. Since the overlaps between sstables are computed when the 
> {{CompactionController}} is created, and the {{CompactionController}} is 
> created before the removal of any sstables from the set of sstables to be 
> compacted, this computed overlap will be outdated when checking which sstables 
> are covered by certain tombstones. This leads to the faulty conclusion that 
> the tombstones can be pruned during the compaction, causing the data to be 
> resurrected.
> The issue is present in Cassandra 4.0 and 4.1. Cassandra 3.11 creates the 
> {{CompactionController}} after the set of sstables to compact has been 
> reduced, and is thus not affected. {{trunk}} does not appear to support 
> partial compactions at all, but instead refuses to compact when the disk is 
> full.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18352) Add Option to Timebox write timestamps

2023-03-28 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17706140#comment-17706140
 ] 

Jeremiah Jordan commented on CASSANDRA-18352:
-

It is uncommon, but I have seen people use something other than microseconds 
since epoch as their timestamp resolution. Should the guardrail support such 
people by adding a “resolution” parameter for it?

The answer is free to be “no”, but we should make sure to document it as such 
in that case.

On that note, the patch should add commented-out entries to cassandra.yaml 
documenting this new guardrail and its use.
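
For instance, the commented-out entries could take a shape like this (the key 
names are illustrative only, not the patch's actual values):

{code}
# Guardrails on client-supplied write timestamps (illustrative names only;
# shipped commented out, i.e. disabled, like other guardrail defaults):
# maximum_timestamp_warn_threshold:
# maximum_timestamp_fail_threshold:
# minimum_timestamp_warn_threshold:
# minimum_timestamp_fail_threshold:
{code}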

> Add Option to Timebox write timestamps
> --
>
> Key: CASSANDRA-18352
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18352
> Project: Cassandra
>  Issue Type: New Feature
>  Components: CQL/Semantics
>Reporter: Jordan West
>Assignee: Jordan West
>Priority: Normal
>
> In several cases it is desirable to have client provided timestamps generated 
> at the application-level. This can be error prone, however. In particular, 
> applications can choose timestamps that may be nonsensical for a given 
> application. One dangerous manifestation of this is the "doomstone" (a 
> tombstone far in the future of any realistic write). This feature would allow 
> either operators or users to specify a minimum and maximum timebound of 
> "reasonable" timestamps. The default would be negative infinity, positive 
> infinity to maintain backwards compatibility. Writes that are USING TIMESTAMP 
> with a timestamp outside of the timebox will see an exception. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-18027) Use G1GC as default

2022-11-12 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-18027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17632675#comment-17632675
 ] 

Jeremiah Jordan commented on CASSANDRA-18027:
-

+1. These settings look good to me. They are what we have been using for 
production systems for a couple of years now.

> Use G1GC as default
> ---
>
> Key: CASSANDRA-18027
> URL: https://issues.apache.org/jira/browse/CASSANDRA-18027
> Project: Cassandra
>  Issue Type: Task
>  Components: Local/Config
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Fix For: 4.x
>
>
> G1GC is well battle-tested now, and is the recommended configuration for most 
> users. CMS can work well on smaller heaps but requires more tuning, initially 
> and over time. G1GC just works. CMS was deprecated in JDK 9.
> Patch at 
> https://github.com/apache/cassandra/compare/trunk...thelastpickle:cassandra:mck/7486/trunk
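
For reference, this is the kind of jvm*-server.options content such a default 
involves; the flags shown are illustrative, well-known G1 settings, not 
necessarily the patch's exact values:

{code}
# Illustrative G1 entries of the kind a jvm*-server.options default involves:
-XX:+UseG1GC
-XX:MaxGCPauseMillis=300
-XX:G1RSetUpdatingPauseTimePercent=5
{code}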



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17679) Make resumable bootstrap feature optional

2022-08-15 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17579774#comment-17579774
 ] 

Jeremiah Jordan commented on CASSANDRA-17679:
-

{quote}That about cover it?
{quote}
Yes.

 

On the proposal by Paulo: I think we should not change anything for 4.0.  But 
for 4.1+ I think the proposed setup is a good way forward.
{quote}a) {{-Dcassandra.reset_bootstrap_progress=}} new default behavior 
- fail bootstrap if a previous bootstrap attempt was detected (user needs to 
manually cleanup bootstrap progress).
b) {{-Dcassandra.reset_bootstrap_progress=true}} clear streamed ranges from 
system.available_ranges, and in the future also cleanup incomplete bootstrap 
data on disk (perform this later action in a follow-up ticket)
c) {{-Dcassandra.reset_bootstrap_progress=false}} current default behavior of 
skipping already available ranges during bootstrap streaming.
{quote}
 

If we detect a failed bootstrap then we make the user specify what they want to 
do.

 

Something else to consider is what to do with "nodetool rebuild".  It also uses 
the data from the progress table.  But that, as well as trying to remove 
existing data, is probably worth a new ticket.
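
Illustrative usage of option (b) above when restarting a node after a detected 
failed bootstrap (JVM_EXTRA_OPTS is the standard cassandra-env.sh hook):

{code}
# Illustrative: operator explicitly chooses option (b) above when restarting
# a joining node whose previous bootstrap attempt failed.
JVM_EXTRA_OPTS="$JVM_EXTRA_OPTS -Dcassandra.reset_bootstrap_progress=true"
{code}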

> Make resumable bootstrap feature optional
> -
>
> Key: CASSANDRA-17679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17679
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Consistency/Streaming
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
> Fix For: 4.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> From the patch I'm working on:
> {code}
> # In certain environments, operators may want to disable resumable bootstrap 
> in order to avoid potential correctness
> # violations or data loss scenarios. Largely this centers around nodes going 
> down during bootstrap, tombstones being
> # written, and potential races with repair. By default we leave this on as 
> it's been enabled for quite some time,
> # however the option to disable it is more palatable now that we have zero 
> copy streaming as that greatly accelerates
> # bootstraps. This defaults to true.
> # resumable_bootstrap_enabled: true
> {code}
> Not really a great fit for guardrails as it's less a "feature to be toggled 
> on and off" and more a subset of a specific feature that in certain 
> circumstances can lead to issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-17679) Make resumable bootstrap feature optional

2022-08-03 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17574965#comment-17574965
 ] 

Jeremiah Jordan edited comment on CASSANDRA-17679 at 8/3/22 11:15 PM:
--

{quote}All {{-Dcassandra.reset_bootstrap_progress}} seems to do is truncate the 
system keyspace of ranges, but none of the already-bootstrapped-data?
{quote}
Correct. But I don’t see a proposal here to add truncation. I would definitely 
be in favor of adding the ability to remove all existing data before starting 
the bootstrap again. All the patch here seems to do is skip looking at the 
already streamed ranges, which has the exact same effect as deleting the ranges 
from the system table.

 

I’m not against changing the default behavior. I just don’t like adding new 
flags that do the same thing as existing ones. If we want to change the default 
behavior and add a new yaml flag to complement the existing -D that seems 
reasonable to me.

Also if the main worry here is data resurrection then I would suggest adding 
that truncation ability in. Because streaming from scratch won’t help your 
resurrection problem if someone waits days between retries.


was (Author: jjordan):
{quote}All {{-Dcassandra.reset_bootstrap_progress}} seems to do is truncate the 
system keyspace of ranges, but none of the already-bootstrapped-data?


{quote}
correct. But I don’t see a proposal here to add truncating?  I would definitely 
be in favor of adding the ability to remove all existing data before starting 
the bootstrap again. All the patch here seems to do is skip looking at the 
already streamed ranges, which has the exact same effect as deleting the ranges 
from the system table.

 

I’m not against changing the default behavior. I just don’t like adding new 
flags that do the same thing as existing ones. If we want to change the default 
behavior and add a new yaml flag to complement the existing -D that seems 
reasonable to me.

Also if the main worry here is data redirections then I would suggest adding 
that truncation ability in. Because streaming from scratch won’t help your 
resurrection problem if someone waits days between retries.

> Make resumable bootstrap feature optional
> -
>
> Key: CASSANDRA-17679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17679
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Consistency/Streaming
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
> Fix For: 4.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> From the patch I'm working on:
> {code}
> # In certain environments, operators may want to disable resumable bootstrap 
> in order to avoid potential correctness
> # violations or data loss scenarios. Largely this centers around nodes going 
> down during bootstrap, tombstones being
> # written, and potential races with repair. By default we leave this on as 
> it's been enabled for quite some time,
> # however the option to disable it is more palatable now that we have zero 
> copy streaming as that greatly accelerates
> # bootstraps. This defaults to true.
> # resumable_bootstrap_enabled: true
> {code}
> Not really a great fit for guardrails as it's less a "feature to be toggled 
> on and off" and more a subset of a specific feature that in certain 
> circumstances can lead to issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17679) Make resumable bootstrap feature optional

2022-08-03 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17574965#comment-17574965
 ] 

Jeremiah Jordan commented on CASSANDRA-17679:
-

{quote}All {{-Dcassandra.reset_bootstrap_progress}} seems to do is truncate the 
system keyspace of ranges, but none of the already-bootstrapped-data?


{quote}
Correct. But I don’t see a proposal here to add truncation. I would definitely 
be in favor of adding the ability to remove all existing data before starting 
the bootstrap again. All the patch here seems to do is skip looking at the 
already streamed ranges, which has the exact same effect as deleting the ranges 
from the system table.

 

I’m not against changing the default behavior. I just don’t like adding new 
flags that do the same thing as existing ones. If we want to change the default 
behavior and add a new yaml flag to complement the existing -D that seems 
reasonable to me.

Also if the main worry here is data resurrection then I would suggest adding 
that truncation ability in. Because streaming from scratch won’t help your 
resurrection problem if someone waits days between retries.

> Make resumable bootstrap feature optional
> -
>
> Key: CASSANDRA-17679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17679
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Consistency/Streaming
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
> Fix For: 4.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> From the patch I'm working on:
> {code}
> # In certain environments, operators may want to disable resumable bootstrap 
> in order to avoid potential correctness
> # violations or data loss scenarios. Largely this centers around nodes going 
> down during bootstrap, tombstones being
> # written, and potential races with repair. By default we leave this on as 
> it's been enabled for quite some time,
> # however the option to disable it is more palatable now that we have zero 
> copy streaming as that greatly accelerates
> # bootstraps. This defaults to true.
> # resumable_bootstrap_enabled: true
> {code}
> Not really a great fit for guardrails as it's less a "feature to be toggled 
> on and off" and more a subset of a specific feature that in certain 
> circumstances can lead to issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17679) Make resumable bootstrap feature optional

2022-08-03 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17574904#comment-17574904
 ] 

Jeremiah Jordan commented on CASSANDRA-17679:
-

Is there a reason we need a new option here?  We already have the 
"-Dcassandra.reset_bootstrap_progress" flag which disables resuming: 
[https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/service/StorageService.java#L1608]

Do we just need to make the already existing flag more discoverable?
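
For discoverability's sake, usage is just the system property at startup, e.g. 
(assuming the stock cassandra-env.sh, which appends JVM_EXTRA_OPTS to JVM_OPTS):
{code}
JVM_EXTRA_OPTS="-Dcassandra.reset_bootstrap_progress=true" bin/cassandra
{code}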

> Make resumable bootstrap feature optional
> -
>
> Key: CASSANDRA-17679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17679
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Consistency/Streaming
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
> Fix For: 4.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> From the patch I'm working on:
> {code}
> # In certain environments, operators may want to disable resumable bootstrap 
> in order to avoid potential correctness
> # violations or data loss scenarios. Largely this centers around nodes going 
> down during bootstrap, tombstones being
> # written, and potential races with repair. By default we leave this on as 
> it's been enabled for quite some time,
> # however the option to disable it is more palatable now that we have zero 
> copy streaming as that greatly accelerates
> # bootstraps. This defaults to true.
> # resumable_bootstrap_enabled: true
> {code}
> Not really a great fit for guardrails as it's less a "feature to be toggled 
> on and off" and more a subset of a specific feature that in certain 
> circumstances can lead to issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-17679) Make resumable bootstrap feature optional

2022-08-03 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17574904#comment-17574904
 ] 

Jeremiah Jordan edited comment on CASSANDRA-17679 at 8/3/22 7:33 PM:
-

Is there a reason we need a new option here?  We already have the 
"-Dcassandra.reset_bootstrap_progress" flag which disables resuming: 
[https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/service/StorageService.java#L1608]

Do we just need to make the already existing flag more discoverable?

Also, I would not mention ZCS as a reason to disable this.  ZCS applies pretty 
narrowly right now since it only works for whole files, so for the most part it 
only works if you are using LCS, which is not the default compaction strategy.


was (Author: jjordan):
Is there a reason we need a new option here?  We already have the 
"-Dcassandra.reset_bootstrap_progress" flag which disables resuming: 
[https://github.com/apache/cassandra/blob/cassandra-3.11/src/java/org/apache/cassandra/service/StorageService.java#L1608]

Do we just need to make the already existing flag more discoverable?

> Make resumable bootstrap feature optional
> -
>
> Key: CASSANDRA-17679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17679
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Consistency/Streaming
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
> Fix For: 4.x
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> From the patch I'm working on:
> {code}
> # In certain environments, operators may want to disable resumable bootstrap 
> in order to avoid potential correctness
> # violations or data loss scenarios. Largely this centers around nodes going 
> down during bootstrap, tombstones being
> # written, and potential races with repair. By default we leave this on as 
> it's been enabled for quite some time,
> # however the option to disable it is more palatable now that we have zero 
> copy streaming as that greatly accelerates
> # bootstraps. This defaults to true.
> # resumable_bootstrap_enabled: true
> {code}
> Not really a great fit for guardrails as it's less a "feature to be toggled 
> on and off" and more a subset of a specific feature that in certain 
> circumstances can lead to issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17765) RPM Installation on centos 7 is broken by CASSANDRA-17669

2022-07-22 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17570149#comment-17570149
 ] 

Jeremiah Jordan commented on CASSANDRA-17765:
-

I think a different repo for centos7 that just keeps the 4.0.4 and earlier 
status quo of the package requiring java 8 is reasonable.
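
For context, the breakage comes from rpm's boolean ("rich") dependencies, which 
need rpm >= 4.13 while CentOS 7 ships an older rpm without that support. A 
sketch of the two spec variants (package names are illustrative):
{code}
# Parses on rpm >= 4.13 (boolean dependencies), fails on CentOS 7's older rpm:
Requires: (java-1.8.0-openjdk or java-11-openjdk)

# CentOS 7-compatible fallback, reverting to java 8 only:
Requires: java-1.8.0-openjdk
{code}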

> RPM Installation on centos 7 is broken by CASSANDRA-17669
> -
>
> Key: CASSANDRA-17765
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17765
> Project: Cassandra
>  Issue Type: Bug
>  Components: Packaging
>Reporter: Jeremiah Jordan
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-rc, 4.x
>
>
> CASSANDRA-17669 added use of "or" in the dependencies for the RPM so it 
> can depend on java 8 or java 11, but this broke installation on CentOS Linux 7.  
> This is bad because CentOS Linux 7 is the "current" release of CentOS Linux; 
> version 8 was EOL'ed in favor of the new pre-release distribution model being 
> used for CentOS Stream 8.
> I don't know what the best answer is here, maybe making a CentOS 7 specific 
> package that reverts back to just java 8 in the requirements?  But I think 
> something needs to be done.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-17765) RPM Installation on centos 7 is broken by CASSANDRA-17669

2022-07-21 Thread Jeremiah Jordan (Jira)
Jeremiah Jordan created CASSANDRA-17765:
---

 Summary: RPM Installation on centos 7 is broken by CASSANDRA-17669
 Key: CASSANDRA-17765
 URL: https://issues.apache.org/jira/browse/CASSANDRA-17765
 Project: Cassandra
  Issue Type: Bug
  Components: Packaging
Reporter: Jeremiah Jordan


CASSANDRA-17669 added use of "or" in the dependencies for the RPM so it can 
depend on java 8 or java 11, but this broke installation on CentOS Linux 7.  This 
is bad because CentOS Linux 7 is the "current" release of CentOS Linux; version 
8 was EOL'ed in favor of the new pre-release distribution model being used for 
CentOS Stream 8.

I don't know what the best answer is here, maybe making a CentOS 7 specific 
package that reverts back to just java 8 in the requirements?  But I think 
something needs to be done.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-16555) Add out-of-the-box snitch for Ec2 IMDSv2

2022-04-22 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-16555:

Test and Documentation Plan: New snitch should be added to the docs.
 Status: Patch Available  (was: Open)

Just noticed this PR today.  Setting this to Patch Available; it looks like it 
was never transitioned and slipped through.
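
For reviewers unfamiliar with IMDSv2, the handshake such a snitch has to 
perform is roughly the following; a self-contained sketch against the 
documented AWS endpoints, not the PR's actual code:
{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public final class Imdsv2Example
{
    public static void main(String[] args) throws IOException
    {
        // 1. PUT to the token endpoint with a TTL header to obtain a session token.
        HttpURLConnection put = (HttpURLConnection)
            new URL("http://169.254.169.254/latest/api/token").openConnection();
        put.setRequestMethod("PUT");
        put.setRequestProperty("X-aws-ec2-metadata-token-ttl-seconds", "21600");
        String token = new String(put.getInputStream().readAllBytes(), StandardCharsets.UTF_8);

        // 2. GET the metadata the snitch needs (e.g. availability zone), presenting the token.
        HttpURLConnection get = (HttpURLConnection)
            new URL("http://169.254.169.254/latest/meta-data/placement/availability-zone").openConnection();
        get.setRequestProperty("X-aws-ec2-metadata-token", token);
        System.out.println(new String(get.getInputStream().readAllBytes(), StandardCharsets.UTF_8));
    }
}
{code}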

> Add out-of-the-box snitch for Ec2 IMDSv2
> 
>
> Key: CASSANDRA-16555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16555
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Consistency/Coordination
>Reporter: Paul Rütter (BlueConic)
>Assignee: fulco taen
>Priority: Normal
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In order to patch a vulnerability, Amazon came up with a new version of their 
> metadata service.
> It's no longer unrestricted but now requires a token (in a header), in order 
> to access the metadata service.
> See 
> [https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html]
>  for more information.
> Cassandra currently doesn't offer an out-of-the-box snitch class to support 
> this.
> See 
> [https://cassandra.apache.org/doc/latest/operating/snitch.html#snitch-classes]
> This issue asks to add support for this as a separate snitch class.
> We'll probably do a PR for this, as we are in the process of developing one.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-16555) Add out-of-the-box snitch for Ec2 IMDSv2

2022-04-22 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan reassigned CASSANDRA-16555:
---

Assignee: fulco taen

> Add out-of-the-box snitch for Ec2 IMDSv2
> 
>
> Key: CASSANDRA-16555
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16555
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Consistency/Coordination
>Reporter: Paul Rütter (BlueConic)
>Assignee: fulco taen
>Priority: Normal
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In order to patch a vulnerability, Amazon came up with a new version of their 
> metadata service.
> It's no longer unrestricted but now requires a token (in a header), in order 
> to access the metadata service.
> See 
> [https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html]
>  for more information.
> Cassandra currently doesn't offer an out-of-the-box snitch class to support 
> this.
> See 
> [https://cassandra.apache.org/doc/latest/operating/snitch.html#snitch-classes]
> This issue asks to add support for this as a separate snitch class.
> We'll probably do a PR for this, as we are in the process of developing one.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17352) CVE-2021-44521: Apache Cassandra: Remote code execution for scripted UDFs

2022-02-17 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17494313#comment-17494313
 ] 

Jeremiah Jordan commented on CASSANDRA-17352:
-

Is there a unit test that can be pushed now that this is public?

> CVE-2021-44521: Apache Cassandra: Remote code execution for scripted UDFs
> -
>
> Key: CASSANDRA-17352
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17352
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/UDF
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Normal
> Fix For: 3.0.26, 3.11.12, 4.0.2
>
>
> When running Apache Cassandra with the following configuration:
> enable_user_defined_functions: true
> enable_scripted_user_defined_functions: true
> enable_user_defined_functions_threads: false 
> it is possible for an attacker to execute arbitrary code on the host. The 
> attacker would need to have enough permissions to create user defined 
> functions in the cluster to be able to exploit this. Note that this 
> configuration is documented as unsafe, and will continue to be considered 
> unsafe after this CVE.
> This issue is being tracked as CASSANDRA-17352
> Mitigation:
> Set `enable_user_defined_functions_threads: true` (this is default)
> or
> 3.0 users should upgrade to 3.0.26
> 3.11 users should upgrade to 3.11.12
> 4.0 users should upgrade to 4.0.2
> Credit:
> This issue was discovered by Omer Kaspi of the JFrog Security vulnerability 
> research team.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17166) Enhance SnakeYAML properties to be reusable outside of YAML parsing, support camel case conversion to snake case, and add support to ignore properties

2022-02-16 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493617#comment-17493617
 ] 

Jeremiah Jordan commented on CASSANDRA-17166:
-

Yeah. Avoiding accidental overlap is one of the things I was thinking about: 
someone gets two different behaviors from one property because it is both a 
config element name and a pre-existing system property.
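
A concrete illustration of the clash being avoided; the prefixed form is the 
proposal under discussion, not shipped syntax:
{code}
# Control properties already live under the bare "cassandra." prefix:
-Dcassandra.ring_delay_ms=30000

# A dedicated prefix keeps yaml overrides unambiguous (proposed form):
-Dcassandra.config.concurrent_reads=64
{code}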

> Enhance SnakeYAML properties to be reusable outside of YAML parsing, support 
> camel case conversion to snake case, and add support to ignore properties
> --
>
> Key: CASSANDRA-17166
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17166
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
> Fix For: 4.x
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> SnakeYaml is rather limited in the “object mapping” layer, which forces our 
> internal code to match specific patterns (all fields public and camel case); 
> we can remove this restriction by leveraging Jackson for property lookup, and 
> leaving the YAML handling to SnakeYAML



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-17166) Enhance SnakeYAML properties to be reusable outside of YAML parsing, support camel case conversion to snake case, and add support to ignore properties

2022-02-16 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493458#comment-17493458
 ] 

Jeremiah Jordan edited comment on CASSANDRA-17166 at 2/16/22, 7:29 PM:
---

This property support sounds great.  One nitpick, I would use something 
different than just: -Dcassandra.[Config name]. Maybe 
-Dcassandra.config.[Config Name] or -Dcassandra.settings.[Config Name] or 
-Dcassandraconfig.[Config Name].

Otherwise there could be confusion between config properties and non config 
properties.


was (Author: jjordan):
This property support sounds great.  One nitpick, I would use something 
different than just: -Dcassandra.[Config name]. Maybe 
-Dcassandra.config.[Config Name] or -Dcassandra.settings.[Config Name] or 
-Dcassandraconfig.[Config Name].

> Enhance SnakeYAML properties to be reusable outside of YAML parsing, support 
> camel case conversion to snake case, and add support to ignore properties
> --
>
> Key: CASSANDRA-17166
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17166
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
> Fix For: 4.x
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> SnakeYaml is rather limited in the “object mapping” layer, which forces our 
> internal code to match specific patterns (all fields public and camel case); 
> we can remove this restriction by leveraging Jackson for property lookup, and 
> leaving the YAML handling to SnakeYAML



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17166) Enhance SnakeYAML properties to be reusable outside of YAML parsing, support camel case conversion to snake case, and add support to ignore properties

2022-02-16 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17493458#comment-17493458
 ] 

Jeremiah Jordan commented on CASSANDRA-17166:
-

This property support sounds great.  One nitpick, I would use something 
different than just: -Dcassandra.[Config name]. Maybe 
-Dcassandra.config.[Config Name] or -Dcassandra.settings.[Config Name] or 
-Dcassandraconfig.[Config Name].

> Enhance SnakeYAML properties to be reusable outside of YAML parsing, support 
> camel case conversion to snake case, and add support to ignore properties
> --
>
> Key: CASSANDRA-17166
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17166
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Config
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
> Fix For: 4.x
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> SnakeYaml is rather limited in the “object mapping” layer, which forces our 
> internal code to match specific patterns (all fields public and camel case); 
> we can remove this restriction by leveraging Jackson for property lookup, and 
> leaving the YAML handling to SnakeYAML



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17352) CVE-2021-44521: Apache Cassandra: Remote code execution for scripted UDFs

2022-02-11 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-17352:

Fix Version/s: 4.0.2
   3.11.12
   3.0.26

> CVE-2021-44521: Apache Cassandra: Remote code execution for scripted UDFs
> -
>
> Key: CASSANDRA-17352
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17352
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/UDF
>Reporter: Marcus Eriksson
>Assignee: Marcus Eriksson
>Priority: Normal
> Fix For: 3.0.26, 3.11.12, 4.0.2
>
>
> When running Apache Cassandra with the following configuration:
> enable_user_defined_functions: true
> enable_scripted_user_defined_functions: true
> enable_user_defined_functions_threads: false 
> it is possible for an attacker to execute arbitrary code on the host. The 
> attacker would need to have enough permissions to create user defined 
> functions in the cluster to be able to exploit this. Note that this 
> configuration is documented as unsafe, and will continue to be considered 
> unsafe after this CVE.
> This issue is being tracked as CASSANDRA-17352
> Mitigation:
> Set `enable_user_defined_functions_threads: true` (this is default)
> or
> 3.0 users should upgrade to 3.0.26
> 3.11 users should upgrade to 3.11.12
> 4.0 users should upgrade to 4.0.2
> Credit:
> This issue was discovered by Omer Kaspi of the JFrog Security vulnerability 
> research team.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17372) Dynamic Table TTL

2022-02-10 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17490512#comment-17490512
 ] 

Jeremiah Jordan commented on CASSANDRA-17372:
-

I think we should indeed do this, but we should use "expiration time" or some 
other phrase to talk about this new thing.  It would be something completely 
separate from the per-cell TTL we currently have, which is what CQL and the 
current table-level TTL are setting.

Then we just document that for a table with an expiration time set, the data 
timestamp must be in the same format.
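
To make the distinction concrete, this is the existing per-cell mechanism that 
both CQL's TTL and today's {{default_time_to_live}} feed into (standard CQL; 
the table-level "expiration time" discussed here does not exist yet):
{code}
-- default_time_to_live stamps each newly written cell with a TTL at write time:
CREATE TABLE ks.events (id int PRIMARY KEY, payload text)
    WITH default_time_to_live = 86400;

-- and a per-write TTL overrides it, again per cell:
INSERT INTO ks.events (id, payload) VALUES (1, 'x') USING TTL 3600;
{code}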

> Dynamic Table TTL
> -
>
> Key: CASSANDRA-17372
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17372
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Semantics
>Reporter: Paulo Motta
>Priority: Normal
>
> One limitation of the {{default_time_to_live}} option is that it only applies 
> to newly inserted data so an expensive migration is required when altering 
> the table-level TTL, which is not an uncommon request due to changes in 
> retention policies.
> This seems to have been a deliberate design decision when adding the Table 
> TTL feature on CASSANDRA-3974, due to the reasons stated [on this 
> comment|https://issues.apache.org/jira/browse/CASSANDRA-3974?focusedCommentId=13427314&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-13427314]
>  so we should revisit and potentially address these concerns.
> I would like to explore supporting dynamic TTL, which would reflect any 
> updates to the table-level {{default_time_to_live}} immediately to all table 
> data.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17078) Fix failing test: SSTableReaderTest.testPersistentStatistics

2022-02-07 Thread Jeremiah Jordan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeremiah Jordan updated CASSANDRA-17078:

Reviewers: Aleksei Zotov, Jeremiah Jordan

> Fix failing test: SSTableReaderTest.testPersistentStatistics
> 
>
> Key: CASSANDRA-17078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17078
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
> Fix For: 4.x
>
>
> JDK8 unit test failure
> See it on CircleCI but not on Jenkins ASF infra right now.
> {code:java}
> java.lang.RuntimeException: Failed importing sstables
>   at 
> org.apache.cassandra.db.SSTableImporter.importNewSSTables(SSTableImporter.java:164)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.loadNewSSTables(ColumnFamilyStore.java:734)
>   at 
> org.apache.cassandra.io.sstable.SSTableReaderTest.clearAndLoad(SSTableReaderTest.java:222)
>   at 
> org.apache.cassandra.io.sstable.SSTableReaderTest.testPersistentStatistics(SSTableReaderTest.java:215)
> Caused by: java.lang.RuntimeException: Failed to rename 
> /tmp/cassandra/build/test/cassandra/data/SSTableReaderTest/Standard1-da581e50380611ecaad41173773221f9/nb-9-big-Digest.crc32
>  to 
> /tmp/cassandra/build/test/cassandra/data/SSTableReaderTest/Standard1-da581e50380611ecaad41173773221f9/nb-13-big-Digest.crc32
>   at org.apache.cassandra.io.util.PathUtils.rename(PathUtils.java:385)
>   at org.apache.cassandra.io.util.File.move(File.java:227)
>   at 
> org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:704)
>   at 
> org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:698)
>   at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.rename(SSTableWriter.java:337)
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.moveAndOpenSSTable(SSTableReader.java:2339)
>   at 
> org.apache.cassandra.db.SSTableImporter.importNewSSTables(SSTableImporter.java:139)
> Caused by: java.nio.file.NoSuchFileException: 
> /tmp/cassandra/build/test/cassandra/data/SSTableReaderTest/Standard1-da581e50380611ecaad41173773221f9/nb-9-big-Digest.crc32
>  -> 
> /tmp/cassandra/build/test/cassandra/data/SSTableReaderTest/Standard1-da581e50380611ecaad41173773221f9/nb-13-big-Digest.crc32
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396)
>   at 
> sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
>   at java.nio.file.Files.move(Files.java:1395)
>   at 
> org.apache.cassandra.io.util.PathUtils.atomicMoveWithFallback(PathUtils.java:396)
>   at org.apache.cassandra.io.util.PathUtils.rename(PathUtils.java:377)
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17078) Fix failing test: SSTableReaderTest.testPersistentStatistics

2022-02-07 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17488260#comment-17488260
 ] 

Jeremiah Jordan commented on CASSANDRA-17078:
-

You missed removing {{store.discardSSTables(System.currentTimeMillis());}} in 
{{testGetPositionsForRanges}}.  Otherwise LGTM.

> Fix failing test: SSTableReaderTest.testPersistentStatistics
> 
>
> Key: CASSANDRA-17078
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17078
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
> Fix For: 4.x
>
>
> JDK8 unit test failure
> See it on CircleCI but not on Jenkins ASF infra right now.
> {code:java}
> java.lang.RuntimeException: Failed importing sstables
>   at 
> org.apache.cassandra.db.SSTableImporter.importNewSSTables(SSTableImporter.java:164)
>   at 
> org.apache.cassandra.db.ColumnFamilyStore.loadNewSSTables(ColumnFamilyStore.java:734)
>   at 
> org.apache.cassandra.io.sstable.SSTableReaderTest.clearAndLoad(SSTableReaderTest.java:222)
>   at 
> org.apache.cassandra.io.sstable.SSTableReaderTest.testPersistentStatistics(SSTableReaderTest.java:215)
> Caused by: java.lang.RuntimeException: Failed to rename 
> /tmp/cassandra/build/test/cassandra/data/SSTableReaderTest/Standard1-da581e50380611ecaad41173773221f9/nb-9-big-Digest.crc32
>  to 
> /tmp/cassandra/build/test/cassandra/data/SSTableReaderTest/Standard1-da581e50380611ecaad41173773221f9/nb-13-big-Digest.crc32
>   at org.apache.cassandra.io.util.PathUtils.rename(PathUtils.java:385)
>   at org.apache.cassandra.io.util.File.move(File.java:227)
>   at 
> org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:704)
>   at 
> org.apache.cassandra.io.util.FileUtils.renameWithConfirm(FileUtils.java:698)
>   at 
> org.apache.cassandra.io.sstable.format.SSTableWriter.rename(SSTableWriter.java:337)
>   at 
> org.apache.cassandra.io.sstable.format.SSTableReader.moveAndOpenSSTable(SSTableReader.java:2339)
>   at 
> org.apache.cassandra.db.SSTableImporter.importNewSSTables(SSTableImporter.java:139)
> Caused by: java.nio.file.NoSuchFileException: 
> /tmp/cassandra/build/test/cassandra/data/SSTableReaderTest/Standard1-da581e50380611ecaad41173773221f9/nb-9-big-Digest.crc32
>  -> 
> /tmp/cassandra/build/test/cassandra/data/SSTableReaderTest/Standard1-da581e50380611ecaad41173773221f9/nb-13-big-Digest.crc32
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixCopyFile.move(UnixCopyFile.java:396)
>   at 
> sun.nio.fs.UnixFileSystemProvider.move(UnixFileSystemProvider.java:262)
>   at java.nio.file.Files.move(Files.java:1395)
>   at 
> org.apache.cassandra.io.util.PathUtils.atomicMoveWithFallback(PathUtils.java:396)
>   at org.apache.cassandra.io.util.PathUtils.rename(PathUtils.java:377)
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17185) Contrilbulyze: Generate contributors reports based on git history

2021-12-05 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17453659#comment-17453659
 ] 

Jeremiah Jordan commented on CASSANDRA-17185:
-

Yeah.

> Contrilbulyze: Generate contributors reports based on git history
> -
>
> Key: CASSANDRA-17185
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17185
> Project: Cassandra
>  Issue Type: Task
>  Components: Build
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
>
> Create metrics providing top lists of contributors for different time
> periods. Listing patches, reviews, and who has worked with who.
> Intention is to give insight into the people who are
> contributing code to the project.
> https://nightlies.apache.org/cassandra/devbranch/misc/contribulyze/html/
> It's generated from the contribulyze.py script (copied and evolved from the
> Subversion project) running against our repositories, taking advantage of
> our community's commit message style.
> dev@ ML: https://lists.apache.org/thread/48wgvo5mstoy3ozyqj45s293ddpf86pr 
> PR: https://github.com/apache/cassandra-builds/pull/54



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16619) Loss of commit log data possible after sstable ingest

2021-11-29 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17450642#comment-17450642
 ] 

Jeremiah Jordan commented on CASSANDRA-16619:
-

I would expect someone to see more data replayed, because the intervals would 
not be ignored, but the end result, data-wise, should be the same. The data in 
the commit logs should be idempotent, so it is safe to replay even if it is 
already in a given sstable.
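
A minimal sketch of the replay-filter rule the solution describes (illustrative 
names, not the actual CommitLogReplayer code):
{code:java}
import java.util.UUID;

final class ReplayFilterExample
{
    // Commit log intervals recorded in an sstable may only be used to skip replay
    // when the sstable was created on this node. Otherwise replaying is still safe
    // (mutations are idempotent); it just does redundant work.
    static boolean mayTrustIntervals(UUID originatingHostId, UUID localHostId)
    {
        return originatingHostId != null && originatingHostId.equals(localHostId);
    }
}
{code}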

> Loss of commit log data possible after sstable ingest
> -
>
> Key: CASSANDRA-16619
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16619
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Commit Log
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
>Priority: Normal
> Fix For: 3.0.25, 3.11.11, 4.0-rc2, 4.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> SSTable metadata contains commit log positions of the sstable. These 
> positions are used to filter out mutations from the commit log on restart and 
> only make sense for the node on which the data was flushed.
> If an SSTable is moved between nodes they may cover regions that the 
> receiving node has not yet flushed, and result in valid data being lost 
> should these sections of the commit log need to be replayed.
> Solution:
> The chosen solution introduces a new sstable metadata (StatsMetadata) - 
> originatingHostId (UUID), which is the local host id of the node on which the 
> sstable was created, or null if not known. Commit log intervals from an 
> sstable are taken into account during Commit Log replay only when the 
> originatingHostId of the sstable matches the local node's hostId.
> For new sstables the originatingHostId is set according to StorageService's 
> local hostId.
> For compacted sstables the originatingHostId set according to 
> StorageService's local hostId, and only commit log intervals from local 
> sstables is preserved in the resulting sstable.
> discovered by [~jakubzytka]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17116) When zero-copy-streaming sees a channel close this triggers the disk failure policy

2021-11-19 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17446498#comment-17446498
 ] 

Jeremiah Jordan commented on CASSANDRA-17116:
-

I wonder if the answer here is to instead set some state such that the ZCS code 
can catch the exception and check "was this streaming op canceled?" and if so 
not trigger the disk failure?
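
Roughly this shape, as a hedged sketch; the flag and the check are 
hypothetical, not existing Cassandra API:
{code:java}
import java.nio.channels.ClosedChannelException;

// Hypothetical sketch: let the zero-copy write path distinguish "stream was
// canceled" from a genuine disk fault before escalating to the disk failure policy.
final class ZcsWriteExample
{
    private volatile boolean streamCanceled; // would be set when the streaming op is aborted

    void write(Runnable doWrite)
    {
        try
        {
            doWrite.run();
        }
        catch (RuntimeException e)
        {
            if (streamCanceled && e.getCause() instanceof ClosedChannelException)
                return; // expected teardown of a canceled stream, not a disk failure
            throw e;    // real failure: let the disk failure policy see it
        }
    }
}
{code}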

> When zero-copy-streaming sees a channel close this triggers the disk failure 
> policy
> ---
>
> Key: CASSANDRA-17116
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17116
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Streaming
>Reporter: David Capwell
>Assignee: David Capwell
>Priority: Normal
> Fix For: 4.x
>
>
> Found in CASSANDRA-17085.
> https://app.circleci.com/pipelines/github/dcapwell/cassandra/1069/workflows/26b7b83a-686f-4516-a56a-0709d428d4f2/jobs/7264
> https://app.circleci.com/pipelines/github/dcapwell/cassandra/1069/workflows/26b7b83a-686f-4516-a56a-0709d428d4f2/jobs/7256
> {code}
> ERROR [Stream-Deserializer-/127.0.0.1:7000-f2eb1a15] 2021-11-02 21:35:40,983 
> DefaultFSErrorHandler.java:104 - Exiting forcefully due to file system 
> exception on startup, disk failure policy "stop"
> org.apache.cassandra.io.FSWriteError: java.nio.channels.ClosedChannelException
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableZeroCopyWriter.write(BigTableZeroCopyWriter.java:227)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableZeroCopyWriter.writeComponent(BigTableZeroCopyWriter.java:206)
>   at 
> org.apache.cassandra.db.streaming.CassandraEntireSSTableStreamReader.read(CassandraEntireSSTableStreamReader.java:125)
>   at 
> org.apache.cassandra.db.streaming.CassandraIncomingFile.read(CassandraIncomingFile.java:84)
>   at 
> org.apache.cassandra.streaming.messages.IncomingStreamMessage$1.deserialize(IncomingStreamMessage.java:51)
>   at 
> org.apache.cassandra.streaming.messages.IncomingStreamMessage$1.deserialize(IncomingStreamMessage.java:37)
>   at 
> org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:50)
>   at 
> org.apache.cassandra.streaming.StreamDeserializingTask.run(StreamDeserializingTask.java:62)
>   at 
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.nio.channels.ClosedChannelException: null
>   at 
> org.apache.cassandra.net.AsyncStreamingInputPlus.reBuffer(AsyncStreamingInputPlus.java:136)
>   at 
> org.apache.cassandra.net.AsyncStreamingInputPlus.consume(AsyncStreamingInputPlus.java:155)
>   at 
> org.apache.cassandra.io.sstable.format.big.BigTableZeroCopyWriter.write(BigTableZeroCopyWriter.java:217)
>   ... 9 common frames omitted
> {code}
> When bootstrap fails and streaming is closed, this triggers the disk failure 
> policy which causes the JVM to halt by default (if this happens outside of 
> bootstrap, then we stop transports and keep the JVM up).
> org.apache.cassandra.streaming.StreamDeserializingTask attempts to handle 
> this by ignoring this exception, but the call to 
> org.apache.cassandra.streaming.messages.IncomingStreamMessage$1.deserialize
>  Does try/catch and inspects exception; triggering this condition.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-9633) Add ability to encrypt sstables

2021-11-15 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17444123#comment-17444123
 ] 

Jeremiah Jordan commented on CASSANDRA-9633:


Is there a link to the branch you have been working on?  Thanks.

> Add ability to encrypt sstables
> ---
>
> Key: CASSANDRA-9633
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9633
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Legacy/Core
>Reporter: Jason Brown
>Assignee: shylaja kokoori
>Priority: Normal
>  Labels: encryption, security, sstable
> Fix For: 4.x
>
>
> Add option to allow encrypting of sstables.
> I have a version of this functionality built on cassandra 2.0 that 
> piggy-backs on the existing sstable compression functionality and ICompressor 
> interface (similar in nature to what DataStax Enterprise does). However, if 
> we're adding the feature to the main OSS product, I'm not sure if we want to 
> use the pluggable compression framework or if it's worth investigating a 
> different path. I think there's a lot of upside in reusing the sstable 
> compression scheme, but perhaps add a new component in cqlsh for table 
> encryption and a corresponding field in CFMD.
> Encryption configuration in the yaml can use the same mechanism as 
> CASSANDRA-6018 (which is currently pending internal review).



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17059) Support adding custom verbs at runtime

2021-11-02 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17437389#comment-17437389
 ] 

Jeremiah Jordan commented on CASSANDRA-17059:
-

Over the course of many years of developing features on top of Cassandra, the 
ability to plug in to the verb handlers has been very useful.  Anyone trying to 
maintain a QueryHandler or any other feature that needs communication between 
nodes benefits from this.  In early versions of DSE we implemented a completely 
separate internode communication side channel before later changing to 
maintaining a fork of messaging service with these features in it.  Some things 
in DSE that used this feature are NodeSync, cross-node auth cache invalidation, 
a CQL-based RPC mechanism, and an alternate scatter-gather algorithm for 
Solr-based search.  Having such a mechanism means third parties can develop 
such things more easily in external repositories, without having to maintain a 
full Cassandra fork or re-implement a new internode connection with the 
increased security surface such a connection brings.
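
As a strawman for what runtime pluggability could look like, a registry along 
these lines (entirely hypothetical API, not current Cassandra code):
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Entirely hypothetical sketch of a runtime verb registry.
final class VerbRegistryExample
{
    interface Handler { void handle(byte[] payload); }

    private final Map<String, Handler> handlers = new ConcurrentHashMap<>();

    void register(String verb, Handler handler)
    {
        if (handlers.putIfAbsent(verb, handler) != null)
            throw new IllegalStateException("verb already registered: " + verb);
    }

    void dispatch(String verb, byte[] payload)
    {
        Handler h = handlers.get(verb);
        if (h == null)
            throw new IllegalArgumentException("unknown verb: " + verb);
        h.handle(payload);
    }
}
{code}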

> Support adding custom verbs at runtime
> --
>
> Key: CASSANDRA-17059
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17059
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Matt Fleming
>Priority: Normal
>
> Cassandra already has support for registering custom verbs at build time, but 
> there's value in allowing verbs to be added at runtime since that enables new 
> use cases where it's inconvenient or impossible to modify the Cassandra 
> source.
> Additionally, apps that want to register new verbs benefit from running 
> custom code after the default verb handlers execute. This can be achieved 
> with straightforward modifications to the Sink interface (e.g. adding a 
> PostSink class).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17048) Replace sequential sstable generation identifier with ULID

2021-10-20 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17431278#comment-17431278
 ] 

Jeremiah Jordan commented on CASSANDRA-17048:
-

Given the fact that many people develop on Mac OS, which often does not use a 
case sensitive file system, I think we should stay away from file names that 
require case sensitivity for uniqueness.

> Replace sequential sstable generation identifier with ULID
> --
>
> Key: CASSANDRA-17048
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17048
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
>Priority: Normal
> Fix For: 4.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Replace the current sequential sstable generation identifier with a ULID-based one.
> ULID is better because we do not need to scan the existing files to pick the 
> starting number, and we can generate globally unique identifiers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17048) Replace sequential sstable generation identifier with ULID

2021-10-19 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17430856#comment-17430856
 ] 

Jeremiah Jordan commented on CASSANDRA-17048:
-

{quote}Ah, this library you reference is also LGPL, so is anyway incompatible 
with the Apache 2.0 license.
{quote}
It is dual licensed LGPL or ASLv2

[https://github.com/huxi/sulky/blob/master/sulky-ulid/src/main/java/de/huxhorn/sulky/ulid/ULID.java#L19-L33]

>From [https://github.com/huxi/sulky]
{quote}sulky modules are licensed LGPLv3 & ASLv2.
{quote}
 

 

> Replace sequential sstable generation identifier with ULID
> --
>
> Key: CASSANDRA-17048
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17048
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
>Priority: Normal
> Fix For: 4.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Replace the current sequential sstable generation identifier with a ULID-based one.
> ULID is better because we do not need to scan the existing files to pick the 
> starting number, and we can generate globally unique identifiers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17048) Replace sequential sstable generation identifier with ULID

2021-10-19 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17430600#comment-17430600
 ] 

Jeremiah Jordan commented on CASSANDRA-17048:
-

I think this is a great change.  SSTable generation number clashes have always 
been an issue when backing up and restoring tables.  There was just a 
conversation the other day in the Slack where a truncate of a table followed by 
a restart caused the sstable numbers to reset, which then messed up someone’s 
backups.

> Replace sequential sstable generation identifier with ULID
> --
>
> Key: CASSANDRA-17048
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17048
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/SSTable
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
>Priority: Normal
> Fix For: 4.1
>
>
> Replace the current sequential sstable generation identifier with a ULID-based one.
> ULID is better because we do not need to scan the existing files to pick the 
> starting number, and we can generate globally unique identifiers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17044) Refactor schema management to allow for schema source pluggability

2021-10-15 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17429355#comment-17429355
 ] 

Jeremiah Jordan commented on CASSANDRA-17044:
-

A CEP around this is in progress and should be ready to propose in the next week.

> Refactor schema management to allow for schema source pluggability
> --
>
> Key: CASSANDRA-17044
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17044
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Cluster/Schema
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
>Priority: Normal
>
> The idea is to decompose `Schema` into separate entities responsible for 
> different things. In particular, extract what is related to schema storage and 
> synchronization into a separate class so that it is possible to create an 
> extension point there and store schema in a different way than the 
> `system_schema` keyspace, for example in etcd. 
> This would also simplify the logic and reduce the number of special cases, 
> making everything more testable and the logic of internal classes 
> encapsulated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12988) make the consistency level for user-level auth reads and writes configurable

2021-09-21 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17418214#comment-17418214
 ] 

Jeremiah Jordan commented on CASSANDRA-12988:
-

Right, I think the main thing is the checks around auto-creation of the 
cassandra user.  Might be nice to add a -D to disable the auto-creation code 
completely while doing this.  The paranoid among us could set that flag once 
the cluster is set up and then not have to worry about that code doing 
something funny down the line.

> make the consistency level for user-level auth reads and writes configurable
> 
>
> Key: CASSANDRA-12988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12988
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>Reporter: Jason Brown
>Assignee: Josh McKenzie
>Priority: Low
> Fix For: 4.x
>
>
> Most reads for the auth-related tables execute at {{LOCAL_ONE}}. We'd like to 
> make it configurable, with the default still being {{LOCAL_ONE}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-12988) make the consistency level for user-level auth reads and writes configurable

2021-09-17 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416793#comment-17416793
 ] 

Jeremiah Jordan edited comment on CASSANDRA-12988 at 9/17/21, 4:38 PM:
---

Yes, keeping QUORUM for that would solve the auto create issue.

-Just thought of another issue.  In the wild people will very often set the RF 
of the auth key space to be equal to the number of nodes in the DC.  I have 
seen people set it to 12 or even higher in a single DC.  They do this to make 
the LOCAL_ONE query able to always be to the current node, lowering the chances 
of auth failures from other nodes being slow.  Switching to always using 
LOCAL_QUORUM will go very badly in these cases.-

-I would suggest we need to be able to keep the LOCAL_ONE query as an option.-

Just re-read the patch and it kept the 
auth_read_consistency_level/auth_write_consistency_level settings to pick the 
level in the yaml, it seemed like that was not there from the JIRA comments.  
That should be fine.


was (Author: jjordan):
Yes, keeping QUORUM for that would solve the auto create issue.

-Just thought of another issue.  In the wild people will very often set the RF 
of the auth key space to be equal to the number of nodes in the DC.  I have 
seen people set it to 12 or even higher in a single DC.  They do this to make 
the LOCAL_ONE query able to always be to the current node, lowering the chances 
of auth failures from other nodes being slow.  Switching to always using 
LOCAL_QUORUM will go very badly in these cases.

I would suggest we need to be able to keep the LOCAL_ONE query as an option.-

Just re-read the patch and it kept the 
auth_read_consistency_level/auth_write_consistency_level settings to pick the 
level in the yaml, it seemed like that was not there from the JIRA comments.  
That should be fine.
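
For readers of the archive, the yaml settings referred to above look roughly 
like this (the default values shown are assumptions; check the cassandra.yaml 
of your version):
{code}
# Consistency levels for user-level auth reads and writes.
auth_read_consistency_level: LOCAL_QUORUM
auth_write_consistency_level: EACH_QUORUM
{code}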

> make the consistency level for user-level auth reads and writes configurable
> 
>
> Key: CASSANDRA-12988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12988
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>Reporter: Jason Brown
>Assignee: Josh McKenzie
>Priority: Low
> Fix For: 4.x
>
>
> Most reads for the auth-related tables execute at {{LOCAL_ONE}}. We'd like to 
> make it configurable, with the default still being {{LOCAL_ONE}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-12988) make the consistency level for user-level auth reads and writes configurable

2021-09-17 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416793#comment-17416793
 ] 

Jeremiah Jordan edited comment on CASSANDRA-12988 at 9/17/21, 4:38 PM:
---

Yes, keeping QUORUM for that would solve the auto create issue.

-Just thought of another issue.  In the wild people will very often set the RF 
of the auth key space to be equal to the number of nodes in the DC.  I have 
seen people set it to 12 or even higher in a single DC.  They do this to make 
the LOCAL_ONE query able to always be to the current node, lowering the chances 
of auth failures from other nodes being slow.  Switching to always using 
LOCAL_QUORUM will go very badly in these cases.

I would suggest we need to be able to keep the LOCAL_ONE query as an option.-

Just re-read the patch and it kept the 
auth_read_consistency_level/auth_write_consistency_level settings to pick the 
level in the yaml, it seemed like that was not there from the JIRA comments.  
That should be fine.


was (Author: jjordan):
Yes, keeping QUORUM for that would solve the auto create issue.

Just thought of another issue.  In the wild people will very often set the RF 
of the auth key space to be equal to the number of nodes in the DC.  I have 
seen people set it to 12 or even higher in a single DC.  They do this to make 
the LOCAL_ONE query able to always be to the current node, lowering the chances 
of auth failures from other nodes being slow.  Switching to always using 
LOCAL_QUORUM will go very badly in these cases.

I would suggest we need to be able to keep the LOCAL_ONE query as an option.

> make the consistency level for user-level auth reads and writes configurable
> 
>
> Key: CASSANDRA-12988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12988
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>Reporter: Jason Brown
>Assignee: Josh McKenzie
>Priority: Low
> Fix For: 4.x
>
>
> Most reads for the auth-related tables execute at {{LOCAL_ONE}}. We'd like to 
> make it configurable, with the default still being {{LOCAL_ONE}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-12988) make the consistency level for user-level auth reads and writes configurable

2021-09-17 Thread Jeremiah Jordan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416793#comment-17416793
 ] 

Jeremiah Jordan commented on CASSANDRA-12988:
-

Yes, keeping QUORUM for that would solve the auto create issue.

Just thought of another issue.  In the wild people will very often set the RF 
of the auth key space to be equal to the number of nodes in the DC.  I have 
seen people set it to 12 or even higher in a single DC.  They do this to make 
the LOCAL_ONE query able to always be to the current node, lowering the chances 
of auth failures from other nodes being slow.  Switching to always using 
LOCAL_QUORUM will go very badly in these cases.

I would suggest we need to be able to keep the LOCAL_ONE query as an option.

> make the consistency level for user-level auth reads and writes configurable
> 
>
> Key: CASSANDRA-12988
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12988
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Core
>Reporter: Jason Brown
>Assignee: Josh McKenzie
>Priority: Low
> Fix For: 4.x
>
>
> Most reads for the auth-related tables execute at {{LOCAL_ONE}}. We'd like to 
> make it configurable, with the default still being {{LOCAL_ONE}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org


