[jira] [Commented] (CASSANDRA-17625) Test Failure: dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from Cassandra dtests)

2022-06-07 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551395#comment-17551395
 ] 

Berenguer Blasi commented on CASSANDRA-17625:
-

lol Thx! Let's try to remember, one of us or the other, to revisit timeouts from a 
generic PoV once 4.1 is out and CI is back to normal again.

> Test Failure: 
> dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from 
> Cassandra dtests)
> ---
>
> Key: CASSANDRA-17625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17625
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Josh McKenzie
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.1-beta, 4.1.x, 4.x
>
>
> Flaked a couple times on 4.1
> {code}
> Error Message
> cassandra.DriverException: Keyspace metadata was not refreshed. See log for 
> details.
> {code}
> https://ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/
> Nightlies archive if above dropped: 
> https://nightlies.apache.org/cassandra/ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17625) Test Failure: dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from Cassandra dtests)

2022-06-07 Thread Berenguer Blasi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Berenguer Blasi updated CASSANDRA-17625:

  Since Version: 4.1
Source Control Link: 
https://github.com/apache/cassandra-dtest/commit/511df040525543383a979e6d20e9ab150af7e7fe
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> Test Failure: 
> dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from 
> Cassandra dtests)
> ---
>
> Key: CASSANDRA-17625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17625
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Josh McKenzie
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.1-beta, 4.1.x, 4.x
>
>
> Flaked a couple times on 4.1
> {code}
> Error Message
> cassandra.DriverException: Keyspace metadata was not refreshed. See log for 
> details.
> {code}
> https://ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/
> Nightlies archive if above dropped: 
> https://nightlies.apache.org/cassandra/ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-dtest] branch trunk updated: Test Failure: dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable

2022-06-07 Thread bereng
This is an automated email from the ASF dual-hosted git repository.

bereng pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra-dtest.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 511df040 Test Failure: 
dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable
511df040 is described below

commit 511df040525543383a979e6d20e9ab150af7e7fe
Author: Bereng 
AuthorDate: Wed May 18 11:09:45 2022 +0200

Test Failure: 
dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable

Patch by Berenguer Blasi; reviewed by Josh McKenzie for CASSANDRA-17625
---
 auth_test.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/auth_test.py b/auth_test.py
index 4bfe5003..f4ab2e87 100644
--- a/auth_test.py
+++ b/auth_test.py
@@ -64,7 +64,7 @@ class TestAuth(AbstractTestAuth):
 auth_metadata = UpdatingKeyspaceMetadataWrapper(
 cluster=session.cluster,
 ks_name='system_auth',
-max_schema_agreement_wait=30  # 3x the default of 10
+max_schema_agreement_wait=60  # 6x the default of 10
 )
 assert 1 == auth_metadata.replication_strategy.replication_factor
 

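For context, a rough sketch (not the dtest or driver code itself) of the driver-side
refresh that max_schema_agreement_wait feeds into, assuming the python cassandra-driver;
the DriverException quoted in the ticket is raised when schema agreement is not reached
within that window, which this patch widens from 30s to 60s:

{code}
# Sketch only: the refresh that can raise
# "cassandra.DriverException: Keyspace metadata was not refreshed. See log for details."
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])
session = cluster.connect()
session.execute("ALTER KEYSPACE system_auth WITH replication = "
                "{'class': 'SimpleStrategy', 'replication_factor': 1}")

# Fails if the nodes do not agree on the new schema within the wait window.
cluster.refresh_keyspace_metadata('system_auth', max_schema_agreement_wait=60)
rf = cluster.metadata.keyspaces['system_auth'].replication_strategy.replication_factor
assert rf == 1
{code}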

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17669) CentOS/RHEL installation requires JRE not available in Java 11

2022-06-07 Thread Berenguer Blasi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Berenguer Blasi updated CASSANDRA-17669:

Status: Ready to Commit  (was: Review In Progress)

> CentOS/RHEL installation requires JRE not available in Java 11
> --
>
> Key: CASSANDRA-17669
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17669
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies
>Reporter: Erick Ramirez
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.1.x
>
>
> h2. Background
> A user [reported on Stack 
> Overflow|https://stackoverflow.com/questions/72377621/] and the DataStax 
> Developers [dtsx.io/discord|https://dtsx.io/discord] an issue with installing 
> Cassandra when only Java 11 is installed.
> h2. Symptoms
> Attempting to install Cassandra using YUM requires Java 8:
> {noformat}
> $ sudo yum install cassandra
> Dependencies resolved.
> ==========================================================================================
>  Package                      Architecture  Version                  Repository  Size
> ==========================================================================================
> Installing:
>  cassandra                    noarch        4.0.4-1                  cassandra    45 M
> Installing dependencies:
>  java-1.8.0-openjdk           x86_64        1:1.8.0.312.b07-2.el8_5  appstream   341 k
>  java-1.8.0-openjdk-headless  x86_64        1:1.8.0.312.b07-2.el8_5  appstream    34 M
> Installing weak dependencies:
>  gtk2                         x86_64        2.24.32-5.el8            appstream   3.4 M
>
> Transaction Summary
> ==========================================================================================
> Install  4 Packages
> {noformat}
> Similarly, attempting to install the RPM results in:
> {noformat}
> $ sudo rpm -i cassandra-4.0.4-1.noarch.rpm 
> warning: cassandra-4.0.4-1.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 
> 7e3e87cb: NOKEY
> error: Failed dependencies:
>   jre >= 1.8.0 is needed by cassandra-4.0.4-1.noarch{noformat}
> h2. Root cause
> Package installs on CentOS and RHEL platforms have [a dependency on JRE 
> 1.8+|https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L49]:
> {noformat}
> Requires:  jre >= 1.8.0{noformat}
> However, JRE is no longer available in Java 11. From the [JDK 11 release 
> notes|https://www.oracle.com/java/technologies/javase/11-relnote-issues.html]:
> {quote}In this release, the JRE or Server JRE is no longer offered. Only the 
> JDK is offered.
> {quote}
> h2. Workaround
> Override the dependency check when installing the RPM with the {{--nodeps}} 
> option:
> {noformat}
> $ sudo rpm --nodeps -i cassandra-4.0.4-1.noarch.rpm {noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17669) CentOS/RHEL installation requires JRE not available in Java 11

2022-06-07 Thread Berenguer Blasi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Berenguer Blasi updated CASSANDRA-17669:

Reviewers: Berenguer Blasi, Berenguer Blasi
   Berenguer Blasi, Berenguer Blasi  (was: Berenguer Blasi)
   Status: Review In Progress  (was: Patch Available)

> CentOS/RHEL installation requires JRE not available in Java 11
> --
>
> Key: CASSANDRA-17669
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17669
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies
>Reporter: Erick Ramirez
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.1.x
>
>
> h2. Background
> A user [reported on Stack 
> Overflow|https://stackoverflow.com/questions/72377621/] and the DataStax 
> Developers [dtsx.io/discord|https://dtsx.io/discord] an issue with installing 
> Cassandra when only Java 11 is installed.
> h2. Symptoms
> Attempting to install Cassandra using YUM requires Java 8:
> {noformat}
> $ sudo yum install cassandra
> Dependencies resolved.
> ==========================================================================================
>  Package                      Architecture  Version                  Repository  Size
> ==========================================================================================
> Installing:
>  cassandra                    noarch        4.0.4-1                  cassandra    45 M
> Installing dependencies:
>  java-1.8.0-openjdk           x86_64        1:1.8.0.312.b07-2.el8_5  appstream   341 k
>  java-1.8.0-openjdk-headless  x86_64        1:1.8.0.312.b07-2.el8_5  appstream    34 M
> Installing weak dependencies:
>  gtk2                         x86_64        2.24.32-5.el8            appstream   3.4 M
>
> Transaction Summary
> ==========================================================================================
> Install  4 Packages
> {noformat}
> Similarly, attempting to install the RPM results in:
> {noformat}
> $ sudo rpm -i cassandra-4.0.4-1.noarch.rpm 
> warning: cassandra-4.0.4-1.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 
> 7e3e87cb: NOKEY
> error: Failed dependencies:
>   jre >= 1.8.0 is needed by cassandra-4.0.4-1.noarch{noformat}
> h2. Root cause
> Package installs on CentOS and RHEL platforms have [a dependency on JRE 
> 1.8+|https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L49]:
> {noformat}
> Requires:  jre >= 1.8.0{noformat}
> However, JRE is no longer available in Java 11. From the [JDK 11 release 
> notes|https://www.oracle.com/java/technologies/javase/11-relnote-issues.html]:
> {quote}In this release, the JRE or Server JRE is no longer offered. Only the 
> JDK is offered.
> {quote}
> h2. Workaround
> Override the dependency check when installing the RPM with the {{--nodeps}} 
> option:
> {noformat}
> $ sudo rpm --nodeps -i cassandra-4.0.4-1.noarch.rpm {noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17669) CentOS/RHEL installation requires JRE not available in Java 11

2022-06-07 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551391#comment-17551391
 ] 

Berenguer Blasi commented on CASSANDRA-17669:
-

Ok LGTM +1

> CentOS/RHEL installation requires JRE not available in Java 11
> --
>
> Key: CASSANDRA-17669
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17669
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies
>Reporter: Erick Ramirez
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.1.x
>
>
> h2. Background
> A user [reported on Stack 
> Overflow|https://stackoverflow.com/questions/72377621/] and the DataStax 
> Developers [dtsx.io/discord|https://dtsx.io/discord] an issue with installing 
> Cassandra when only Java 11 is installed.
> h2. Symptoms
> Attempting to install Cassandra using YUM requires Java 8:
> {noformat}
> $ sudo yum install cassandra
> Dependencies resolved.
> ==========================================================================================
>  Package                      Architecture  Version                  Repository  Size
> ==========================================================================================
> Installing:
>  cassandra                    noarch        4.0.4-1                  cassandra    45 M
> Installing dependencies:
>  java-1.8.0-openjdk           x86_64        1:1.8.0.312.b07-2.el8_5  appstream   341 k
>  java-1.8.0-openjdk-headless  x86_64        1:1.8.0.312.b07-2.el8_5  appstream    34 M
> Installing weak dependencies:
>  gtk2                         x86_64        2.24.32-5.el8            appstream   3.4 M
>
> Transaction Summary
> ==========================================================================================
> Install  4 Packages
> {noformat}
> Similarly, attempting to install the RPM results in:
> {noformat}
> $ sudo rpm -i cassandra-4.0.4-1.noarch.rpm 
> warning: cassandra-4.0.4-1.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 
> 7e3e87cb: NOKEY
> error: Failed dependencies:
>   jre >= 1.8.0 is needed by cassandra-4.0.4-1.noarch{noformat}
> h2. Root cause
> Package installs on CentOS and RHEL platforms have [a dependency on JRE 
> 1.8+|https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L49]:
> {noformat}
> Requires:  jre >= 1.8.0{noformat}
> However, JRE is no longer available in Java 11. From the [JDK 11 release 
> notes|https://www.oracle.com/java/technologies/javase/11-relnote-issues.html]:
> {quote}In this release, the JRE or Server JRE is no longer offered. Only the 
> JDK is offered.
> {quote}
> h2. Workaround
> Override the dependency check when installing the RPM with the {{--nodeps}} 
> option:
> {noformat}
> $ sudo rpm --nodeps -i cassandra-4.0.4-1.noarch.rpm {noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Assigned] (CASSANDRA-17687) Remove "--frames" option when generating javadoc

2022-06-07 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned CASSANDRA-17687:
-

Assignee: Zili Chen

> Remove "--frames" option when generating javadoc
> 
>
> Key: CASSANDRA-17687
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17687
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Zili Chen
>Assignee: Zili Chen
>Priority: Normal
>
> JDK 17 doesn't support this option, and it doesn't seem necessary anyway. For 
> forward compatibility I propose we remove it.
> Related JDK issue: [https://bugs.openjdk.org/browse/JDK-8215599]
> I volunteer to prepare a patch if this sounds like a good direction.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-17687) Remove "--frames" option when generating javadoc

2022-06-07 Thread Zili Chen (Jira)
Zili Chen created CASSANDRA-17687:
-

 Summary: Remove "--frames" option when generating javadoc
 Key: CASSANDRA-17687
 URL: https://issues.apache.org/jira/browse/CASSANDRA-17687
 Project: Cassandra
  Issue Type: Improvement
Reporter: Zili Chen


JDK 17 doesn't support this option, and it doesn't seem necessary anyway. For forward 
compatibility I propose we remove it.

Related JDK issue: [https://bugs.openjdk.org/browse/JDK-8215599]

I volunteer to prepare a patch if this sounds like a good direction.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17411) Network partition causes write ONE timeouts when using counters in Cassandra 4

2022-06-07 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-17411:
-
Status: Open  (was: Patch Available)

> Network partition causes write ONE timeouts when using counters in Cassandra 4
> --
>
> Key: CASSANDRA-17411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17411
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Pere Balaguer
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.x
>
> Attachments: app.py
>
>
> h5. Affected versions:
>  * 4.x
> h5. Observed behavior:
> When executing CL=ONE writes on a table with a counter column, if one of the 
> nodes is network partitioned from the others, clients keep sending requests 
> to it.
> Even though this may be a "driver" problem, I've been able to reproduce it 
> with both java and python datastax drivers using their latest available 
> versions and given the behavior only changes depending on the Cassandra 
> version, well, here I am.
> h5. Expected behavior:
> In Cassandra 3 after all inflight requests fail (expected), no new requests 
> are sent to the partitioned node. The expectation is that Cassandra 4 behaves 
> the same way.
> h5. How to reproduce:
> {noformat}
> # Create a cluster with the desired version, will go with 4.x for this example
> ccm create bug-report -v 4.0.3
> ccm populate -n 2:2:2
> ccm start
> # Create schemas and so on
> CQL=$(cat <<END
> CONSISTENCY ALL;
> DROP KEYSPACE IF EXISTS demo;
> CREATE KEYSPACE demo WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 
> 'dc1': 2, 'dc2': 2, 'dc3': 2};
> CREATE TABLE demo.demo (pk uuid PRIMARY KEY, count counter) WITH compaction = 
> {'class': 'LeveledCompactionStrategy'};
> END
> )
> ccm node1 cqlsh --verbose --exec="${CQL}"
> # Launch the attached app.py
> # requires cassandra-driver
> python3 app.py "127.0.0.1" "9042"
> # Wait a bit for the app to settle, proceed to next step once you see 3 
> messages in stdout like:
> # 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 
> (0.00) timeouts/total_rqs in the last 1 minute
> # Partition one node with iptables
> iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 7000 -j 
> DROP; iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 
> 9042 -j DROP
> {noformat}
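> (app.py is attached to the ticket and not reproduced here; the following is only a
> rough sketch of the per-DC write loop described above, assuming the python
> cassandra-driver.)
> {code}
> # Sketch only -- NOT the attached app.py: one CL=ONE counter-write loop per DC,
> # logging timeouts/total_rqs once a minute.
> import logging, time, uuid
> from cassandra import ConsistencyLevel, OperationTimedOut, WriteTimeout
> from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
> from cassandra.policies import DCAwareRoundRobinPolicy
>
> logging.basicConfig(level=logging.INFO)
>
> def run(dc, contact_point='127.0.0.1', port=9042):
>     log = logging.getLogger('target-%s' % dc)
>     profile = ExecutionProfile(
>         load_balancing_policy=DCAwareRoundRobinPolicy(local_dc=dc),
>         consistency_level=ConsistencyLevel.ONE)
>     cluster = Cluster([contact_point], port=port,
>                       execution_profiles={EXEC_PROFILE_DEFAULT: profile})
>     session = cluster.connect()
>     update = session.prepare('UPDATE demo.demo SET count = count + 1 WHERE pk = ?')
>     timeouts = total = 0
>     window = time.time()
>     while True:
>         try:
>             session.execute(update, [uuid.uuid4()])
>         except (WriteTimeout, OperationTimedOut):
>             timeouts += 1
>         total += 1
>         if time.time() - window >= 60:
>             log.info('Got %d/%d timeouts/total_rqs in the last 1 minute', timeouts, total)
>             timeouts = total = 0
>             window = time.time()
>
> # The real script presumably runs one such loop per DC (dc1, dc2, dc3).
> {code}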
> Some time after executing the iptables command in cassandra-3 the output 
> should be similar to:
> {noformat}
> 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:41:51,576 - target-dc3 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:41:58,032 - target-dc1 - __main__ - INFO - Got 6/252 (2.380952) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:51,560 - target-dc2 - __main__ - INFO - Got 0/570 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:51,620 - target-dc3 - __main__ - INFO - Got 0/570 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:58,101 - target-dc1 - __main__ - INFO - Got 2/354 (0.564972) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:51,602 - target-dc2 - __main__ - INFO - Got 0/571 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:51,672 - target-dc3 - __main__ - INFO - Got 0/571 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:58,153 - target-dc1 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> {noformat}
> As the timeouts/total_rqs figures show, in about 2 minutes the partitioned node stops 
> receiving traffic, whereas in cassandra-4:
> {noformat}
> 2022-03-01 15:49:39,068 - target-dc3 - __main__ - INFO - Got 0/566 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:49:39,107 - target-dc2 - __main__ - INFO - Got 0/566 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:49:41,206 - target-dc1 - __main__ - INFO - Got 2/444 (0.450450) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:39,095 - target-dc3 - __main__ - INFO - Got 0/569 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:39,148 - target-dc2 - __main__ - INFO - Got 0/569 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:42,589 - target-dc1 - __main__ - INFO - Got 7/13 (53.846154) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:51:39,125 - target-dc3 - __main__ - INFO - Got 0/567 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:51:39,159 - target-dc2 - __main__ - INFO - Got 0/567 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 

[jira] [Commented] (CASSANDRA-17584) Fix flaky test - org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions

2022-06-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551315#comment-17551315
 ] 

Andres de la Peña commented on CASSANDRA-17584:
---

It seems this has been committed to 4.1 without running the pre-commit tests or 
additional multiplexer runs for the modified test. I don't see any CI results for 
trunk either. Here are the missed runs:
||Branch||CI||
|4.1|[j8|https://app.circleci.com/pipelines/github/adelapena/cassandra/1677/workflows/575d0a97-5568-4779-be06-ff8164f60c8e]
 
[j11|https://app.circleci.com/pipelines/github/adelapena/cassandra/1677/workflows/9fb4e85e-fa97-4a6c-8c2c-9b5cc3ae7f05]|
|trunk|[j8|https://app.circleci.com/pipelines/github/adelapena/cassandra/1678/workflows/1dd1dc71-297d-4b55-8dca-9813dca4f985]
 
[j11|https://app.circleci.com/pipelines/github/adelapena/cassandra/1678/workflows/a70374ab-b888-4931-9ec8-0940be3bfd75]|

 

> Fix flaky test - 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions
> --
>
> Key: CASSANDRA-17584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17584
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Virtual Tables
>Reporter: Brandon Williams
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 4.1-beta, 4.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> h3. Error Message
> ['LOAD' is expected to be null] expected: but was:
> h3. Stacktrace
> junit.framework.AssertionFailedError: ['LOAD' is expected to be null] 
> expected: but was: at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.assertValue(GossipInfoTableTest.java:174)
>  at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions(GossipInfoTableTest.java:96)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17500) Create Maximum Keyspace Replication Factor Guardrail

2022-06-07 Thread Savni Nagarkar (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551303#comment-17551303
 ] 

Savni Nagarkar commented on CASSANDRA-17500:


||Github||Circle CI||
|[pr|https://github.com/apache/cassandra/pull/1582]|[j8|https://app.circleci.com/pipelines/github/thingtwin1/cassandra/76/workflows/bab530a7-460d-4107-a243-ed260a8887ab]
 
[j11|https://app.circleci.com/pipelines/github/thingtwin1/cassandra/76/workflows/d9bfe461-d326-40ac-aa0c-90ee39d6a291]|

> Create Maximum Keyspace Replication Factor Guardrail 
> -
>
> Key: CASSANDRA-17500
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17500
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Guardrails
>Reporter: Savni Nagarkar
>Assignee: Savni Nagarkar
>Priority: Normal
> Fix For: 4.x
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> This ticket adds a maximum replication factor guardrail to ensure safety when 
> creating or altering keyspaces. The limit will be applied per data center. The 
> ticket was prompted by a user setting the replication factor equal to the number 
> of nodes in the cluster. The property will be added to guardrails to ensure 
> consistency.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17584) Fix flaky test - org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions

2022-06-07 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-17584:
--
Fix Version/s: 4.1-beta
   (was: 4.1)

> Fix flaky test - 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions
> --
>
> Key: CASSANDRA-17584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17584
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Virtual Tables
>Reporter: Brandon Williams
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 4.1-beta, 4.2
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> h3. Error Message
> ['LOAD' is expected to be null] expected: but was:
> h3. Stacktrace
> junit.framework.AssertionFailedError: ['LOAD' is expected to be null] 
> expected: but was: at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.assertValue(GossipInfoTableTest.java:174)
>  at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions(GossipInfoTableTest.java:96)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17584) Fix flaky test - org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions

2022-06-07 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-17584:
--
Fix Version/s: 4.2
   (was: 4.x)

> Fix flaky test - 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions
> --
>
> Key: CASSANDRA-17584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17584
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Virtual Tables
>Reporter: Brandon Williams
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 4.1, 4.2
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> h3. Error Message
> ['LOAD' is expected to be null] expected: but was:
> h3. Stacktrace
> junit.framework.AssertionFailedError: ['LOAD' is expected to be null] 
> expected: but was: at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.assertValue(GossipInfoTableTest.java:174)
>  at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions(GossipInfoTableTest.java:96)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17584) Fix flaky test - org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions

2022-06-07 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-17584:
--
  Fix Version/s: 4.1
 (was: 4.1-beta)
  Since Version: 4.1-alpha1
Source Control Link: 
https://github.com/apache/cassandra/commit/457e16c27ee65063fa15963c58bea3e9a63c5aa5
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> Fix flaky test - 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions
> --
>
> Key: CASSANDRA-17584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17584
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Virtual Tables
>Reporter: Brandon Williams
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 4.1, 4.x
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> h3. Error Message
> ['LOAD' is expected to be null] expected: but was:
> h3. Stacktrace
> junit.framework.AssertionFailedError: ['LOAD' is expected to be null] 
> expected: but was: at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.assertValue(GossipInfoTableTest.java:174)
>  at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions(GossipInfoTableTest.java:96)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17584) Fix flaky test - org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions

2022-06-07 Thread Stefan Miklosovic (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Miklosovic updated CASSANDRA-17584:
--
Status: Ready to Commit  (was: Review In Progress)

> Fix flaky test - 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions
> --
>
> Key: CASSANDRA-17584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17584
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Virtual Tables
>Reporter: Brandon Williams
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 4.1-beta, 4.x
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> h3. Error Message
> ['LOAD' is expected to be null] expected: but was:
> h3. Stacktrace
> junit.framework.AssertionFailedError: ['LOAD' is expected to be null] 
> expected: but was: at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.assertValue(GossipInfoTableTest.java:174)
>  at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions(GossipInfoTableTest.java:96)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] branch cassandra-4.1 updated (9b4784bdb7 -> 457e16c27e)

2022-06-07 Thread smiklosovic
This is an automated email from the ASF dual-hosted git repository.

smiklosovic pushed a change to branch cassandra-4.1
in repository https://gitbox.apache.org/repos/asf/cassandra.git


from 9b4784bdb7 Fix missed nowInSec values in QueryProcessor
 add 457e16c27e fix flaky GossipInfoTableTest

No new revisions were added by this update.

Summary of changes:
 .../cassandra/db/virtual/GossipInfoTable.java  | 18 +-
 .../cassandra/db/virtual/GossipInfoTableTest.java  | 64 +++---
 2 files changed, 49 insertions(+), 33 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] branch trunk updated (99d034a224 -> 1b3aec8eef)

2022-06-07 Thread smiklosovic
This is an automated email from the ASF dual-hosted git repository.

smiklosovic pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


from 99d034a224 Option to disable CDC on SSTable repair
 add 457e16c27e fix flaky GossipInfoTableTest
 add 1b3aec8eef Merge branch 'cassandra-4.1' into trunk

No new revisions were added by this update.

Summary of changes:
 .../cassandra/db/virtual/GossipInfoTable.java  | 18 +-
 .../cassandra/db/virtual/GossipInfoTableTest.java  | 64 +++---
 2 files changed, 49 insertions(+), 33 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17411) Network partition causes write ONE timeouts when using counters in Cassandra 4

2022-06-07 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-17411:
-
Test and Documentation Plan: run CI
 Status: Patch Available  (was: Open)

> Network partition causes write ONE timeouts when using counters in Cassandra 4
> --
>
> Key: CASSANDRA-17411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17411
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Pere Balaguer
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.x
>
> Attachments: app.py
>
>
> h5. Affected versions:
>  * 4.x
> h5. Observed behavior:
> When executing CL=ONE writes on a table with a counter column, if one of the 
> nodes is network partitioned from the others, clients keep sending requests 
> to it.
> Even though this may be a "driver" problem, I've been able to reproduce it 
> with both java and python datastax drivers using their latest available 
> versions and given the behavior only changes depending on the Cassandra 
> version, well, here I am.
> h5. Expected behavior:
> In Cassandra 3 after all inflight requests fail (expected), no new requests 
> are sent to the partitioned node. The expectation is that Cassandra 4 behaves 
> the same way.
> h5. How to reproduce:
> {noformat}
> # Create a cluster with the desired version, will go with 4.x for this example
> ccm create bug-report -v 4.0.3
> ccm populate -n 2:2:2
> ccm start
> # Create schemas and so on
> CQL=$(cat <<END
> CONSISTENCY ALL;
> DROP KEYSPACE IF EXISTS demo;
> CREATE KEYSPACE demo WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 
> 'dc1': 2, 'dc2': 2, 'dc3': 2};
> CREATE TABLE demo.demo (pk uuid PRIMARY KEY, count counter) WITH compaction = 
> {'class': 'LeveledCompactionStrategy'};
> END
> )
> ccm node1 cqlsh --verbose --exec="${CQL}"
> # Launch the attached app.py
> # requires cassandra-driver
> python3 app.py "127.0.0.1" "9042"
> # Wait a bit for the app to settle, proceed to next step once you see 3 
> messages in stdout like:
> # 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 
> (0.00) timeouts/total_rqs in the last 1 minute
> # Partition one node with iptables
> iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 7000 -j 
> DROP; iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 
> 9042 -j DROP
> {noformat}
> Some time after executing the iptables command in cassandra-3 the output 
> should be similar to:
> {noformat}
> 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:41:51,576 - target-dc3 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:41:58,032 - target-dc1 - __main__ - INFO - Got 6/252 (2.380952) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:51,560 - target-dc2 - __main__ - INFO - Got 0/570 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:51,620 - target-dc3 - __main__ - INFO - Got 0/570 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:58,101 - target-dc1 - __main__ - INFO - Got 2/354 (0.564972) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:51,602 - target-dc2 - __main__ - INFO - Got 0/571 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:51,672 - target-dc3 - __main__ - INFO - Got 0/571 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:58,153 - target-dc1 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> {noformat}
> As the timeouts/total_rqs figures show, in about 2 minutes the partitioned node stops 
> receiving traffic, whereas in cassandra-4:
> {noformat}
> 2022-03-01 15:49:39,068 - target-dc3 - __main__ - INFO - Got 0/566 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:49:39,107 - target-dc2 - __main__ - INFO - Got 0/566 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:49:41,206 - target-dc1 - __main__ - INFO - Got 2/444 (0.450450) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:39,095 - target-dc3 - __main__ - INFO - Got 0/569 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:39,148 - target-dc2 - __main__ - INFO - Got 0/569 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:42,589 - target-dc1 - __main__ - INFO - Got 7/13 (53.846154) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:51:39,125 - target-dc3 - __main__ - INFO - Got 0/567 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:51:39,159 - target-dc2 - __main__ - INFO - Got 0/567 

[jira] [Comment Edited] (CASSANDRA-17411) Network partition causes write ONE timeouts when using counters in Cassandra 4

2022-06-07 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551287#comment-17551287
 ] 

Brandon Williams edited comment on CASSANDRA-17411 at 6/7/22 9:02 PM:
--

I believe transient replication is what broke this [in this 
commit|https://github.com/apache/cassandra/commit/f7431b432875e334170ccdb19934d05545d2cebd]
, which never filtered out dead replicas when selecting one for counter writes.

||Branch||CI||
|[4.0|https://github.com/driftx/cassandra/tree/CASSANDRA-17411-4.0]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/516/workflows/863bae3c-30a5-4c1d-9753-60dd13053bb5],
 
[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/516/workflows/f5ad023e-17a6-4aa7-805c-ee4d0176aeb1]|
|[4.1|https://github.com/driftx/cassandra/tree/CASSANDRA-17411-4.1]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/518/workflows/ce903eeb-0a35-40e0-bf79-d70eccae1117],
 
[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/518/workflows/ed60108d-5c53-4fc7-9633-f797472bb103]|
|[trunk|https://github.com/driftx/cassandra/tree/CASSANDRA-17411-trunk]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/517/workflows/80030dc5-78a2-4539-be37-b34ce0e675ed],
 
[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/517/workflows/1284ea83-781b-4638-a3fd-4f7a0a88e71e]|


was (Author: brandon.williams):
I believe transient replication is what broke this [in this 
commit|https://github.com/apache/cassandra/commit/f7431b432875e334170ccdb19934d05545d2cebd]
, which never filtered out dead replicas when selecting one for counter writes.

||Branch||CI||
|[4.0|https://github.com/driftx/cassandra/tree/CASSANDRA-17411-4.0]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/516/workflows/863bae3c-30a5-4c1d-9753-60dd13053bb5],[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/516/workflows/f5ad023e-17a6-4aa7-805c-ee4d0176aeb1]|
|[4.1|https://github.com/driftx/cassandra/tree/CASSANDRA-17411-4.1]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/518/workflows/ce903eeb-0a35-40e0-bf79-d70eccae1117],
 
[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/518/workflows/ed60108d-5c53-4fc7-9633-f797472bb103]|
|[trunk|https://github.com/driftx/cassandra/tree/CASSANDRA-17411-trunk]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/517/workflows/80030dc5-78a2-4539-be37-b34ce0e675ed],[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/517/workflows/1284ea83-781b-4638-a3fd-4f7a0a88e71e]|

> Network partition causes write ONE timeouts when using counters in Cassandra 4
> --
>
> Key: CASSANDRA-17411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17411
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Pere Balaguer
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.x
>
> Attachments: app.py
>
>
> h5. Affected versions:
>  * 4.x
> h5. Observed behavior:
> When executing CL=ONE writes on a table with a counter column, if one of the 
> nodes is network partitioned from the others, clients keep sending requests 
> to it.
> Even though this may be a "driver" problem, I've been able to reproduce it 
> with both java and python datastax drivers using their latest available 
> versions and given the behavior only changes depending on the Cassandra 
> version, well, here I am.
> h5. Expected behavior:
> In Cassandra 3 after all inflight requests fail (expected), no new requests 
> are sent to the partitioned node. The expectation is that Cassandra 4 behaves 
> the same way.
> h5. How to reproduce:
> {noformat}
> # Create a cluster with the desired version, will go with 4.x for this example
> ccm create bug-report -v 4.0.3
> ccm populate -n 2:2:2
> ccm start
> # Create schemas and so on
> CQL=$(cat <<END
> CONSISTENCY ALL;
> DROP KEYSPACE IF EXISTS demo;
> CREATE KEYSPACE demo WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 
> 'dc1': 2, 'dc2': 2, 'dc3': 2};
> CREATE TABLE demo.demo (pk uuid PRIMARY KEY, count counter) WITH compaction = 
> {'class': 'LeveledCompactionStrategy'};
> END
> )
> ccm node1 cqlsh --verbose --exec="${CQL}"
> # Launch the attached app.py
> # requires cassandra-driver
> python3 app.py "127.0.0.1" "9042"
> # Wait a bit for the app to settle, proceed to next step once you see 3 
> messages in stdout like:
> # 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 
> (0.00) timeouts/total_rqs in the last 1 minute
> # Partition one node with iptables
> iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 7000 -j 
> DROP; iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 

[jira] [Commented] (CASSANDRA-17411) Network partition causes write ONE timeouts when using counters in Cassandra 4

2022-06-07 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551287#comment-17551287
 ] 

Brandon Williams commented on CASSANDRA-17411:
--

I believe transient replication is what broke this [in this 
commit|https://github.com/apache/cassandra/commit/f7431b432875e334170ccdb19934d05545d2cebd]
, which never filtered out dead replicas when selecting one for counter writes.

||Branch||CI||
|[4.0|https://github.com/driftx/cassandra/tree/CASSANDRA-17411-4.0]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/516/workflows/863bae3c-30a5-4c1d-9753-60dd13053bb5],[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/516/workflows/f5ad023e-17a6-4aa7-805c-ee4d0176aeb1]|
|[4.1|https://github.com/driftx/cassandra/tree/CASSANDRA-17411-4.1]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/518/workflows/ce903eeb-0a35-40e0-bf79-d70eccae1117],
 
[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/518/workflows/ed60108d-5c53-4fc7-9633-f797472bb103]|
|[trunk|https://github.com/driftx/cassandra/tree/CASSANDRA-17411-trunk]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/517/workflows/80030dc5-78a2-4539-be37-b34ce0e675ed],[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/517/workflows/1284ea83-781b-4638-a3fd-4f7a0a88e71e]|

> Network partition causes write ONE timeouts when using counters in Cassandra 4
> --
>
> Key: CASSANDRA-17411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17411
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Pere Balaguer
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.x
>
> Attachments: app.py
>
>
> h5. Affected versions:
>  * 4.x
> h5. Observed behavior:
> When executing CL=ONE writes on a table with a counter column, if one of the 
> nodes is network partitioned from the others, clients keep sending requests 
> to it.
> Even though this may be a "driver" problem, I've been able to reproduce it 
> with both java and python datastax drivers using their latest available 
> versions and given the behavior only changes depending on the Cassandra 
> version, well, here I am.
> h5. Expected behavior:
> In Cassandra 3 after all inflight requests fail (expected), no new requests 
> are sent to the partitioned node. The expectation is that Cassandra 4 behaves 
> the same way.
> h5. How to reproduce:
> {noformat}
> # Create a cluster with the desired version, will go with 4.x for this example
> ccm create bug-report -v 4.0.3
> ccm populate -n 2:2:2
> ccm start
> # Create schemas and so on
> CQL=$(cat <<END
> CONSISTENCY ALL;
> DROP KEYSPACE IF EXISTS demo;
> CREATE KEYSPACE demo WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 
> 'dc1': 2, 'dc2': 2, 'dc3': 2};
> CREATE TABLE demo.demo (pk uuid PRIMARY KEY, count counter) WITH compaction = 
> {'class': 'LeveledCompactionStrategy'};
> END
> )
> ccm node1 cqlsh --verbose --exec="${CQL}"
> # Launch the attached app.py
> # requires cassandra-driver
> python3 app.py "127.0.0.1" "9042"
> # Wait a bit for the app to settle, proceed to next step once you see 3 
> messages in stdout like:
> # 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 
> (0.00) timeouts/total_rqs in the last 1 minute
> # Partition one node with iptables
> iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 7000 -j 
> DROP; iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 
> 9042 -j DROP
> {noformat}
> Some time after executing the iptables command in cassandra-3 the output 
> should be similar to:
> {noformat}
> 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:41:51,576 - target-dc3 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:41:58,032 - target-dc1 - __main__ - INFO - Got 6/252 (2.380952) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:51,560 - target-dc2 - __main__ - INFO - Got 0/570 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:51,620 - target-dc3 - __main__ - INFO - Got 0/570 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:58,101 - target-dc1 - __main__ - INFO - Got 2/354 (0.564972) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:51,602 - target-dc2 - __main__ - INFO - Got 0/571 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:51,672 - target-dc3 - __main__ - INFO - Got 0/571 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:58,153 - target-dc1 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 

[jira] [Commented] (CASSANDRA-17584) Fix flaky test - org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions

2022-06-07 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551266#comment-17551266
 ] 

Stefan Miklosovic commented on CASSANDRA-17584:
---

https://app.circleci.com/pipelines/github/instaclustr/cassandra/1057/workflows/f0087467-b7ab-4055-be7d-43fb3aee0198

> Fix flaky test - 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions
> --
>
> Key: CASSANDRA-17584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17584
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Virtual Tables
>Reporter: Brandon Williams
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 4.1-beta, 4.x
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> h3. Error Message
> ['LOAD' is expected to be null] expected: but was:
> h3. Stacktrace
> junit.framework.AssertionFailedError: ['LOAD' is expected to be null] 
> expected: but was: at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.assertValue(GossipInfoTableTest.java:174)
>  at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions(GossipInfoTableTest.java:96)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16844) Add number of sstables in a compaction to compactionstats output

2022-06-07 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551248#comment-17551248
 ] 

David Capwell commented on CASSANDRA-16844:
---

bq. FTR if it comes to revert or break people, break people

4.1 is a minor release and this is a feature and not a critical bug fix... 
breaking people in a minor goes against our previous promises

bq. 1) Could be reverting in 4.1 and keeping it in trunk for the next major, as 
originally intended. Probably with a note in NEWS.txt.

Cool with me, but we would need to settle the "what is trunk" debate... right 
now trunk is 4.x and we don't have a 5.x, so we would need to fork (or say the next 
release is 5.0, which means adding breaking changes 2y after 4.0, and that feels very, 
very quick to me).

bq. 2) Seems a bit cumbersome, and we would probably want to get rid of the 
flag on trunk.

Same as previous point, in 5.0 we can drop the flag sure, we just don't have a 
5.0 atm.  We could also add a yaml property to control the default behavior, so 
users who wish to have the 5.0 logic can avoid the flag and rely on the yaml, 
but that has the same issue as we would drop in 5.0 (though we have ways to 
deal with that atm)

bq. 3) Moving to the end makes compatibility better in most cases but it 
doesn't guarantee it.

Agreed, it doesn't mean no one will break; it's just hoping we break fewer people. 
One could argue it's the same as adding a column to a table, as users doing 
"select *" would now notice them as well.

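A toy illustration (not from the ticket) of why option 3, appending the new column at
the end, is gentler on positional text parsers than inserting it mid-row:

{code}
# Hypothetical row layout, not the real compactionstats format.
old = "Compaction  ks1  tbl1  1024  4096  bytes  25.00%"
new = "Compaction  ks1  tbl1  1024  4096  bytes  25.00%  12"  # sstable count appended

for line in (old, new):
    fields = line.split()
    keyspace, progress = fields[1], fields[6]  # front-indexed fields still line up
    print(keyspace, progress)

# A parser that uses fields[-1] or asserts a fixed field count would still need
# updating, which is the "hope for the best" caveat.
{code}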
> Add number of sstables in a compaction to compactionstats output
> 
>
> Key: CASSANDRA-16844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16844
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Brendan Cicchi
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1, 4.1-alpha1
>
>
> It would be helpful to know at a glance how many sstables are involved in any 
> running compactions. While this information can certainly be collected now, a 
> user has to grab it from the debug logs. I think it would be helpful for some 
> use cases to have this information straight from {{nodetool compactionstats}} 
> and then if the actual sstables involved in the compactions are desired, dive 
> into the debug.log for that. I think it would also be good to have this 
> information in the output of {{nodetool compactionhistory}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16844) Add number of sstables in a compaction to compactionstats output

2022-06-07 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551241#comment-17551241
 ] 

Brandon Williams commented on CASSANDRA-16844:
--

FTR, if it comes down to reverting or breaking people, break people.  Progress can't 
stop for awk users; we have to serve the greater good.

> Add number of sstables in a compaction to compactionstats output
> 
>
> Key: CASSANDRA-16844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16844
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Brendan Cicchi
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1, 4.1-alpha1
>
>
> It would be helpful to know at a glance how many sstables are involved in any 
> running compactions. While this information can certainly be collected now, a 
> user has to grab it from the debug logs. I think it would be helpful for some 
> use cases to have this information straight from {{nodetool compactionstats}} 
> and then if the actual sstables involved in the compactions are desired, dive 
> into the debug.log for that. I think it would also be good to have this 
> information in the output of {{nodetool compactionhistory}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16844) Add number of sstables in a compaction to compactionstats output

2022-06-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551242#comment-17551242
 ] 

Andres de la Peña commented on CASSANDRA-16844:
---

1) Could be reverting in 4.1 and keeping it in trunk for the next major, as 
originally intended. Probably with a note in NEWS.txt.

2) Seems a bit cumbersome, and we would probably want to get rid of the flag on 
trunk.

3) Moving to the end makes compatibility better in most cases but it doesn't 
guarantee it.

> Add number of sstables in a compaction to compactionstats output
> 
>
> Key: CASSANDRA-16844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16844
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Brendan Cicchi
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1, 4.1-alpha1
>
>
> It would be helpful to know at a glance how many sstables are involved in any 
> running compactions. While this information can certainly be collected now, a 
> user has to grab it from the debug logs. I think it would be helpful for some 
> use cases to have this information straight from {{nodetool compactionstats}} 
> and then if the actual sstables involved in the compactions are desired, dive 
> into the debug.log for that. I think it would also be good to have this 
> information in the output of {{nodetool compactionhistory}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16844) Add number of sstables in a compaction to compactionstats output

2022-06-07 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551240#comment-17551240
 ] 

Brandon Williams commented on CASSANDRA-16844:
--

I'm fine with 3 in that case.

> Add number of sstables in a compaction to compactionstats output
> 
>
> Key: CASSANDRA-16844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16844
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Brendan Cicchi
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1, 4.1-alpha1
>
>
> It would be helpful to know at a glance how many sstables are involved in any 
> running compactions. While this information can certainly be collected now, a 
> user has to grab it from the debug logs. I think it would be helpful for some 
> use cases to have this information straight from {{nodetool compactionstats}} 
> and then if the actual sstables involved in the compactions are desired, dive 
> into the debug.log for that. I think it would also be good to have this 
> information in the output of {{nodetool compactionhistory}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16844) Add number of sstables in a compaction to compactionstats output

2022-06-07 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551238#comment-17551238
 ] 

David Capwell commented on CASSANDRA-16844:
---

I did mean revert; I was reacting to this comment:

bq.  I think that probably the only way to guarantee compatibility with 3rd 
party tools parsing text output is not making changes at all

I added an option 3: move the new column to the end and hope that doesn't break people.

> Add number of sstables in a compaction to compactionstats output
> 
>
> Key: CASSANDRA-16844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16844
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Brendan Cicchi
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1, 4.1-alpha1
>
>
> It would be helpful to know at a glance how many sstables are involved in any 
> running compactions. While this information can certainly be collected now, a 
> user has to grab it from the debug logs. I think it would be helpful for some 
> use cases to have this information straight from {{nodetool compactionstats}} 
> and then if the actual sstables involved in the compactions are desired, dive 
> into the debug.log for that. I think it would also be good to have this 
> information in the output of {{nodetool compactionhistory}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16844) Add number of sstables in a compaction to compactionstats output

2022-06-07 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551237#comment-17551237
 ] 

Brandon Williams commented on CASSANDRA-16844:
--

1) meaning reorder?  I'm fine with that, presuming that's what you mean.

> Add number of sstables in a compaction to compactionstats output
> 
>
> Key: CASSANDRA-16844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16844
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Brendan Cicchi
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1, 4.1-alpha1
>
>
> It would be helpful to know at a glance how many sstables are involved in any 
> running compactions. While this information can certainly be collected now, a 
> user has to grab it from the debug logs. I think it would be helpful for some 
> use cases to have this information straight from {{nodetool compactionstats}} 
> and then if the actual sstables involved in the compactions are desired, dive 
> into the debug.log for that. I think it would also be good to have this 
> information in the output of {{nodetool compactionhistory}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-16844) Add number of sstables in a compaction to compactionstats output

2022-06-07 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551234#comment-17551234
 ] 

David Capwell edited comment on CASSANDRA-16844 at 6/7/22 7:02 PM:
---

So how do people feel about moving forward?  I see the following options

1) revert
2) add behind a feature flag (something like nodetool compactionstats 
--show-sstables)
3) move to the end and hope for the best

thoughts?


was (Author: dcapwell):
So how do people feel about moving forward?  I see the following options

1) revert
2) add behind a feature flag (something like nodetool compactionstats 
--show-sstables)

thoughts?

> Add number of sstables in a compaction to compactionstats output
> 
>
> Key: CASSANDRA-16844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16844
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Brendan Cicchi
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1, 4.1-alpha1
>
>
> It would be helpful to know at a glance how many sstables are involved in any 
> running compactions. While this information can certainly be collected now, a 
> user has to grab it from the debug logs. I think it would be helpful for some 
> use cases to have this information straight from {{nodetool compactionstats}} 
> and then if the actual sstables involved in the compactions are desired, dive 
> into the debug.log for that. I think it would also be good to have this 
> information in the output of {{nodetool compactionhistory}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16844) Add number of sstables in a compaction to compactionstats output

2022-06-07 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551234#comment-17551234
 ] 

David Capwell commented on CASSANDRA-16844:
---

So how do people feel about moving forward?  I see the following options

1) revert
2) add behind a feature flag (something like nodetool compactionstats 
--show-sstables)

thoughts?

> Add number of sstables in a compaction to compactionstats output
> 
>
> Key: CASSANDRA-16844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16844
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Brendan Cicchi
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1, 4.1-alpha1
>
>
> It would be helpful to know at a glance how many sstables are involved in any 
> running compactions. While this information can certainly be collected now, a 
> user has to grab it from the debug logs. I think it would be helpful for some 
> use cases to have this information straight from {{nodetool compactionstats}} 
> and then if the actual sstables involved in the compactions are desired, dive 
> into the debug.log for that. I think it would also be good to have this 
> information in the output of {{nodetool compactionhistory}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17666) Option to disable write path during streaming for CDC enabled tables

2022-06-07 Thread Yifan Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifan Cai updated CASSANDRA-17666:
--
  Fix Version/s: 4.2
Source Control Link: 
https://github.com/apache/cassandra/commit/99d034a2245c44becb6a730c77ad51ab9340f3a7
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

Committed into trunk as 
[99d034a|https://github.com/apache/cassandra/commit/99d034a2245c44becb6a730c77ad51ab9340f3a7]

> Option to disable write path during streaming for CDC enabled tables
> 
>
> Key: CASSANDRA-17666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17666
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Change Data Capture
>Reporter: Yifan Cai
>Assignee: Yifan Cai
>Priority: Normal
> Fix For: 4.2
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> For the CDC-enabled tables, a special write path is employed during 
> streaming: the mutations streamed are written into the commit log first. 
> There are scenarios where the commit logs can accumulate, which leads to 
> streaming failures and blocked writes. 
> I'd like to propose adding a dynamic toggle to disable the special write path 
> for CDC during streaming. 
> Please note that the toggle is a trade-off, because the special write path is 
> there to help ensure data consistency. Turning it off allows the 
> streaming to pass, but in some extreme scenarios the downstream CDC 
> consumers may have holes in the stream, depending on how they consume the 
> commit logs.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] branch trunk updated: Option to disable CDC on SSTable repair

2022-06-07 Thread ycai
This is an automated email from the ASF dual-hosted git repository.

ycai pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 99d034a224 Option to disable CDC on SSTable repair
99d034a224 is described below

commit 99d034a2245c44becb6a730c77ad51ab9340f3a7
Author: Yifan Cai 
AuthorDate: Mon Jun 6 13:15:33 2022 -0700

Option to disable CDC on SSTable repair

patch by Yifan Cai; reviewed by Josh McKenzie for CASSANDRA-17666
---
 CHANGES.txt|  1 +
 NEWS.txt   |  6 ++
 conf/cassandra.yaml| 12 +++
 src/java/org/apache/cassandra/config/Config.java   |  3 +
 .../cassandra/config/DatabaseDescriptor.java   | 10 +++
 .../apache/cassandra/db/commitlog/CommitLog.java   | 29 +--
 .../cassandra/db/commitlog/CommitLogMBean.java |  6 ++
 .../db/streaming/CassandraStreamReceiver.java  | 25 --
 .../test/cdc/ToggleCDCOnRepairEnabledTest.java | 97 ++
 9 files changed, 175 insertions(+), 14 deletions(-)

diff --git a/CHANGES.txt b/CHANGES.txt
index d6b4ff5ab9..9e31bd96e6 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.2
+ * Option to disable CDC writes of repaired data (CASSANDRA-17666)
  * When a node is bootstrapping it gets the whole gossip state but applies in 
random order causing some cases where StorageService will fail causing an 
instance to not show up in TokenMetadata (CASSANDRA-17676)
  * Add CQLSH command SHOW REPLICAS (CASSANDRA-17577)
  * Add guardrail to allow disabling of SimpleStrategy (CASSANDRA-17647)
diff --git a/NEWS.txt b/NEWS.txt
index 5a52c6e3ba..996113d7c7 100644
--- a/NEWS.txt
+++ b/NEWS.txt
@@ -57,6 +57,12 @@ using the provided 'sstableupgrade' tool.
 
 New features
 
+- Added a new configuration cdc_on_repair_enabled to toggle whether CDC 
mutations are replayed through the
+  write path on streaming, e.g. repair. When enabled, CDC data streamed to 
the destination node will be written into
+  commit log first. When disabled, the streamed CDC data is written into 
SSTables just the same as normal streaming.
+  If this is set to false, streaming will be considerably faster however 
it's possible that, in extreme situations
+  (losing > quorum # nodes in a replica set), you may have data in your 
SSTables that never makes it to the CDC log.
+  The default is true/enabled. The configuration can be altered via JMX.
 - Added a new CQL function, maxwritetime. It shows the largest unix 
timestamp that the data was written, similar to
   its sibling CQL function, writetime. Unlike writetime, maxwritetime can 
be applied to multi-cell data types, e.g.
   non-frozen collections and UDT, and returns the largest timestamp. One 
should not to use it when upgrading to 4.2.
diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml
index 491740f012..3bab6712c8 100644
--- a/conf/cassandra.yaml
+++ b/conf/cassandra.yaml
@@ -298,6 +298,18 @@ partitioner: org.apache.cassandra.dht.Murmur3Partitioner
 # containing a CDC-enabled table if at space limit in cdc_raw_directory).
 cdc_enabled: false
 
+# Specify whether writes to the CDC-enabled tables should be blocked when CDC 
data on disk has reached to the limit.
+# When setting to false, the writes will not be blocked and the oldest CDC 
data on disk will be deleted to
+# ensure the size constraint. The default is true.
+# cdc_block_writes: true
+
+# Specify whether CDC mutations are replayed through the write path on 
streaming, e.g. repair.
+# When enabled, CDC data streamed to the destination node will be written into 
commit log first. When setting to false,
+# the streamed CDC data is written into SSTables just the same as normal 
streaming. The default is true.
+# If this is set to false, streaming will be considerably faster however it's 
possible that, in extreme situations
+# (losing > quorum # nodes in a replica set), you may have data in your 
SSTables that never makes it to the CDC log.
+# cdc_on_repair_enabled: true
+
 # CommitLogSegments are moved to this directory on flush if cdc_enabled: true 
and the
 # segment contains mutations for a CDC-enabled table. This should be placed on 
a
 # separate spindle than the data directories. If not set, the default 
directory is
diff --git a/src/java/org/apache/cassandra/config/Config.java 
b/src/java/org/apache/cassandra/config/Config.java
index c3c5b3582c..3d2dbb7b40 100644
--- a/src/java/org/apache/cassandra/config/Config.java
+++ b/src/java/org/apache/cassandra/config/Config.java
@@ -380,6 +380,9 @@ public class Config
 // When true, new CDC mutations are rejected/blocked when reaching max CDC 
storage.
 // When false, new CDC mutations can always be added. But it will remove 
the oldest CDC commit log segment on full.
 public volatile boolean 

[jira] [Commented] (CASSANDRA-17677) Fix BulkLoader to load entireSSTableThrottle and entireSSTableInterDcThrottle

2022-06-07 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551218#comment-17551218
 ] 

Ekaterina Dimitrova commented on CASSANDRA-17677:
-

Marking as needs committer as David won’t have the chance to take a look at 
this one in the next two weeks. 

> Fix BulkLoader to load  entireSSTableThrottle and entireSSTableInterDcThrottle
> --
>
> Key: CASSANDRA-17677
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17677
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/bulk load
>Reporter: Ekaterina Dimitrova
>Assignee: Francisco Guerrero
>Priority: Normal
> Fix For: 4.1-beta, 4.1.x, 4.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{entire_sstable_stream_throughput_outbound}} and 
> {{entire_sstable_inter_dc_stream_throughput_outbound}} were introduced in 
> CASSANDRA-17065. They were added to the LoaderOptions class but they are not 
> loaded in BulkLoader as {{throttle}} and {{interDcThrottle}} are. As part 
> of this ticket we need to fix the BulkLoader; also those properties should be 
> advertised as MiB/s, not megabits/s. This was not changed in CASSANDRA-15234 
> for the bulk loader because those are not loaded and those variables in 
> LoaderOptions are disconnected from the Cassandra config parameters and 
> unused at the moment.
> It will be good also to update the doc here - 
> https://cassandra.apache.org/doc/latest/cassandra/operating/bulk_loading.html
> and add a test that those are loaded properly when used with the BulkLoader.
> CC [~frankgh]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17677) Fix BulkLoader to load entireSSTableThrottle and entireSSTableInterDcThrottle

2022-06-07 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-17677:

Status: Needs Committer  (was: Review In Progress)

> Fix BulkLoader to load  entireSSTableThrottle and entireSSTableInterDcThrottle
> --
>
> Key: CASSANDRA-17677
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17677
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/bulk load
>Reporter: Ekaterina Dimitrova
>Assignee: Francisco Guerrero
>Priority: Normal
> Fix For: 4.1-beta, 4.1.x, 4.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{entire_sstable_stream_throughput_outbound}} and 
> {{entire_sstable_inter_dc_stream_throughput_outbound}} were introduced in 
> CASSANDRA-17065. They were added to the LoaderOptions class but they are not 
> loaded in BulkLoader as {{throttle}} and {{interDcThrottle}} are. As part 
> of this ticket we need to fix the BulkLoader; also those properties should be 
> advertised as MiB/s, not megabits/s. This was not changed in CASSANDRA-15234 
> for the bulk loader because those are not loaded and those variables in 
> LoaderOptions are disconnected from the Cassandra config parameters and 
> unused at the moment.
> It will be good also to update the doc here - 
> https://cassandra.apache.org/doc/latest/cassandra/operating/bulk_loading.html
> and add a test that those are loaded properly when used with the BulkLoader.
> CC [~frankgh]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17666) Option to disable write path during streaming for CDC enabled tables

2022-06-07 Thread Yifan Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifan Cai updated CASSANDRA-17666:
--
Status: Ready to Commit  (was: Review In Progress)

> Option to disable write path during streaming for CDC enabled tables
> 
>
> Key: CASSANDRA-17666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17666
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Change Data Capture
>Reporter: Yifan Cai
>Assignee: Yifan Cai
>Priority: Normal
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> For the CDC-enabled tables, a special write path is employed during 
> streaming: the mutations streamed are written into the commit log first. 
> There are scenarios where the commit logs can accumulate, which leads to 
> streaming failures and blocked writes. 
> I'd like to propose adding a dynamic toggle to disable the special write path 
> for CDC during streaming. 
> Please note that the toggle is a trade-off, because the special write path is 
> there to help ensure data consistency. Turning it off allows the 
> streaming to pass, but in some extreme scenarios the downstream CDC 
> consumers may have holes in the stream, depending on how they consume the 
> commit logs.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17666) Option to disable write path during streaming for CDC enabled tables

2022-06-07 Thread Yifan Cai (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551212#comment-17551212
 ] 

Yifan Cai commented on CASSANDRA-17666:
---

Starting commit

CI Results:
||Branch||Source||Circle CI||
|trunk|[branch|https://github.com/yifan-c/cassandra/tree/commit_remote_branch/CASSANDRA-17666-trunk-C82C970A-1A41-43E9-8A1C-72EFD3B80A5F]|[build|https://app.circleci.com/pipelines/github/yifan-c/cassandra?branch=commit_remote_branch%2FCASSANDRA-17666-trunk-C82C970A-1A41-43E9-8A1C-72EFD3B80A5F]|

CI is all green.

> Option to disable write path during streaming for CDC enabled tables
> 
>
> Key: CASSANDRA-17666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17666
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Change Data Capture
>Reporter: Yifan Cai
>Assignee: Yifan Cai
>Priority: Normal
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> For the CDC-enabled tables, a special write path is employed during 
> streaming: the mutations streamed are written into the commit log first. 
> There are scenarios where the commit logs can accumulate, which leads to 
> streaming failures and blocked writes. 
> I'd like to propose adding a dynamic toggle to disable the special write path 
> for CDC during streaming. 
> Please note that the toggle is a trade-off, because the special write path is 
> there to help ensure data consistency. Turning it off allows the 
> streaming to pass, but in some extreme scenarios the downstream CDC 
> consumers may have holes in the stream, depending on how they consume the 
> commit logs.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17666) Option to disable write path during streaming for CDC enabled tables

2022-06-07 Thread Yifan Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifan Cai updated CASSANDRA-17666:
--
Status: Review In Progress  (was: Patch Available)

> Option to disable write path during streaming for CDC enabled tables
> 
>
> Key: CASSANDRA-17666
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17666
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Change Data Capture
>Reporter: Yifan Cai
>Assignee: Yifan Cai
>Priority: Normal
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> For the CDC-enabled tables, a special write path is employed during 
> streaming: the mutations streamed are written into the commit log first. 
> There are scenarios where the commit logs can accumulate, which leads to 
> streaming failures and blocked writes. 
> I'd like to propose adding a dynamic toggle to disable the special write path 
> for CDC during streaming. 
> Please note that the toggle is a trade-off, because the special write path is 
> there to help ensure data consistency. Turning it off allows the 
> streaming to pass, but in some extreme scenarios the downstream CDC 
> consumers may have holes in the stream, depending on how they consume the 
> commit logs.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra-website] branch asf-staging updated (683cbbe9b -> 6ba5f89b8)

2022-06-07 Thread git-site-role
This is an automated email from the ASF dual-hosted git repository.

git-site-role pushed a change to branch asf-staging
in repository https://gitbox.apache.org/repos/asf/cassandra-website.git


 discard 683cbbe9b generate docs for 700ff74c
 new 6ba5f89b8 generate docs for 700ff74c

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (683cbbe9b)
\
 N -- N -- N   refs/heads/asf-staging (6ba5f89b8)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 content/search-index.js |   2 +-
 site-ui/build/ui-bundle.zip | Bin 4740078 -> 4740078 bytes
 2 files changed, 1 insertion(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17145) utests_system_keyspace_directory is not run in Jenkins

2022-06-07 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-17145:
--
Summary: utests_system_keyspace_directory is not run in Jenkins  (was: 
utests_system_keyspace_directory and not run in Jenkins)

> utests_system_keyspace_directory is not run in Jenkins
> --
>
> Key: CASSANDRA-17145
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17145
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CI
>Reporter: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 4.0.x, 4.x
>
>
> It seems to me that utests_system_keyspace_directory, which we run on 4.0 and 
> trunk in CircleCI, is not run in Jenkins: the tests all fail in trunk now in 
> CircleCI while nothing shows up in Jenkins. The issue is not environmental and 
> I don't see that group of tests under Stage View in Jenkins, so they are 
> probably not added there? We need to investigate that further.
> CC [~mck]  and [~blerer] if they know anything about those - like work in 
> progress to be added or a reason not to be added.
> FYI [~bereng] 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16844) Add number of sstables in a compaction to compactionstats output

2022-06-07 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551207#comment-17551207
 ] 

Brandon Williams commented on CASSANDRA-16844:
--

bq. Found this patch and saw a tool using awk positional parsing, so would break

We should have really fleshed out CASSANDRA-5977 to the rest of nodetool so we 
wouldn't need to worry about this and neither would tool authors, but I guess 
virtual tables will eventually solve it.

> Add number of sstables in a compaction to compactionstats output
> 
>
> Key: CASSANDRA-16844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16844
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Brendan Cicchi
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1, 4.1-alpha1
>
>
> It would be helpful to know at a glance how many sstables are involved in any 
> running compactions. While this information can certainly be collected now, a 
> user has to grab it from the debug logs. I think it would be helpful for some 
> use cases to have this information straight from {{nodetool compactionstats}} 
> and then if the actual sstables involved in the compactions are desired, dive 
> into the debug.log for that. I think it would also be good to have this 
> information in the output of {{nodetool compactionhistory}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16844) Add number of sstables in a compaction to compactionstats output

2022-06-07 Thread David Capwell (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551204#comment-17551204
 ] 

David Capwell commented on CASSANDRA-16844:
---

bq.  I think that probably the only way to guarantee compatibility with 3rd 
party tools parsing text output is not making changes at all.

A workaround may be to hide this behind a flag, so you do `nodetool 
compactionstats --show-sstables` and you get this output; otherwise you get the 
previous output?

bq.  I guess some 3rd party tools out there could also find issues if they find 
an unexpected column at the end depending on what they expect when parsing

Found this patch and saw a tool using awk positional parsing, so it would break.
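
For illustration only, here is a tiny sketch of why positional parsing is fragile: a 
hypothetical consumer (the column layout is made up for the sketch, not the exact 
nodetool header) picks a field by index, the way an awk '{print $7}' script does. 
Inserting the new column mid-row silently shifts that index, while appending it at 
the end leaves existing indexes alone.

{code}
import java.util.Arrays;
import java.util.List;

public class PositionalParsingSketch
{
    public static void main(String[] args)
    {
        // Hypothetical compaction rows; the column layout is invented for the sketch.
        String oldRow    = "Compaction ks tbl 123 456 bytes 26.97%";       // original 7 fields
        String newRowMid = "Compaction ks tbl 12 123 456 bytes 26.97%";    // sstable count inserted mid-row
        String newRowEnd = "Compaction ks tbl 123 456 bytes 26.97% 12";    // sstable count appended

        int progressField = 6; // the consumer hard-codes "progress is the 7th field"
        System.out.println(field(oldRow, progressField));    // 26.97%  -> what the tool expects
        System.out.println(field(newRowMid, progressField)); // bytes   -> silently wrong
        System.out.println(field(newRowEnd, progressField)); // 26.97%  -> still correct
    }

    private static String field(String row, int index)
    {
        // Split on whitespace, exactly like awk's default field splitting.
        List<String> fields = Arrays.asList(row.trim().split("\\s+"));
        return fields.get(index);
    }
}
{code}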


> Add number of sstables in a compaction to compactionstats output
> 
>
> Key: CASSANDRA-16844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16844
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Brendan Cicchi
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1, 4.1-alpha1
>
>
> It would be helpful to know at a glance how many sstables are involved in any 
> running compactions. While this information can certainly be collected now, a 
> user has to grab it from the debug logs. I think it would be helpful for some 
> use cases to have this information straight from {{nodetool compactionstats}} 
> and then if the actual sstables involved in the compactions are desired, dive 
> into the debug.log for that. I think it would also be good to have this 
> information in the output of {{nodetool compactionhistory}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17584) Fix flaky test - org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions

2022-06-07 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551180#comment-17551180
 ] 

Stefan Miklosovic commented on CASSANDRA-17584:
---

[~adelapena] yes, just pushed. I'll provide circles soon.

> Fix flaky test - 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions
> --
>
> Key: CASSANDRA-17584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17584
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Virtual Tables
>Reporter: Brandon Williams
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 4.1-beta, 4.x
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> h3. Error Message
> ['LOAD' is expected to be null] expected: but was:
> h3. Stacktrace
> junit.framework.AssertionFailedError: ['LOAD' is expected to be null] 
> expected: but was: at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.assertValue(GossipInfoTableTest.java:174)
>  at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions(GossipInfoTableTest.java:96)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16844) Add number of sstables in a compaction to compactionstats output

2022-06-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551174#comment-17551174
 ] 

Andres de la Peña commented on CASSANDRA-16844:
---

[~dcapwell] the reason for adding the column in the middle is in the first comment 
of this ticket; it's an attempt at improving readability for humans.

That comment is followed by some discussion about compatibility. When this was 
committed it wasn't clear whether it was going into a minor or a major. Now 
that we know it's not in a major, I'm not sure whether we are breaking 
compatibility or not when changing nodetool's parseable output, even if we add 
the new column at the end. I guess some 3rd party tools out there could also 
find issues if they find an unexpected column at the end, depending on what they 
expect when parsing, while some other implementations could support both new 
columns and changes in their order if they use the column headers. I think that 
probably the only way to guarantee compatibility with 3rd party tools parsing 
text output is not making changes at all.

> Add number of sstables in a compaction to compactionstats output
> 
>
> Key: CASSANDRA-16844
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16844
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Brendan Cicchi
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1, 4.1-alpha1
>
>
> It would be helpful to know at a glance how many sstables are involved in any 
> running compactions. While this information can certainly be collected now, a 
> user has to grab it from the debug logs. I think it would be helpful for some 
> use cases to have this information straight from {{nodetool compactionstats}} 
> and then if the actual sstables involved in the compactions are desired, dive 
> into the debug.log for that. I think it would also be good to have this 
> information in the output of {{nodetool compactionhistory}}.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17679) Make resumable bootstrap feature optional

2022-06-07 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551167#comment-17551167
 ] 

Brandon Williams commented on CASSANDRA-17679:
--

Yeah, and if you aren't sure whether there's a problem but want to test, that makes 
it a bit easier.

> Make resumable bootstrap feature optional
> -
>
> Key: CASSANDRA-17679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17679
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Consistency/Streaming
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
> Fix For: 4.x
>
>
> From the patch I'm working on:
> {code}
> # In certain environments, operators may want to disable resumable bootstrap 
> in order to avoid potential correctness
> # violations or data loss scenarios. Largely this centers around nodes going 
> down during bootstrap, tombstones being
> # written, and potential races with repair. By default we leave this on as 
> it's been enabled for quite some time,
> # however the option to disable it is more palatable now that we have zero 
> copy streaming as that greatly accelerates
> # bootstraps. This defaults to true.
> # resumable_bootstrap_enabled: true
> {code}
> Not really a great fit for guardrails as it's less a "feature to be toggled 
> on and off" and more a subset of a specific feature that in certain 
> circumstances can lead to issues.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17679) Make resumable bootstrap feature optional

2022-06-07 Thread Josh McKenzie (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551155#comment-17551155
 ] 

Josh McKenzie commented on CASSANDRA-17679:
---

You're thinking a -D on top of the cassandra.yaml entry? Suppose I can see that 
for spot operational workarounds.

> Make resumable bootstrap feature optional
> -
>
> Key: CASSANDRA-17679
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17679
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Consistency/Streaming
>Reporter: Josh McKenzie
>Assignee: Josh McKenzie
>Priority: Normal
> Fix For: 4.x
>
>
> From the patch I'm working on:
> {code}
> # In certain environments, operators may want to disable resumable bootstrap 
> in order to avoid potential correctness
> # violations or data loss scenarios. Largely this centers around nodes going 
> down during bootstrap, tombstones being
> # written, and potential races with repair. By default we leave this on as 
> it's been enabled for quite some time,
> # however the option to disable it is more palatable now that we have zero 
> copy streaming as that greatly accelerates
> # bootstraps. This defaults to true.
> # resumable_bootstrap_enabled: true
> {code}
> Not really a great fit for guardrails as it's less a "feature to be toggled 
> on and off" and more a subset of a specific feature that in certain 
> circumstances can lead to issues.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17625) Test Failure: dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from Cassandra dtests)

2022-06-07 Thread Josh McKenzie (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551146#comment-17551146
 ] 

Josh McKenzie commented on CASSANDRA-17625:
---

Formally +1'ing this after barging in here like that is the least I can do.

+1 to taking timeout from 30-60. Patch looks good.

> Test Failure: 
> dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from 
> Cassandra dtests)
> ---
>
> Key: CASSANDRA-17625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17625
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Josh McKenzie
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.1-beta, 4.1.x, 4.x
>
>
> Flaked a couple times on 4.1
> {code}
> Error Message
> cassandra.DriverException: Keyspace metadata was not refreshed. See log for 
> details.
> {code}
> https://ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/
> Nightlies archive if above dropped: 
> https://nightlies.apache.org/cassandra/ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17625) Test Failure: dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from Cassandra dtests)

2022-06-07 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-17625:
--
Status: Ready to Commit  (was: Review In Progress)

> Test Failure: 
> dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from 
> Cassandra dtests)
> ---
>
> Key: CASSANDRA-17625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17625
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Josh McKenzie
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.1-beta, 4.1.x, 4.x
>
>
> Flaked a couple times on 4.1
> {code}
> Error Message
> cassandra.DriverException: Keyspace metadata was not refreshed. See log for 
> details.
> {code}
> https://ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/
> Nightlies archive if above dropped: 
> https://nightlies.apache.org/cassandra/ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17584) Fix flaky test - org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions

2022-06-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551148#comment-17551148
 ] 

Andres de la Peña commented on CASSANDRA-17584:
---

[~smiklosovic] just answered that. I see that the other conversations are 
marked as resolved, but I don't see any changes or comments about those nits. 
Maybe you forgot to push the changes?

> Fix flaky test - 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions
> --
>
> Key: CASSANDRA-17584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17584
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Virtual Tables
>Reporter: Brandon Williams
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 4.1-beta, 4.x
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> h3. Error Message
> ['LOAD' is expected to be null] expected: but was:
> h3. Stacktrace
> junit.framework.AssertionFailedError: ['LOAD' is expected to be null] 
> expected: but was: at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.assertValue(GossipInfoTableTest.java:174)
>  at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions(GossipInfoTableTest.java:96)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17625) Test Failure: dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from Cassandra dtests)

2022-06-07 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-17625:
--
Reviewers: Josh McKenzie, Josh McKenzie  (was: Josh McKenzie)
   Josh McKenzie, Josh McKenzie  (was: Josh McKenzie)
   Status: Review In Progress  (was: Patch Available)

> Test Failure: 
> dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from 
> Cassandra dtests)
> ---
>
> Key: CASSANDRA-17625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17625
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Josh McKenzie
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.1-beta, 4.1.x, 4.x
>
>
> Flaked a couple times on 4.1
> {code}
> Error Message
> cassandra.DriverException: Keyspace metadata was not refreshed. See log for 
> details.
> {code}
> https://ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/
> Nightlies archive if above dropped: 
> https://nightlies.apache.org/cassandra/ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17625) Test Failure: dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from Cassandra dtests)

2022-06-07 Thread Josh McKenzie (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh McKenzie updated CASSANDRA-17625:
--
Reviewers: Josh McKenzie

> Test Failure: 
> dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from 
> Cassandra dtests)
> ---
>
> Key: CASSANDRA-17625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17625
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Josh McKenzie
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.1-beta, 4.1.x, 4.x
>
>
> Flaked a couple times on 4.1
> {code}
> Error Message
> cassandra.DriverException: Keyspace metadata was not refreshed. See log for 
> details.
> {code}
> https://ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/
> Nightlies archive if above dropped: 
> https://nightlies.apache.org/cassandra/ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17669) CentOS/RHEL installation requires JRE not available in Java 11

2022-06-07 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551116#comment-17551116
 ] 

Brandon Williams commented on CASSANDRA-17669:
--

Yes, I tested it on almalinux 8.

> CentOS/RHEL installation requires JRE not available in Java 11
> --
>
> Key: CASSANDRA-17669
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17669
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies
>Reporter: Erick Ramirez
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.1.x
>
>
> h2. Background
> A user [reported on Stack 
> Overflow|https://stackoverflow.com/questions/72377621/] and the DataStax 
> Developers [dtsx.io/discord|https://dtsx.io/discord] an issue with installing 
> Cassandra when only Java 11 is installed.
> h2. Symptoms
> Attempts to install Cassandra using YUM require Java 8:
> {noformat}
> $ sudo yum install cassandra
> Dependencies resolved.
> 
>  Package  Architecture
> Version  Repository  
> Size
> 
> Installing:
>  cassandranoarch  
> 4.0.4-1  cassandra   
> 45 M
> Installing dependencies:
>  java-1.8.0-openjdk   x86_64  
> 1:1.8.0.312.b07-2.el8_5  appstream  
> 341 k
>  java-1.8.0-openjdk-headless  x86_64  
> 1:1.8.0.312.b07-2.el8_5  appstream   
> 34 M
> Installing weak dependencies:
>  gtk2 x86_64  
> 2.24.32-5.el8appstream  
> 3.4 M
> Transaction Summary
> 
> Install  4 Packages
> {noformat}
> Similarly, attempts to install the RPM results in:
> {noformat}
> $ sudo rpm -i cassandra-4.0.4-1.noarch.rpm 
> warning: cassandra-4.0.4-1.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 
> 7e3e87cb: NOKEY
> error: Failed dependencies:
>   jre >= 1.8.0 is needed by cassandra-4.0.4-1.noarch{noformat}
> h2. Root cause
> Package installs on CentOS and RHEL platforms have [a dependency on JRE 
> 1.8+|https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L49]:
> {noformat}
> Requires:  jre >= 1.8.0{noformat}
> However, JRE is no longer available in Java 11. From the [JDK 11 release 
> notes|https://www.oracle.com/java/technologies/javase/11-relnote-issues.html]:
> {quote}In this release, the JRE or Server JRE is no longer offered. Only the 
> JDK is offered.
> {quote}
> h2. Workaround
> Override the dependency check when installing the RPM with the {{--nodeps}} 
> option:
> {noformat}
> $ sudo rpm --nodeps -i cassandra-4.0.4-1.noarch.rpm {noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17458) Test Failure: org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie

2022-06-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551114#comment-17551114
 ] 

Andres de la Peña commented on CASSANDRA-17458:
---

Created CASSANDRA-17685 and CASSANDRA-17686 for those last test failures.

> Test Failure: 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie
> ---
>
> Key: CASSANDRA-17458
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17458
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Andres de la Peña
>Assignee: Sathyanarayanan Saravanamuthu
>Priority: Normal
>  Labels: patch-available
> Fix For: 4.1-alpha, 4.2
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Intermittent failure on 
> {{org.apache.cassandra.db.SinglePartitionSliceCommandTest#testPartitionDeletionRangeDeletionTie}}
>  for trunk:
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1024/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1018/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> {code:java}
> Failed 1 times in the last 11 runs. Flakiness: 10%, Stability: 90%
> Error Message
> Expected [Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11]] but got [Marker 
> INCL_START_BOUND(1, 1)@10/1647704834, Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 
> ts=11], Marker INCL_END_BOUND(1, 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> 
> but was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, 
> [[v=1 ts=11]], 
> org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
> Stacktrace
> junit.framework.AssertionFailedError: Expected [Row[info=[ts=11] ]: c1=1, 
> c2=1 | [v=1 ts=11]] but got [Marker INCL_START_BOUND(1, 1)@10/1647704834, 
> Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11], Marker INCL_END_BOUND(1, 
> 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> but 
> was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, [[v=1 
> ts=11]], org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
>   at 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie(SinglePartitionSliceCommandTest.java:463)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Standard Output
> INFO  [main] 2022-03-19 15:51:43,646 YamlConfigurationLoader.java:103 - 
> Configuration location: 
> file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2022-03-19 15:51:43,653 YamlConfigurationLoader.java:124 - 
> Loading settings from file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> INFO  [main] 2022-03-19 15:51:43,971 Config.java:1119 - Node 
> configuration:[allocate_tokens_for_keyspace=null; 
> allocate_tokens_for_local_replication_factor=null; 
> allow_extra_insecure_udfs=false; all
> ...[truncated 192995 chars]...
> ome/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-37-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-39-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-38-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-40-big-Data.db:level=0,
>  ]
> {code}
> Failures can also be hit with CircleCI test multiplexer:
> [https://app.circleci.com/pipelines/github/adelapena/cassandra/1387/workflows/0f37a726-1dc2-4584-86f9-e99ecc40f551]
> CircleCI results show failures on three separate assertions, with a ~3% 
> flakiness.
> The same test looks ok in 4.0, as suggested by Butler and [this repeated 
> Circle 
> run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1388/workflows/6b69d654-3d19-4f2a-aeb9-dc405c6ddd2b].



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17686) Test failure: repair_tests/preview_repair_test.py::TestPreviewRepair::test_preview

2022-06-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres de la Peña updated CASSANDRA-17686:
--
Fix Version/s: 4.x

> Test failure: 
> repair_tests/preview_repair_test.py::TestPreviewRepair::test_preview
> --
>
> Key: CASSANDRA-17686
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17686
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Andres de la Peña
>Priority: Normal
> Fix For: 4.x
>
>
> The Python dtest 
> {{repair_tests/preview_repair_test.py::TestPreviewRepair::test_preview}} is 
> flaky at least in {{trunk}}, with a flakiness < 1%.
> I haven't seen the failure on Jenkins but on [this CircleCI 
> run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1662/workflows/b5753cdb-2d08-44d0-9caf-79b5fd0b01f4]
>  for CASSANDRA-17458.
> The failure can also be [reproduced in the 
> multiplexer|https://app.circleci.com/pipelines/github/adelapena/cassandra/1667/workflows/60ba0ade-7e4e-4728-a7ff-3872f2a1903c],
>  with 7 failures in 5000 iterations:
> {code}
> test teardown failure
> Unexpected error found in node logs (see stdout for full details). Errors: 
> [[node3] 'ERROR [CompactionExecutor:3] 2022-06-06 10:58:08,720 
> JVMStabilityInspector.java:68 - Exception in thread 
> Thread[CompactionExecutor:3,5,CompactionExecutor]\njava.util.ConcurrentModificationException:
>  null\n\tat java.util.HashMap$HashIterator.nextNode(HashMap.java:1469)\n\tat 
> java.util.HashMap$KeyIterator.next(HashMap.java:1493)\n\tat 
> java.util.AbstractCollection.toArray(AbstractCollection.java:141)\n\tat 
> com.google.common.collect.ImmutableSet.copyOf(ImmutableSet.java:211)\n\tat 
> org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getSSTables(SizeTieredCompactionStrategy.java:341)\n\tat
>  
> org.apache.cassandra.db.compaction.PendingRepairManager.getRepairFinishedCompactionTask(PendingRepairManager.java:271)\n\tat
>  
> org.apache.cassandra.db.compaction.PendingRepairManager.getNextRepairFinishedTask(PendingRepairManager.java:359)\n\tat
>  
> org.apache.cassandra.db.compaction.AbstractStrategyHolder$TaskSupplier.getTask(AbstractStrategyHolder.java:65)\n\tat
>  
> org.apache.cassandra.db.compaction.PendingRepairHolder.getNextRepairFinishedTask(PendingRepairHolder.java:159)\n\tat
>  
> org.apache.cassandra.db.compaction.CompactionStrategyManager.getNextBackgroundTask(CompactionStrategyManager.java:200)\n\tat
>  
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:291)\n\tat
>  org.apache.cassandra.concurrent.FutureTask$2.call(FutureTask.java:98)\n\tat 
> org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:47)\n\tat 
> org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:57)\n\tat 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
>  
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat
>  java.lang.Thread.run(Thread.java:748)']
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17686) Test failure: repair_tests/preview_repair_test.py::TestPreviewRepair::test_preview

2022-06-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres de la Peña updated CASSANDRA-17686:
--
Bug Category: Parent values: Correctness(12982)Level 1 values: Test 
Failure(12990)

> Test failure: 
> repair_tests/preview_repair_test.py::TestPreviewRepair::test_preview
> --
>
> Key: CASSANDRA-17686
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17686
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Andres de la Peña
>Priority: Normal
> Fix For: 4.x
>
>
> The Python dtest 
> {{repair_tests/preview_repair_test.py::TestPreviewRepair::test_preview}} is 
> flaky at least in {{trunk}}, with a flakiness < 1%.
> I haven't seen the failure on Jenkins but on [this CircleCI 
> run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1662/workflows/b5753cdb-2d08-44d0-9caf-79b5fd0b01f4]
>  for CASSANDRA-17458.
> The failure can also be [reproduced in the 
> multiplexer|https://app.circleci.com/pipelines/github/adelapena/cassandra/1667/workflows/60ba0ade-7e4e-4728-a7ff-3872f2a1903c],
>  with 7 failures in 5000 iterations:
> {code}
> test teardown failure
> Unexpected error found in node logs (see stdout for full details). Errors: 
> [[node3] 'ERROR [CompactionExecutor:3] 2022-06-06 10:58:08,720 
> JVMStabilityInspector.java:68 - Exception in thread 
> Thread[CompactionExecutor:3,5,CompactionExecutor]\njava.util.ConcurrentModificationException:
>  null\n\tat java.util.HashMap$HashIterator.nextNode(HashMap.java:1469)\n\tat 
> java.util.HashMap$KeyIterator.next(HashMap.java:1493)\n\tat 
> java.util.AbstractCollection.toArray(AbstractCollection.java:141)\n\tat 
> com.google.common.collect.ImmutableSet.copyOf(ImmutableSet.java:211)\n\tat 
> org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getSSTables(SizeTieredCompactionStrategy.java:341)\n\tat
>  
> org.apache.cassandra.db.compaction.PendingRepairManager.getRepairFinishedCompactionTask(PendingRepairManager.java:271)\n\tat
>  
> org.apache.cassandra.db.compaction.PendingRepairManager.getNextRepairFinishedTask(PendingRepairManager.java:359)\n\tat
>  
> org.apache.cassandra.db.compaction.AbstractStrategyHolder$TaskSupplier.getTask(AbstractStrategyHolder.java:65)\n\tat
>  
> org.apache.cassandra.db.compaction.PendingRepairHolder.getNextRepairFinishedTask(PendingRepairHolder.java:159)\n\tat
>  
> org.apache.cassandra.db.compaction.CompactionStrategyManager.getNextBackgroundTask(CompactionStrategyManager.java:200)\n\tat
>  
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:291)\n\tat
>  org.apache.cassandra.concurrent.FutureTask$2.call(FutureTask.java:98)\n\tat 
> org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:47)\n\tat 
> org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:57)\n\tat 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
>  
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat
>  java.lang.Thread.run(Thread.java:748)']
> {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-17686) Test failure: repair_tests/preview_repair_test.py::TestPreviewRepair::test_preview

2022-06-07 Thread Jira
Andres de la Peña created CASSANDRA-17686:
-

 Summary: Test failure: 
repair_tests/preview_repair_test.py::TestPreviewRepair::test_preview
 Key: CASSANDRA-17686
 URL: https://issues.apache.org/jira/browse/CASSANDRA-17686
 Project: Cassandra
  Issue Type: Bug
  Components: Test/dtest/python
Reporter: Andres de la Peña


The Python dtest 
{{repair_tests/preview_repair_test.py::TestPreviewRepair::test_preview}} is 
flaky at least in {{trunk}}, with a flakiness < 1%.

I haven't seen the failure on Jenkins but on [this CircleCI 
run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1662/workflows/b5753cdb-2d08-44d0-9caf-79b5fd0b01f4]
 for CASSANDRA-17458.

The failure can also be [reproduced in the 
multiplexer|https://app.circleci.com/pipelines/github/adelapena/cassandra/1667/workflows/60ba0ade-7e4e-4728-a7ff-3872f2a1903c],
 with 7 failures in 5000 iterations:

{code}
test teardown failure
Unexpected error found in node logs (see stdout for full details). Errors: 
[[node3] 'ERROR [CompactionExecutor:3] 2022-06-06 10:58:08,720 
JVMStabilityInspector.java:68 - Exception in thread 
Thread[CompactionExecutor:3,5,CompactionExecutor]\njava.util.ConcurrentModificationException:
 null\n\tat java.util.HashMap$HashIterator.nextNode(HashMap.java:1469)\n\tat 
java.util.HashMap$KeyIterator.next(HashMap.java:1493)\n\tat 
java.util.AbstractCollection.toArray(AbstractCollection.java:141)\n\tat 
com.google.common.collect.ImmutableSet.copyOf(ImmutableSet.java:211)\n\tat 
org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getSSTables(SizeTieredCompactionStrategy.java:341)\n\tat
 
org.apache.cassandra.db.compaction.PendingRepairManager.getRepairFinishedCompactionTask(PendingRepairManager.java:271)\n\tat
 
org.apache.cassandra.db.compaction.PendingRepairManager.getNextRepairFinishedTask(PendingRepairManager.java:359)\n\tat
 
org.apache.cassandra.db.compaction.AbstractStrategyHolder$TaskSupplier.getTask(AbstractStrategyHolder.java:65)\n\tat
 
org.apache.cassandra.db.compaction.PendingRepairHolder.getNextRepairFinishedTask(PendingRepairHolder.java:159)\n\tat
 
org.apache.cassandra.db.compaction.CompactionStrategyManager.getNextBackgroundTask(CompactionStrategyManager.java:200)\n\tat
 
org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:291)\n\tat
 org.apache.cassandra.concurrent.FutureTask$2.call(FutureTask.java:98)\n\tat 
org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:47)\n\tat 
org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:57)\n\tat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat
 java.lang.Thread.run(Thread.java:748)']
{code}
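
For readers digging into this trace: it is the standard fail-fast behaviour of HashMap 
iteration. AbstractCollection.toArray() (which ImmutableSet.copyOf() calls, per the frames 
above) walks the map with an iterator that checks a modification counter, so if, presumably, 
another thread mutates the underlying HashMap-backed collection of sstables while the copy is 
in progress, ConcurrentModificationException is thrown. The snippet below is only a minimal, 
stand-alone illustration of that mechanism; it is not Cassandra code, the class and key names 
are made up, and it triggers the exception from a single thread for determinism:

{code:java}
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

// Stand-alone demo of HashMap's fail-fast iterator; not Cassandra code.
public class FailFastIterationDemo
{
    public static void main(String[] args)
    {
        Map<String, String> sstables = new HashMap<>();
        for (int i = 0; i < 8; i++)
            sstables.put("sstable-" + i, "pending-repair");

        try
        {
            // keySet() iteration, keySet().toArray() and ImmutableSet.copyOf(keySet())
            // all walk the same fail-fast HashMap iterator.
            for (String name : sstables.keySet())
            {
                // Structural modification of the map while an iterator over its key set
                // is open; the next Iterator.next() call detects it and throws.
                sstables.remove(name.equals("sstable-0") ? "sstable-1" : "sstable-0");
            }
        }
        catch (ConcurrentModificationException e)
        {
            // Same exception type as in the CompactionExecutor stack trace above.
            System.out.println("fail-fast iterator tripped: " + e);
        }
    }
}
{code}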



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17685) Test failure: transient_replication_test.py::TestTransientReplicationRepairStreamEntireSSTable::test_optimized_primary_range_repair

2022-06-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres de la Peña updated CASSANDRA-17685:
--
Bug Category: Parent values: Correctness(12982)Level 1 values: Test 
Failure(12990)

> Test failure: 
> transient_replication_test.py::TestTransientReplicationRepairStreamEntireSSTable::test_optimized_primary_range_repair
> ---
>
> Key: CASSANDRA-17685
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17685
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Andres de la Peña
>Priority: Normal
> Fix For: 4.1-beta
>
>
> The Python dtest 
> {{transient_replication_test.py::TestTransientReplicationRepairStreamEntireSSTable::test_optimized_primary_range_repair}}
>  is flaky at least in {{{}cassandra-4.1{}}}, with a flakiness < 1%.
> I haven't seen the failure on Jenkins but on [this CircleCI 
> run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1663/workflows/c63703e3-8c7a-42c6-981a-53cb59babe1f/jobs/17476]
>  for CASSANDRA-17458.
> The failure can also be [reproduced in the 
> multiplexer|https://app.circleci.com/pipelines/github/adelapena/cassandra/1666/workflows/6f925be1-c0df-4b2a-83e0-4612a46f32bd/jobs/17516],
>  with 5 failures in 5000 iterations:
> {code:java}
> self = <transient_replication_test.TestTransientReplicationRepairStreamEntireSSTable
>  object at 0x7f87951c77b8>
> @pytest.mark.no_vnodes
> def test_optimized_primary_range_repair(self):
> """ optimized primary range incremental repair from full replica 
> should remove data on node3 """
> self._test_speculative_write_repair_cycle(primary_range=True,
>   optimized_repair=True,
>   
> repair_coordinator=self.node1,
> > expect_node3_data=False)
> transient_replication_test.py:523: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> transient_replication_test.py:473: in _test_speculative_write_repair_cycle
> with tm(self.node1) as tm1, tm(self.node2) as tm2, tm(self.node3) as tm3:
> transient_replication_test.py:62: in __enter__
> self.start()
> transient_replication_test.py:55: in start
> self.jmx.start()
> tools/jmxutils.py:187: in start
> subprocess.check_output(args, stderr=subprocess.STDOUT)
> /usr/lib/python3.6/subprocess.py:356: in check_output
> **kwargs).stdout
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> input = None, timeout = None, check = True
> popenargs = (('/usr/lib/jvm/java-8-openjdk-amd64/bin/java', '-cp', 
> '/usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar:/home/cassandr...t/tools/../lib/jolokia-jvm-1.6.2-agent.jar',
>  'org.jolokia.jvmagent.client.AgentLauncher', '--host', '127.0.0.1', ...),)
> kwargs = {'stderr': -2, 'stdout': -1}
> process = 
> stdout = b"Couldn't start agent for PID 11637\nPossible reason could be that 
> port '8778' is already occupied.\nPlease check the standard output of the 
> target process for a detailed error message.\n"
> stderr = None, retcode = 1
> def run(*popenargs, input=None, timeout=None, check=False, **kwargs):
> """Run command with arguments and return a CompletedProcess instance.
> 
> The returned instance will have attributes args, returncode, stdout 
> and
> stderr. By default, stdout and stderr are not captured, and those 
> attributes
> will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture 
> them.
> 
> If check is True and the exit code was non-zero, it raises a
> CalledProcessError. The CalledProcessError object will have the 
> return code
> in the returncode attribute, and output & stderr attributes if those 
> streams
> were captured.
> 
> If timeout is given, and the process takes too long, a TimeoutExpired
> exception will be raised.
> 
> There is an optional argument "input", allowing you to
> pass a string to the subprocess's stdin.  If you use this argument
> you may not also use the Popen constructor's "stdin" argument, as
> it will be used internally.
> 
> The other arguments are the same as for the Popen constructor.
> 
> If universal_newlines=True is passed, the "input" argument must be a
> string and stdout/stderr in the returned object will be strings 
> rather than
> bytes.
> """
> if input is not None:
> if 'stdin' in kwargs:
> raise ValueError('stdin and input arguments may not both be 
> used.')
> kwargs['stdin'] = PIPE
> 
> with 

[jira] [Updated] (CASSANDRA-17685) Test failure: transient_replication_test.py::TestTransientReplicationRepairStreamEntireSSTable::test_optimized_primary_range_repair

2022-06-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres de la Peña updated CASSANDRA-17685:
--
Fix Version/s: 4.1-beta

> Test failure: 
> transient_replication_test.py::TestTransientReplicationRepairStreamEntireSSTable::test_optimized_primary_range_repair
> ---
>
> Key: CASSANDRA-17685
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17685
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Andres de la Peña
>Priority: Normal
> Fix For: 4.1-beta
>
>
> The Python dtest 
> {{transient_replication_test.py::TestTransientReplicationRepairStreamEntireSSTable::test_optimized_primary_range_repair}}
>  is flaky at least in {{{}cassandra-4.1{}}}, with a flakiness < 1%.
> I haven't seen the failure on Jenkins but on [this CircleCI 
> run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1663/workflows/c63703e3-8c7a-42c6-981a-53cb59babe1f/jobs/17476]
>  for CASSANDRA-17458.
> The failure can also be [reproduced in the 
> multiplexer|https://app.circleci.com/pipelines/github/adelapena/cassandra/1666/workflows/6f925be1-c0df-4b2a-83e0-4612a46f32bd/jobs/17516],
>  with 5 failures in 5000 iterations:
> {code:java}
> self = <transient_replication_test.TestTransientReplicationRepairStreamEntireSSTable
>  object at 0x7f87951c77b8>
> @pytest.mark.no_vnodes
> def test_optimized_primary_range_repair(self):
> """ optimized primary range incremental repair from full replica 
> should remove data on node3 """
> self._test_speculative_write_repair_cycle(primary_range=True,
>   optimized_repair=True,
>   
> repair_coordinator=self.node1,
> > expect_node3_data=False)
> transient_replication_test.py:523: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> transient_replication_test.py:473: in _test_speculative_write_repair_cycle
> with tm(self.node1) as tm1, tm(self.node2) as tm2, tm(self.node3) as tm3:
> transient_replication_test.py:62: in __enter__
> self.start()
> transient_replication_test.py:55: in start
> self.jmx.start()
> tools/jmxutils.py:187: in start
> subprocess.check_output(args, stderr=subprocess.STDOUT)
> /usr/lib/python3.6/subprocess.py:356: in check_output
> **kwargs).stdout
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> input = None, timeout = None, check = True
> popenargs = (('/usr/lib/jvm/java-8-openjdk-amd64/bin/java', '-cp', 
> '/usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar:/home/cassandr...t/tools/../lib/jolokia-jvm-1.6.2-agent.jar',
>  'org.jolokia.jvmagent.client.AgentLauncher', '--host', '127.0.0.1', ...),)
> kwargs = {'stderr': -2, 'stdout': -1}
> process = 
> stdout = b"Couldn't start agent for PID 11637\nPossible reason could be that 
> port '8778' is already occupied.\nPlease check the standard output of the 
> target process for a detailed error message.\n"
> stderr = None, retcode = 1
> def run(*popenargs, input=None, timeout=None, check=False, **kwargs):
> """Run command with arguments and return a CompletedProcess instance.
> 
> The returned instance will have attributes args, returncode, stdout 
> and
> stderr. By default, stdout and stderr are not captured, and those 
> attributes
> will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture 
> them.
> 
> If check is True and the exit code was non-zero, it raises a
> CalledProcessError. The CalledProcessError object will have the 
> return code
> in the returncode attribute, and output & stderr attributes if those 
> streams
> were captured.
> 
> If timeout is given, and the process takes too long, a TimeoutExpired
> exception will be raised.
> 
> There is an optional argument "input", allowing you to
> pass a string to the subprocess's stdin.  If you use this argument
> you may not also use the Popen constructor's "stdin" argument, as
> it will be used internally.
> 
> The other arguments are the same as for the Popen constructor.
> 
> If universal_newlines=True is passed, the "input" argument must be a
> string and stdout/stderr in the returned object will be strings 
> rather than
> bytes.
> """
> if input is not None:
> if 'stdin' in kwargs:
> raise ValueError('stdin and input arguments may not both be 
> used.')
> kwargs['stdin'] = PIPE
> 
> with Popen(*popenargs, **kwargs) as process:
> try:
> 

[jira] [Created] (CASSANDRA-17685) Test failure: transient_replication_test.py::TestTransientReplicationRepairStreamEntireSSTable::test_optimized_primary_range_repair

2022-06-07 Thread Jira
Andres de la Peña created CASSANDRA-17685:
-

 Summary: Test failure: 
transient_replication_test.py::TestTransientReplicationRepairStreamEntireSSTable::test_optimized_primary_range_repair
 Key: CASSANDRA-17685
 URL: https://issues.apache.org/jira/browse/CASSANDRA-17685
 Project: Cassandra
  Issue Type: Bug
  Components: Test/dtest/python
Reporter: Andres de la Peña


The Python dtest 
{{transient_replication_test.py::TestTransientReplicationRepairStreamEntireSSTable::test_optimized_primary_range_repair}}
 is flaky at least in {{{}cassandra-4.1{}}}, with a flakiness < 1%.

I haven't seen the failure on Jenkins but on [this CircleCI 
run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1663/workflows/c63703e3-8c7a-42c6-981a-53cb59babe1f/jobs/17476]
 for CASSANDRA-17458.

The failure can also be [reproduced in the 
multiplexer|https://app.circleci.com/pipelines/github/adelapena/cassandra/1666/workflows/6f925be1-c0df-4b2a-83e0-4612a46f32bd/jobs/17516],
 with 5 failures in 5000 iterations:
{code:java}
self = <transient_replication_test.TestTransientReplicationRepairStreamEntireSSTable object at 0x7f87951c77b8>

@pytest.mark.no_vnodes
def test_optimized_primary_range_repair(self):
""" optimized primary range incremental repair from full replica should 
remove data on node3 """
self._test_speculative_write_repair_cycle(primary_range=True,
  optimized_repair=True,
  repair_coordinator=self.node1,
> expect_node3_data=False)

transient_replication_test.py:523: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
transient_replication_test.py:473: in _test_speculative_write_repair_cycle
with tm(self.node1) as tm1, tm(self.node2) as tm2, tm(self.node3) as tm3:
transient_replication_test.py:62: in __enter__
self.start()
transient_replication_test.py:55: in start
self.jmx.start()
tools/jmxutils.py:187: in start
subprocess.check_output(args, stderr=subprocess.STDOUT)
/usr/lib/python3.6/subprocess.py:356: in check_output
**kwargs).stdout
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

input = None, timeout = None, check = True
popenargs = (('/usr/lib/jvm/java-8-openjdk-amd64/bin/java', '-cp', 
'/usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar:/home/cassandr...t/tools/../lib/jolokia-jvm-1.6.2-agent.jar',
 'org.jolokia.jvmagent.client.AgentLauncher', '--host', '127.0.0.1', ...),)
kwargs = {'stderr': -2, 'stdout': -1}
process = 
stdout = b"Couldn't start agent for PID 11637\nPossible reason could be that 
port '8778' is already occupied.\nPlease check the standard output of the 
target process for a detailed error message.\n"
stderr = None, retcode = 1

def run(*popenargs, input=None, timeout=None, check=False, **kwargs):
"""Run command with arguments and return a CompletedProcess instance.

The returned instance will have attributes args, returncode, stdout and
stderr. By default, stdout and stderr are not captured, and those 
attributes
will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture 
them.

If check is True and the exit code was non-zero, it raises a
CalledProcessError. The CalledProcessError object will have the return 
code
in the returncode attribute, and output & stderr attributes if those 
streams
were captured.

If timeout is given, and the process takes too long, a TimeoutExpired
exception will be raised.

There is an optional argument "input", allowing you to
pass a string to the subprocess's stdin.  If you use this argument
you may not also use the Popen constructor's "stdin" argument, as
it will be used internally.

The other arguments are the same as for the Popen constructor.

If universal_newlines=True is passed, the "input" argument must be a
string and stdout/stderr in the returned object will be strings rather 
than
bytes.
"""
if input is not None:
if 'stdin' in kwargs:
raise ValueError('stdin and input arguments may not both be 
used.')
kwargs['stdin'] = PIPE

with Popen(*popenargs, **kwargs) as process:
try:
stdout, stderr = process.communicate(input, timeout=timeout)
except TimeoutExpired:
process.kill()
stdout, stderr = process.communicate()
raise TimeoutExpired(process.args, timeout, output=stdout,
 stderr=stderr)
except:
process.kill()
process.wait()
raise
retcode = process.poll()
if check and retcode:
raise CalledProcessError(retcode, 

[jira] [Comment Edited] (CASSANDRA-14715) Read repairs can result in bogus timeout errors to the client

2022-06-07 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551078#comment-17551078
 ] 

Stefan Miklosovic edited comment on CASSANDRA-14715 at 6/7/22 2:06 PM:
---

The proposed approach seems good to me, but I had a hard time testing this 
consistently. I tried the steps in the description of this ticket and sometimes 
I got the same response, sometimes I did not. Thinking about testing, I am not 
sure what test to write here. The options we have:

1) jvm dtest - this approach would be about setting up a 2x3 cluster, inserting 
data, shutting down a node, removing that node's sstables and starting it 
again, executing the query and watching its logs. I think the "removing of 
sstables" step is not necessary because, I think, the data dir of that node is 
automatically removed on shutdown. I am not sure about the details and 
viability of this test approach yet.

2) same as 1 but done in the python dtests

3) Testing only RepairMergeListener and its close method. This would be very 
nice to do, but what I noticed is that all inner classes in DataResolver 
(RepairMergeListener is an inner class of DataResolver) are non-static and 
private. I cannot easily test this class in isolation; I would need to rewrite 
them all as static classes, and that might have not-so-obvious consequences.

What I found interesting while testing this is that when I turned the node off, 
removed the data, turned it on and listed the data dir of the respective table, 
the SSTable was there again. How is this possible? Is it that the commit logs 
were replayed on startup or something like that? I think we would need to 
remove the commit logs too.


was (Author: smiklosovic):
The proposed approch seems good to me but I had hard time to test this 
consistently. I tried the steps in the description of this ticket and sometimes 
I got the same response, sometimes I did not. Thinking about testing, I am not 
sure what test to write here. The options we have:

1) jvm dtest - this approach would be about setuping 2x3 cluster, inserting 
data, shutting down the node, removing the sstables of this node and starting 
it again, executing the query and watching its logs. I think that the step 
"removing of sstables" is not necessary because, I think, data dir of that node 
is automatically remove on the shutdown. I am not sure about the details and 
viability of this test approach yet.

2) same as 1 but it would be done in python dtests

3) Testing only RepairMergeListener and its close method. This would be very 
nice to do but what I noticed is that all inner classes in DataResolver 
(RepairMergeListener is inner class of DataResolver) are not static and they 
are all private. I can not just easilly test this class in isolation. I would 
need to rewrite it all to be static classes and so and this might have 
not-so-obvious consequences yet.

What I found interesting while I was testing this is that when I turned the 
node off, removed data, turned it on and listed the data dir of respective 
table, that SSTable was there again. How is this possible? Is not it like 
commit logs were flushed on the startup or something like that? I think we 
would need to remove commit logs too.

> Read repairs can result in bogus timeout errors to the client
> -
>
> Key: CASSANDRA-14715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14715
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Cameron Zemek
>Assignee: Stefan Miklosovic
>Priority: Low
>
> In RepairMergeListener:close() it does the following:
>  
> {code:java}
> try
> {
> FBUtilities.waitOnFutures(repairResults, 
> DatabaseDescriptor.getWriteRpcTimeout());
> }
> catch (TimeoutException ex)
> {
> // We got all responses, but timed out while repairing
> int blockFor = consistency.blockFor(keyspace);
> if (Tracing.isTracing())
> Tracing.trace("Timed out while read-repairing after receiving all {} 
> data and digest responses", blockFor);
> else
> logger.debug("Timeout while read-repairing after receiving all {} 
> data and digest responses", blockFor);
> throw new ReadTimeoutException(consistency, blockFor-1, blockFor, true);
> }
> {code}
> This propagates up and gets sent to the client, and we have customers getting 
> confused because they see timeouts for CL ALL requiring ALL replicas even 
> though they have read_repair_chance = 0 and are using a LOCAL_* CL.
> At minimum I suggest that, instead of using the consistency level of DataResolver 
> (which is always ALL with read repairs) for the timeout, it instead use 
> repairResults.size(). That is, blockFor = repairResults.size(). But saying 

[jira] [Commented] (CASSANDRA-17411) Network partition causes write ONE timeouts when using counters in Cassandra 4

2022-06-07 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551102#comment-17551102
 ] 

Brandon Williams commented on CASSANDRA-17411:
--

I've simplified the reproduction in a dtest 
[here|https://github.com/driftx/cassandra-dtest/tree/CASSANDRA-17411] and 
updated the fix versions accordingly while I continue to investigate.

> Network partition causes write ONE timeouts when using counters in Cassandra 4
> --
>
> Key: CASSANDRA-17411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17411
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Pere Balaguer
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.x
>
> Attachments: app.py
>
>
> h5. Affected versions:
>  * 4.x
> h5. Observed behavior:
> When executing CL=ONE writes on a table with a counter column, if one of the 
> nodes is network partitioned from the others, clients keep sending requests 
> to it.
> Even though this may be a "driver" problem, I've been able to reproduce it 
> with both java and python datastax drivers using their latest available 
> versions and given the behavior only changes depending on the Cassandra 
> version, well, here I am.
> h5. Expected behavior:
> In Cassandra 3 after all inflight requests fail (expected), no new requests 
> are sent to the partitioned node. The expectation is that Cassandra 4 behaves 
> the same way.
> h5. How to reproduce:
> {noformat}
> # Create a cluster with the desired version, will go with 4.x for this example
> ccm create bug-report -v 4.0.3
> ccm populate -n 2:2:2
> ccm start
> # Create schemas and so on
> CQL=$(cat <<END
> CONSISTENCY ALL;
> DROP KEYSPACE IF EXISTS demo;
> CREATE KEYSPACE demo WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 
> 'dc1': 2, 'dc2': 2, 'dc3': 2};
> CREATE TABLE demo.demo (pk uuid PRIMARY KEY, count counter) WITH compaction = 
> {'class': 'LeveledCompactionStrategy'};
> END
> )
> ccm node1 cqlsh --verbose --exec="${CQL}"
> # Launch the attached app.py
> # requires cassandra-driver
> python3 app.py "127.0.0.1" "9042"
> # Wait a bit for the app to settle, proceed to next step once you see 3 
> messages in stdout like:
> # 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 
> (0.00) timeouts/total_rqs in the last 1 minute
> # Partition one node with iptables
> iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 7000 -j 
> DROP; iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 
> 9042 -j DROP
> {noformat}
> Some time after executing the iptables command in cassandra-3 the output 
> should be similar to:
> {noformat}
> 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:41:51,576 - target-dc3 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:41:58,032 - target-dc1 - __main__ - INFO - Got 6/252 (2.380952) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:51,560 - target-dc2 - __main__ - INFO - Got 0/570 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:51,620 - target-dc3 - __main__ - INFO - Got 0/570 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:58,101 - target-dc1 - __main__ - INFO - Got 2/354 (0.564972) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:51,602 - target-dc2 - __main__ - INFO - Got 0/571 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:51,672 - target-dc3 - __main__ - INFO - Got 0/571 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:58,153 - target-dc1 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> {noformat}
> as the timeouts/total_rqs numbers show, in about 2 minutes the partitioned node 
> stops receiving traffic,
> whereas in cassandra-4
> {noformat}
> 2022-03-01 15:49:39,068 - target-dc3 - __main__ - INFO - Got 0/566 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:49:39,107 - target-dc2 - __main__ - INFO - Got 0/566 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:49:41,206 - target-dc1 - __main__ - INFO - Got 2/444 (0.450450) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:39,095 - target-dc3 - __main__ - INFO - Got 0/569 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:39,148 - target-dc2 - __main__ - INFO - Got 0/569 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:42,589 - target-dc1 - __main__ - INFO - Got 7/13 (53.846154) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:51:39,125 - target-dc3 - __main__ - INFO - Got 0/567 

[jira] [Updated] (CASSANDRA-17411) Network partition causes write ONE timeouts when using counters in Cassandra 4

2022-06-07 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-17411:
-
Fix Version/s: 4.1-beta
   4.x

> Network partition causes write ONE timeouts when using counters in Cassandra 4
> --
>
> Key: CASSANDRA-17411
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17411
> Project: Cassandra
>  Issue Type: Bug
>  Components: Consistency/Coordination
>Reporter: Pere Balaguer
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.x
>
> Attachments: app.py
>
>
> h5. Affected versions:
>  * 4.x
> h5. Observed behavior:
> When executing CL=ONE writes on a table with a counter column, if one of the 
> nodes is network partitioned from the others, clients keep sending requests 
> to it.
> Even though this may be a "driver" problem, I've been able to reproduce it 
> with both java and python datastax drivers using their latest available 
> versions and given the behavior only changes depending on the Cassandra 
> version, well, here I am.
> h5. Expected behavior:
> In Cassandra 3 after all inflight requests fail (expected), no new requests 
> are sent to the partitioned node. The expectation is that Cassandra 4 behaves 
> the same way.
> h5. How to reproduce:
> {noformat}
> # Create a cluster with the desired version, will go with 4.x for this example
> ccm create bug-report -v 4.0.3
> ccm populate -n 2:2:2
> ccm start
> # Create schemas and so on
> CQL=$(cat <<END
> CONSISTENCY ALL;
> DROP KEYSPACE IF EXISTS demo;
> CREATE KEYSPACE demo WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 
> 'dc1': 2, 'dc2': 2, 'dc3': 2};
> CREATE TABLE demo.demo (pk uuid PRIMARY KEY, count counter) WITH compaction = 
> {'class': 'LeveledCompactionStrategy'};
> END
> )
> ccm node1 cqlsh --verbose --exec="${CQL}"
> # Launch the attached app.py
> # requires cassandra-driver
> python3 app.py "127.0.0.1" "9042"
> # Wait a bit for the app to settle, proceed to next step once you see 3 
> messages in stdout like:
> # 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 
> (0.00) timeouts/total_rqs in the last 1 minute
> # Partition one node with iptables
> iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 7000 -j 
> DROP; iptables -A INPUT -p tcp --destination 127.0.0.1 --destination-port 
> 9042 -j DROP
> {noformat}
> Some time after executing the iptables command in cassandra-3 the output 
> should be similar to:
> {noformat}
> 2022-03-01 15:41:51,557 - target-dc2 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:41:51,576 - target-dc3 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:41:58,032 - target-dc1 - __main__ - INFO - Got 6/252 (2.380952) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:51,560 - target-dc2 - __main__ - INFO - Got 0/570 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:51,620 - target-dc3 - __main__ - INFO - Got 0/570 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:42:58,101 - target-dc1 - __main__ - INFO - Got 2/354 (0.564972) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:51,602 - target-dc2 - __main__ - INFO - Got 0/571 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:51,672 - target-dc3 - __main__ - INFO - Got 0/571 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:43:58,153 - target-dc1 - __main__ - INFO - Got 0/572 (0.00) 
> timeouts/total_rqs in the last 1 minute
> {noformat}
> as the timeouts/total_rqs numbers show, in about 2 minutes the partitioned node 
> stops receiving traffic,
> whereas in cassandra-4
> {noformat}
> 2022-03-01 15:49:39,068 - target-dc3 - __main__ - INFO - Got 0/566 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:49:39,107 - target-dc2 - __main__ - INFO - Got 0/566 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:49:41,206 - target-dc1 - __main__ - INFO - Got 2/444 (0.450450) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:39,095 - target-dc3 - __main__ - INFO - Got 0/569 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:39,148 - target-dc2 - __main__ - INFO - Got 0/569 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:50:42,589 - target-dc1 - __main__ - INFO - Got 7/13 (53.846154) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:51:39,125 - target-dc3 - __main__ - INFO - Got 0/567 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 2022-03-01 15:51:39,159 - target-dc2 - __main__ - INFO - Got 0/567 (0.00) 
> timeouts/total_rqs in the last 1 minute
> 

[jira] [Commented] (CASSANDRA-17669) CentOS/RHEL installation requires JRE not available in Java 11

2022-06-07 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551094#comment-17551094
 ] 

Berenguer Blasi commented on CASSANDRA-17669:
-

You're right. Did you get a chance to test and install locally? I am sorry I 
can't spin up VMs on my box atm to test it myself.

> CentOS/RHEL installation requires JRE not available in Java 11
> --
>
> Key: CASSANDRA-17669
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17669
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies
>Reporter: Erick Ramirez
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.1.x
>
>
> h2. Background
> A user [reported on Stack 
> Overflow|https://stackoverflow.com/questions/72377621/] and on the DataStax 
> Developers Discord ([dtsx.io/discord|https://dtsx.io/discord]) an issue with 
> installing Cassandra when only Java 11 is installed.
> h2. Symptoms
> Attempts to install Cassandra using YUM require Java 8:
> {noformat}
> $ sudo yum install cassandra
> Dependencies resolved.
> 
>  Package  Architecture
> Version  Repository  
> Size
> 
> Installing:
>  cassandranoarch  
> 4.0.4-1  cassandra   
> 45 M
> Installing dependencies:
>  java-1.8.0-openjdk   x86_64  
> 1:1.8.0.312.b07-2.el8_5  appstream  
> 341 k
>  java-1.8.0-openjdk-headless  x86_64  
> 1:1.8.0.312.b07-2.el8_5  appstream   
> 34 M
> Installing weak dependencies:
>  gtk2 x86_64  
> 2.24.32-5.el8appstream  
> 3.4 M
> Transaction Summary
> 
> Install  4 Packages
> {noformat}
> Similarly, attempts to install the RPM result in:
> {noformat}
> $ sudo rpm -i cassandra-4.0.4-1.noarch.rpm 
> warning: cassandra-4.0.4-1.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 
> 7e3e87cb: NOKEY
> error: Failed dependencies:
>   jre >= 1.8.0 is needed by cassandra-4.0.4-1.noarch{noformat}
> h2. Root cause
> Package installs on CentOS and RHEL platforms have [a dependency on JRE 
> 1.8+|https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L49]:
> {noformat}
> Requires:  jre >= 1.8.0{noformat}
> However, JRE is no longer available in Java 11. From the [JDK 11 release 
> notes|https://www.oracle.com/java/technologies/javase/11-relnote-issues.html]:
> {quote}In this release, the JRE or Server JRE is no longer offered. Only the 
> JDK is offered.
> {quote}
> h2. Workaround
> Override the dependency check when installing the RPM with the {{--nodeps}} 
> option:
> {noformat}
> $ sudo rpm --nodeps -i cassandra-4.0.4-1.noarch.rpm {noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14715) Read repairs can result in bogus timeout errors to the client

2022-06-07 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17551078#comment-17551078
 ] 

Stefan Miklosovic commented on CASSANDRA-14715:
---

The proposed approach seems good to me, but I had a hard time testing this 
consistently. I tried the steps in the description of this ticket and sometimes 
I got the same response, sometimes I did not. Thinking about testing, I am not 
sure what test to write here. The options we have:

1) jvm dtest - this approach would be about setting up a 2x3 cluster, inserting 
data, shutting down a node, removing that node's sstables and starting it 
again, executing the query and watching its logs. I think the "removing of 
sstables" step is not necessary because, I think, the data dir of that node is 
automatically removed on shutdown. I am not sure about the details and 
viability of this test approach yet.

2) same as 1 but done in the python dtests

3) Testing only RepairMergeListener and its close method. This would be very 
nice to do, but what I noticed is that all inner classes in DataResolver 
(RepairMergeListener is an inner class of DataResolver) are non-static and 
private. I cannot easily test this class in isolation; I would need to rewrite 
them all as static classes, and that might have not-so-obvious consequences.

What I found interesting while testing this is that when I turned the node off, 
removed the data, turned it on and listed the data dir of the respective table, 
the SSTable was there again. How is this possible? Is it that the commit logs 
were replayed on startup or something like that? I think we would need to 
remove the commit logs too.
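
For reference, a rough in-jvm dtest sketch of option (1) could look roughly like the 
following. This is only my own sketch, assuming the org.apache.cassandra.distributed test 
API; it simplifies the schema to SimpleStrategy with RF=3 and leaves the "wipe node1's 
sstables and commit log before restart" step as a placeholder, since that is exactly the 
part that still needs to be worked out:

{code:java}
import org.apache.cassandra.distributed.Cluster;
import org.apache.cassandra.distributed.api.ConsistencyLevel;
import org.junit.Assert;
import org.junit.Test;

public class ReadRepairTimeoutSketchTest
{
    @Test
    public void readAfterLostSSTableShouldNotTimeOut() throws Throwable
    {
        try (Cluster cluster = Cluster.build(3).start())
        {
            cluster.schemaChange("CREATE KEYSPACE weather WITH replication = " +
                                 "{'class': 'SimpleStrategy', 'replication_factor': 3}");
            cluster.schemaChange("CREATE TABLE weather.city (cityid int PRIMARY KEY, name text)");

            cluster.coordinator(1).execute("INSERT INTO weather.city (cityid, name) VALUES (1, 'Canberra')",
                                           ConsistencyLevel.ALL);
            cluster.get(1).flush("weather");

            cluster.get(1).shutdown().get();
            // TODO: delete node1's sstables for weather.city (and, per the comment above,
            // probably its commit log segments as well) before bringing the node back,
            // so that the read below has to trigger a read repair.
            cluster.get(1).startup();

            // The read should be repaired and answered; it must not surface a bogus
            // ReadTimeoutException to the client.
            Object[][] rows = cluster.coordinator(2).execute(
                "SELECT name FROM weather.city WHERE cityid = 1", ConsistencyLevel.QUORUM);
            Assert.assertEquals(1, rows.length);
        }
    }
}
{code}

Checking the node logs after the read (e.g. for the "Timeout while read-repairing" debug 
message from the snippet in the description) would then be the assertion that actually 
distinguishes the buggy behaviour.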

> Read repairs can result in bogus timeout errors to the client
> -
>
> Key: CASSANDRA-14715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14715
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Cameron Zemek
>Assignee: Stefan Miklosovic
>Priority: Low
>
> In RepairMergeListener:close() it does the following:
>  
> {code:java}
> try
> {
> FBUtilities.waitOnFutures(repairResults, 
> DatabaseDescriptor.getWriteRpcTimeout());
> }
> catch (TimeoutException ex)
> {
> // We got all responses, but timed out while repairing
> int blockFor = consistency.blockFor(keyspace);
> if (Tracing.isTracing())
> Tracing.trace("Timed out while read-repairing after receiving all {} 
> data and digest responses", blockFor);
> else
> logger.debug("Timeout while read-repairing after receiving all {} 
> data and digest responses", blockFor);
> throw new ReadTimeoutException(consistency, blockFor-1, blockFor, true);
> }
> {code}
> This propagates up and gets sent to the client, and we have customers getting 
> confused because they see timeouts for CL ALL requiring ALL replicas even 
> though they have read_repair_chance = 0 and are using a LOCAL_* CL.
> At minimum I suggest that, instead of using the consistency level of DataResolver 
> (which is always ALL with read repairs) for the timeout, it instead use 
> repairResults.size(). That is, blockFor = repairResults.size(). But saying it 
> received _blockFor - 1_ is still bogus. Fixing that would require more 
> changes. I was thinking maybe something like:
>  
> {code:java}
> public static void waitOnFutures(List<AsyncOneResponse> results, long ms, 
> MutableInt counter) throws TimeoutException
> {
> for (AsyncOneResponse result : results)
> {
> result.get(ms, TimeUnit.MILLISECONDS);
> counter.increment();
> }
> }
> {code}
>  
>  
>  
> Likewise in SinglePartitionReadLifecycle:maybeAwaitFullDataRead() it says 
> _blockFor - 1_ for how many were received, which is also bogus.
>  
> Steps used to reproduce were to modify RepairMergeListener:close() to always 
> throw a timeout exception. With schema:
> {noformat}
> CREATE KEYSPACE weather WITH replication = {'class': 
> 'NetworkTopologyStrategy', 'dc1': '3', 'dc2': '3'}  AND durable_writes = true;
> CREATE TABLE weather.city (
> cityid int PRIMARY KEY,
> name text
> ) WITH bloom_filter_fp_chance = 0.01
> AND dclocal_read_repair_chance = 0.0
> AND read_repair_chance = 0.0
> AND speculative_retry = 'NONE';
> {noformat}
> Then using the following steps:
>  # ccm node1 cqlsh
>  # INSERT INTO weather.city(cityid, name) VALUES (1, 'Canberra');
>  # exit;
>  # ccm node1 flush
>  # ccm node1 stop
>  # rm -rf 
> ~/.ccm/test_repair/node1/data0/weather/city-ff2fade0b18d11e8b1cd097acbab1e3d/mc-1-big-*
>  # remove the sstable with the insert
>  # ccm node1 start
>  # ccm node1 cqlsh
>  # CONSISTENCY LOCAL_QUORUM;
>  # select * from weather.city where cityid = 1;
> You get result of:
> {noformat}
> ReadTimeout: Error from server: code=1200 [Coordinator node timed out 

[jira] [Commented] (CASSANDRA-17584) Fix flaky test - org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions

2022-06-07 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550972#comment-17550972
 ] 

Stefan Miklosovic commented on CASSANDRA-17584:
---

Thanks [~adelapena] for the review. I would really appreciate it if you could 
answer this question (1); once that is resolved I think we can ship it.

(1) https://github.com/apache/cassandra/pull/1662/files#r891085898

> Fix flaky test - 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions
> --
>
> Key: CASSANDRA-17584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17584
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Virtual Tables
>Reporter: Brandon Williams
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 4.1-beta, 4.x
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> h3. Error Message
> ['LOAD' is expected to be null] expected: but was:
> h3. Stacktrace
> junit.framework.AssertionFailedError: ['LOAD' is expected to be null] 
> expected: but was: at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.assertValue(GossipInfoTableTest.java:174)
>  at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions(GossipInfoTableTest.java:96)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17458) Test Failure: org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie

2022-06-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres de la Peña updated CASSANDRA-17458:
--
Bug Category: Parent values: Correctness(12982)Level 1 values: 
Consistency(12989)  (was: Parent values: Correctness(12982)Level 1 values: Test 
Failure(12990))

> Test Failure: 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie
> ---
>
> Key: CASSANDRA-17458
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17458
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Andres de la Peña
>Assignee: Sathyanarayanan Saravanamuthu
>Priority: Normal
>  Labels: patch-available
> Fix For: 4.1-alpha, 4.2
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Intermittent failure on 
> {{org.apache.cassandra.db.SinglePartitionSliceCommandTest#testPartitionDeletionRangeDeletionTie}}
>  for trunk:
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1024/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1018/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> {code:java}
> Failed 1 times in the last 11 runs. Flakiness: 10%, Stability: 90%
> Error Message
> Expected [Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11]] but got [Marker 
> INCL_START_BOUND(1, 1)@10/1647704834, Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 
> ts=11], Marker INCL_END_BOUND(1, 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> 
> but was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, 
> [[v=1 ts=11]], 
> org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
> Stacktrace
> junit.framework.AssertionFailedError: Expected [Row[info=[ts=11] ]: c1=1, 
> c2=1 | [v=1 ts=11]] but got [Marker INCL_START_BOUND(1, 1)@10/1647704834, 
> Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11], Marker INCL_END_BOUND(1, 
> 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> but 
> was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, [[v=1 
> ts=11]], org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
>   at 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie(SinglePartitionSliceCommandTest.java:463)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Standard Output
> INFO  [main] 2022-03-19 15:51:43,646 YamlConfigurationLoader.java:103 - 
> Configuration location: 
> file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2022-03-19 15:51:43,653 YamlConfigurationLoader.java:124 - 
> Loading settings from file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> INFO  [main] 2022-03-19 15:51:43,971 Config.java:1119 - Node 
> configuration:[allocate_tokens_for_keyspace=null; 
> allocate_tokens_for_local_replication_factor=null; 
> allow_extra_insecure_udfs=false; all
> ...[truncated 192995 chars]...
> ome/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-37-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-39-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-38-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-40-big-Data.db:level=0,
>  ]
> {code}
> Failures can also be hit with CircleCI test multiplexer:
> [https://app.circleci.com/pipelines/github/adelapena/cassandra/1387/workflows/0f37a726-1dc2-4584-86f9-e99ecc40f551]
> CircleCI results show failures on three separate assertions, with a ~3% 
> flakiness.
> The same test looks ok in 4.0, as suggested by Butler and [this repeated 
> Circle 
> run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1388/workflows/6b69d654-3d19-4f2a-aeb9-dc405c6ddd2b].



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17458) Test Failure: org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie

2022-06-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres de la Peña updated CASSANDRA-17458:
--
  Fix Version/s: 4.1-alpha
 4.2
 (was: 4.x)
 (was: 4.1-beta)
  Since Version: 4.1-alpha
Source Control Link: 
https://github.com/apache/cassandra/commit/9b4784bdb7d70bf99c9c290d44b053902b00642d
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> Test Failure: 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie
> ---
>
> Key: CASSANDRA-17458
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17458
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Andres de la Peña
>Assignee: Sathyanarayanan Saravanamuthu
>Priority: Normal
>  Labels: patch-available
> Fix For: 4.1-alpha, 4.2
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Intermittent failure on 
> {{org.apache.cassandra.db.SinglePartitionSliceCommandTest#testPartitionDeletionRangeDeletionTie}}
>  for trunk:
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1024/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1018/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> {code:java}
> Failed 1 times in the last 11 runs. Flakiness: 10%, Stability: 90%
> Error Message
> Expected [Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11]] but got [Marker 
> INCL_START_BOUND(1, 1)@10/1647704834, Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 
> ts=11], Marker INCL_END_BOUND(1, 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> 
> but was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, 
> [[v=1 ts=11]], 
> org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
> Stacktrace
> junit.framework.AssertionFailedError: Expected [Row[info=[ts=11] ]: c1=1, 
> c2=1 | [v=1 ts=11]] but got [Marker INCL_START_BOUND(1, 1)@10/1647704834, 
> Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11], Marker INCL_END_BOUND(1, 
> 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> but 
> was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, [[v=1 
> ts=11]], org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
>   at 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie(SinglePartitionSliceCommandTest.java:463)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Standard Output
> INFO  [main] 2022-03-19 15:51:43,646 YamlConfigurationLoader.java:103 - 
> Configuration location: 
> file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2022-03-19 15:51:43,653 YamlConfigurationLoader.java:124 - 
> Loading settings from file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> INFO  [main] 2022-03-19 15:51:43,971 Config.java:1119 - Node 
> configuration:[allocate_tokens_for_keyspace=null; 
> allocate_tokens_for_local_replication_factor=null; 
> allow_extra_insecure_udfs=false; all
> ...[truncated 192995 chars]...
> ome/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-37-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-39-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-38-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-40-big-Data.db:level=0,
>  ]
> {code}
> Failures can also be hit with CircleCI test multiplexer:
> [https://app.circleci.com/pipelines/github/adelapena/cassandra/1387/workflows/0f37a726-1dc2-4584-86f9-e99ecc40f551]
> CircleCI results show failures on three separate assertions, with a ~3% 
> flakiness.
> The same test looks ok in 4.0, as suggested by Butler and [this repeated 
> Circle 
> run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1388/workflows/6b69d654-3d19-4f2a-aeb9-dc405c6ddd2b].



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, 

[jira] [Commented] (CASSANDRA-17458) Test Failure: org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie

2022-06-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550962#comment-17550962
 ] 

Andres de la Peña commented on CASSANDRA-17458:
---

Thanks, committed to {{cassandra-4.1}} as 
[9b4784bdb7d70bf99c9c290d44b053902b00642d|https://github.com/apache/cassandra/commit/9b4784bdb7d70bf99c9c290d44b053902b00642d]
 and merged to 
[{{trunk}}|https://github.com/apache/cassandra/commit/29fea66c89cc1b378aafbaca8d68d21697e667b7].

> Test Failure: 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie
> ---
>
> Key: CASSANDRA-17458
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17458
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Andres de la Peña
>Assignee: Sathyanarayanan Saravanamuthu
>Priority: Normal
>  Labels: patch-available
> Fix For: 4.1-beta, 4.x
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Intermittent failure on 
> {{org.apache.cassandra.db.SinglePartitionSliceCommandTest#testPartitionDeletionRangeDeletionTie}}
>  for trunk:
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1024/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1018/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> {code:java}
> Failed 1 times in the last 11 runs. Flakiness: 10%, Stability: 90%
> Error Message
> Expected [Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11]] but got [Marker 
> INCL_START_BOUND(1, 1)@10/1647704834, Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 
> ts=11], Marker INCL_END_BOUND(1, 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> 
> but was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, 
> [[v=1 ts=11]], 
> org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
> Stacktrace
> junit.framework.AssertionFailedError: Expected [Row[info=[ts=11] ]: c1=1, 
> c2=1 | [v=1 ts=11]] but got [Marker INCL_START_BOUND(1, 1)@10/1647704834, 
> Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11], Marker INCL_END_BOUND(1, 
> 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> but 
> was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, [[v=1 
> ts=11]], org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
>   at 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie(SinglePartitionSliceCommandTest.java:463)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Standard Output
> INFO  [main] 2022-03-19 15:51:43,646 YamlConfigurationLoader.java:103 - 
> Configuration location: 
> file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2022-03-19 15:51:43,653 YamlConfigurationLoader.java:124 - 
> Loading settings from file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> INFO  [main] 2022-03-19 15:51:43,971 Config.java:1119 - Node 
> configuration:[allocate_tokens_for_keyspace=null; 
> allocate_tokens_for_local_replication_factor=null; 
> allow_extra_insecure_udfs=false; all
> ...[truncated 192995 chars]...
> ome/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-37-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-39-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-38-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-40-big-Data.db:level=0,
>  ]
> {code}
> Failures can also be hit with CircleCI test multiplexer:
> [https://app.circleci.com/pipelines/github/adelapena/cassandra/1387/workflows/0f37a726-1dc2-4584-86f9-e99ecc40f551]
> CircleCI results show failures on three separate assertions, with a ~3% 
> flakiness.
> The same test looks ok in 4.0, as suggested by Butler and [this repeated 
> Circle 
> run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1388/workflows/6b69d654-3d19-4f2a-aeb9-dc405c6ddd2b].



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] branch trunk updated (96d0d658a5 -> 29fea66c89)

2022-06-07 Thread adelapena
This is an automated email from the ASF dual-hosted git repository.

adelapena pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


from 96d0d658a5 Merge branch 'cassandra-4.1' into trunk
 new 9b4784bdb7 Fix missed nowInSec values in QueryProcessor
 new 29fea66c89 Merge branch 'cassandra-4.1' into trunk

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CHANGES.txt|  1 +
 src/java/org/apache/cassandra/cql3/QueryProcessor.java | 17 +
 2 files changed, 10 insertions(+), 8 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] branch cassandra-4.1 updated: Fix missed nowInSec values in QueryProcessor

2022-06-07 Thread adelapena
This is an automated email from the ASF dual-hosted git repository.

adelapena pushed a commit to branch cassandra-4.1
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cassandra-4.1 by this push:
 new 9b4784bdb7 Fix missed nowInSec values in QueryProcessor
9b4784bdb7 is described below

commit 9b4784bdb7d70bf99c9c290d44b053902b00642d
Author: Sathyanarayanan Saravanamuthu 
AuthorDate: Wed May 11 16:21:19 2022 +0100

Fix missed nowInSec values in QueryProcessor

patch by Sathyanarayanan Saravanamuthu; reviewed by Andrés de la Peña, 
Benjamin Lerer and Ekaterina Dimitrova for CASSANDRA-17458

Co-authored-by: Sathyanarayanan Saravanamuthu 
Co-authored-by: Andrés de la Peña 
---
 CHANGES.txt|  1 +
 src/java/org/apache/cassandra/cql3/QueryProcessor.java | 17 +
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/CHANGES.txt b/CHANGES.txt
index 5540de8433..2e7772bf89 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 4.1-alpha2
+ * Fix missed nowInSec values in QueryProcessor (CASSANDRA-17458)
  * Revert removal of withBufferSizeInMB(int size) in CQLSSTableWriter.Builder 
class and deprecate it in favor of withBufferSizeInMiB(int size) 
(CASSANDRA-17675)
  * Remove expired snapshots of dropped tables after restart (CASSANDRA-17619)
 Merged from 4.0:
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java 
b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index e14bfacd1e..f0a0a7425f 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -454,11 +454,11 @@ public class QueryProcessor implements QueryHandler
 public static Future executeAsync(InetAddressAndPort 
address, String query, Object... values)
 {
 Prepared prepared = prepareInternal(query);
-QueryOptions options = makeInternalOptions(prepared.statement, values);
+int nowInSec = FBUtilities.nowInSeconds();
+QueryOptions options = 
makeInternalOptionsWithNowInSec(prepared.statement, nowInSec, values);
 if (prepared.statement instanceof SelectStatement)
 {
 SelectStatement select = (SelectStatement) prepared.statement;
-int nowInSec = FBUtilities.nowInSeconds();
 ReadQuery readQuery = select.getQuery(options, nowInSec);
 List commands;
 if (readQuery instanceof ReadCommand)
@@ -528,7 +528,7 @@ public class QueryProcessor implements QueryHandler
 try
 {
 Prepared prepared = prepareInternal(query);
-ResultMessage result = prepared.statement.execute(state, 
makeInternalOptions(prepared.statement, values, cl), nanoTime());
+ResultMessage result = prepared.statement.execute(state, 
makeInternalOptionsWithNowInSec(prepared.statement, state.getNowInSeconds(), 
values, cl), nanoTime());
 if (result instanceof ResultMessage.Rows)
 return 
UntypedResultSet.create(((ResultMessage.Rows)result).result);
 else
@@ -547,7 +547,8 @@ public class QueryProcessor implements QueryHandler
 throw new IllegalArgumentException("Only SELECTs can be paged");
 
 SelectStatement select = (SelectStatement)prepared.statement;
-QueryPager pager = 
select.getQuery(makeInternalOptions(prepared.statement, values), 
FBUtilities.nowInSeconds()).getPager(null, ProtocolVersion.CURRENT);
+int nowInSec = FBUtilities.nowInSeconds();
+QueryPager pager = 
select.getQuery(makeInternalOptionsWithNowInSec(prepared.statement, nowInSec, 
values), nowInSec).getPager(null, ProtocolVersion.CURRENT);
 return UntypedResultSet.create(select, pager, pageSize);
 }
 
@@ -575,7 +576,7 @@ public class QueryProcessor implements QueryHandler
 {
 CQLStatement statement = parseStatement(query, 
queryState.getClientState());
 statement.validate(queryState.getClientState());
-ResultMessage result = statement.executeLocally(queryState, 
makeInternalOptions(statement, values));
+ResultMessage result = statement.executeLocally(queryState, 
makeInternalOptionsWithNowInSec(statement, queryState.getNowInSeconds(), 
values));
 if (result instanceof ResultMessage.Rows)
 return 
UntypedResultSet.create(((ResultMessage.Rows)result).result);
 else
@@ -592,7 +593,7 @@ public class QueryProcessor implements QueryHandler
 Prepared prepared = prepareInternal(query);
 assert prepared.statement instanceof SelectStatement;
 SelectStatement select = (SelectStatement)prepared.statement;
-ResultMessage result = select.executeInternal(internalQueryState(), 
makeInternalOptions(prepared.statement, values), nowInSec, queryStartNanoTime);
+ResultMessage result = select.executeInternal(internalQueryState(), 

[cassandra] 01/01: Merge branch 'cassandra-4.1' into trunk

2022-06-07 Thread adelapena
This is an automated email from the ASF dual-hosted git repository.

adelapena pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 29fea66c89cc1b378aafbaca8d68d21697e667b7
Merge: 96d0d658a5 9b4784bdb7
Author: Andrés de la Peña 
AuthorDate: Tue Jun 7 13:05:59 2022 +0100

Merge branch 'cassandra-4.1' into trunk

 CHANGES.txt|  1 +
 src/java/org/apache/cassandra/cql3/QueryProcessor.java | 17 +
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --cc CHANGES.txt
index 31715d320b,2e7772bf89..d6b4ff5ab9
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,15 -1,15 +1,16 @@@
 -4.1-alpha2
 +4.2
 + * When a node is bootstrapping it gets the whole gossip state but applies in 
random order causing some cases where StorageService will fail causing an 
instance to not show up in TokenMetadata (CASSANDRA-17676)
 + * Add CQLSH command SHOW REPLICAS (CASSANDRA-17577)
 + * Add guardrail to allow disabling of SimpleStrategy (CASSANDRA-17647)
 + * Change default directory permission to 750 in packaging (CASSANDRA-17470)
 + * Adding support for TLS client authentication for internode communication 
(CASSANDRA-17513)
 + * Add new CQL function maxWritetime (CASSANDRA-17425)
 + * Add guardrail for ALTER TABLE ADD / DROP / REMOVE column operations 
(CASSANDRA-17495)
 + * Rename DisableFlag class to EnableFlag on guardrails (CASSANDRA-17544)
 +Merged from 4.1:
+  * Fix missed nowInSec values in QueryProcessor (CASSANDRA-17458)
   * Revert removal of withBufferSizeInMB(int size) in CQLSSTableWriter.Builder 
class and deprecate it in favor of withBufferSizeInMiB(int size) 
(CASSANDRA-17675)
   * Remove expired snapshots of dropped tables after restart (CASSANDRA-17619)
 -Merged from 4.0:
 - * Ensure FileStreamTask cannot compromise shared channel proxy for system 
table when interrupted (CASSANDRA-17663)
 - * silence benign SslClosedEngineException (CASSANDRA-17565)
 -Merged from 3.11:
 -Merged from 3.0:
 -
 -
 -4.1-alpha1
   * Handle config parameters upper bound on startup; Fix auto_snapshot_ttl and 
paxos_purge_grace_period min unit validations (CASSANDRA-17571)
   * Fix leak of non-standard Java types in our Exceptions as clients using JMX 
are unable to handle them.
 Remove useless validation that leads to unnecessary additional read of 
cassandra.yaml on startup (CASSANDRA-17638)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17458) Test Failure: org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie

2022-06-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres de la Peña updated CASSANDRA-17458:
--
Reviewers: Andres de la Peña, Benjamin Lerer, Ekaterina Dimitrova  (was: 
Andres de la Peña, Ekaterina Dimitrova)

> Test Failure: 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie
> ---
>
> Key: CASSANDRA-17458
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17458
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Andres de la Peña
>Assignee: Sathyanarayanan Saravanamuthu
>Priority: Normal
>  Labels: patch-available
> Fix For: 4.1-beta, 4.x
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Intermittent failure on 
> {{org.apache.cassandra.db.SinglePartitionSliceCommandTest#testPartitionDeletionRangeDeletionTie}}
>  for trunk:
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1024/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1018/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> {code:java}
> Failed 1 times in the last 11 runs. Flakiness: 10%, Stability: 90%
> Error Message
> Expected [Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11]] but got [Marker 
> INCL_START_BOUND(1, 1)@10/1647704834, Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 
> ts=11], Marker INCL_END_BOUND(1, 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> 
> but was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, 
> [[v=1 ts=11]], 
> org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
> Stacktrace
> junit.framework.AssertionFailedError: Expected [Row[info=[ts=11] ]: c1=1, 
> c2=1 | [v=1 ts=11]] but got [Marker INCL_START_BOUND(1, 1)@10/1647704834, 
> Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11], Marker INCL_END_BOUND(1, 
> 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> but 
> was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, [[v=1 
> ts=11]], org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
>   at 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie(SinglePartitionSliceCommandTest.java:463)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Standard Output
> INFO  [main] 2022-03-19 15:51:43,646 YamlConfigurationLoader.java:103 - 
> Configuration location: 
> file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2022-03-19 15:51:43,653 YamlConfigurationLoader.java:124 - 
> Loading settings from file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> INFO  [main] 2022-03-19 15:51:43,971 Config.java:1119 - Node 
> configuration:[allocate_tokens_for_keyspace=null; 
> allocate_tokens_for_local_replication_factor=null; 
> allow_extra_insecure_udfs=false; all
> ...[truncated 192995 chars]...
> ome/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-37-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-39-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-38-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-40-big-Data.db:level=0,
>  ]
> {code}
> Failures can also be hit with CircleCI test multiplexer:
> [https://app.circleci.com/pipelines/github/adelapena/cassandra/1387/workflows/0f37a726-1dc2-4584-86f9-e99ecc40f551]
> CircleCI results show failures on three separate assertions, with a ~3% 
> flakiness.
> The same test looks ok in 4.0, as suggested by Butler and [this repeated 
> Circle 
> run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1388/workflows/6b69d654-3d19-4f2a-aeb9-dc405c6ddd2b].



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17584) Fix flaky test - org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions

2022-06-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550938#comment-17550938
 ] 

Andres de la Peña commented on CASSANDRA-17584:
---

Nice investigation, I agree that the problem is {{LoadBroadcaster}} modifying 
{{Gossiper#endpointStateMap}} in the background.

The failure can be reproduced locally by playing with the delays supplied by 
the call to {{scheduleWithFixedDelay}} that is made by 
{{LoadBroadcaster#startBroadcasting}}. It can also be reproduced without 
any changes with some more iterations in the multiplexer, as shown by 
[this 
run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1670/workflows/314282a7-2530-4494-aa98-01b67099c3c2/jobs/17534].

The proposed patch, which allows providing a custom endpoint state map to 
{{GossipInfoTable}}, looks good to me; I have only left a couple of minor 
suggestions on the PR.
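
As a side note, here is a minimal, JDK-only sketch of the idea behind the patch 
(all names are hypothetical stand-ins, not the actual Cassandra classes): the 
test hands the table an immutable snapshot of the endpoint state instead of the 
live map that the background broadcaster keeps mutating, so assertions read a 
stable view.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical stand-ins for Gossiper#endpointStateMap and GossipInfoTable.
public final class GossipSnapshotSketch
{
    // Live map that a background task (playing the role of LoadBroadcaster) keeps mutating.
    static final Map<String, String> liveEndpointState = new ConcurrentHashMap<>();

    // The "virtual table" reads whatever map its supplier hands back.
    static final class InfoTable
    {
        private final Supplier<Map<String, String>> stateSupplier;

        InfoTable(Supplier<Map<String, String>> stateSupplier)
        {
            this.stateSupplier = stateSupplier;
        }

        String lookup(String key)
        {
            return stateSupplier.get().get(key);
        }
    }

    public static void main(String[] args)
    {
        liveEndpointState.put("/127.0.0.1:7000/LOAD", "42.0");

        // Production wiring would pass the live map; the test passes an immutable copy,
        // so a concurrent LOAD update cannot flip an assertion mid-test.
        Map<String, String> frozen = Map.copyOf(liveEndpointState);
        InfoTable table = new InfoTable(() -> frozen);

        liveEndpointState.put("/127.0.0.1:7000/LOAD", "43.0"); // simulated background update

        // Still reports the snapshot value, regardless of what the broadcaster did meanwhile.
        System.out.println("LOAD seen by the table: " + table.lookup("/127.0.0.1:7000/LOAD"));
    }
}
{code}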

> Fix flaky test - 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions
> --
>
> Key: CASSANDRA-17584
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17584
> Project: Cassandra
>  Issue Type: Bug
>  Components: Feature/Virtual Tables
>Reporter: Brandon Williams
>Assignee: Stefan Miklosovic
>Priority: Normal
> Fix For: 4.1-beta, 4.x
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> h3. Error Message
> ['LOAD' is expected to be null] expected: but was:
> h3. Stacktrace
> junit.framework.AssertionFailedError: ['LOAD' is expected to be null] 
> expected: but was: at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.assertValue(GossipInfoTableTest.java:174)
>  at 
> org.apache.cassandra.db.virtual.GossipInfoTableTest.testSelectAllWithStateTransitions(GossipInfoTableTest.java:96)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17566) Fix flaky test - org.apache.cassandra.distributed.test.repair.ForceRepairTest.force

2022-06-07 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-17566:
-
  Fix Version/s: 4.1
 (was: 4.x)
  Since Version: 4.1-alpha
Source Control Link: 
https://github.com/apache/cassandra/commit/f809b6753cbbd27deab40679b99d956c8193fcf8
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

Committed, thanks.

> Fix flaky test - 
> org.apache.cassandra.distributed.test.repair.ForceRepairTest.force
> ---
>
> Key: CASSANDRA-17566
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17566
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/java
>Reporter: Brandon Williams
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1-beta, 4.1
>
>
> Seen on jenkins here: 
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1083/testReport/org.apache.cassandra.distributed.test.repair/ForceRepairTest/force_2/]
>  
> and circle here:
> https://app.circleci.com/pipelines/github/driftx/cassandra/440/workflows/42f936c7-2ede-4fbf-957c-5fb4e461dd90/jobs/5160/tests#failed-test-1
> {noformat}
> junit.framework.AssertionFailedError: nodetool command [repair, 
> distributed_test_keyspace, --force, --full] was not successful
> stdout:
> [2022-04-20 15:11:01,402] Starting repair command #2 
> (1701a090-c0bc-11ec-9898-07c796ce6a49), repairing keyspace 
> distributed_test_keyspace with repair options (parallelism: parallel, primary 
> range: false, incremental: false, job threads: 1, ColumnFamilies: [], 
> dataCenters: [], hosts: [], previewKind: NONE, # of ranges: 3, pull repair: 
> false, force repair: true, optimise streams: false, ignore unreplicated 
> keyspaces: false, repairPaxos: true, paxosOnly: false)
> [2022-04-20 15:11:11,406] Repair command #2 failed with error Did not get 
> replies from all endpoints.
> [2022-04-20 15:11:11,408] Repair command #2 finished with error
> stderr:
> error: Repair job has failed with the error message: Repair command #2 failed 
> with error Did not get replies from all endpoints.. Check the logs on the 
> repair participants for further details
> -- StackTrace --
> java.lang.RuntimeException: Repair job has failed with the error message: 
> Repair command #2 failed with error Did not get replies from all endpoints.. 
> Check the logs on the repair participants for further details
>   at 
> org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:137)
>   at 
> org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
>   at 
> javax.management.NotificationBroadcasterSupport.handleNotification(NotificationBroadcasterSupport.java:275)
>   at 
> javax.management.NotificationBroadcasterSupport$SendNotifJob.run(NotificationBroadcasterSupport.java:352)
>   at 
> org.apache.cassandra.concurrent.ExecutionFailure$1.run(ExecutionFailure.java:124)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at 
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17566) Fix flaky test - org.apache.cassandra.distributed.test.repair.ForceRepairTest.force

2022-06-07 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-17566:
-
Status: Ready to Commit  (was: Review In Progress)

> Fix flaky test - 
> org.apache.cassandra.distributed.test.repair.ForceRepairTest.force
> ---
>
> Key: CASSANDRA-17566
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17566
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/java
>Reporter: Brandon Williams
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1-beta, 4.x
>
>
> Seen on jenkins here: 
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1083/testReport/org.apache.cassandra.distributed.test.repair/ForceRepairTest/force_2/]
>  
> and circle here:
> https://app.circleci.com/pipelines/github/driftx/cassandra/440/workflows/42f936c7-2ede-4fbf-957c-5fb4e461dd90/jobs/5160/tests#failed-test-1
> {noformat}
> junit.framework.AssertionFailedError: nodetool command [repair, 
> distributed_test_keyspace, --force, --full] was not successful
> stdout:
> [2022-04-20 15:11:01,402] Starting repair command #2 
> (1701a090-c0bc-11ec-9898-07c796ce6a49), repairing keyspace 
> distributed_test_keyspace with repair options (parallelism: parallel, primary 
> range: false, incremental: false, job threads: 1, ColumnFamilies: [], 
> dataCenters: [], hosts: [], previewKind: NONE, # of ranges: 3, pull repair: 
> false, force repair: true, optimise streams: false, ignore unreplicated 
> keyspaces: false, repairPaxos: true, paxosOnly: false)
> [2022-04-20 15:11:11,406] Repair command #2 failed with error Did not get 
> replies from all endpoints.
> [2022-04-20 15:11:11,408] Repair command #2 finished with error
> stderr:
> error: Repair job has failed with the error message: Repair command #2 failed 
> with error Did not get replies from all endpoints.. Check the logs on the 
> repair participants for further details
> -- StackTrace --
> java.lang.RuntimeException: Repair job has failed with the error message: 
> Repair command #2 failed with error Did not get replies from all endpoints.. 
> Check the logs on the repair participants for further details
>   at 
> org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:137)
>   at 
> org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
>   at 
> javax.management.NotificationBroadcasterSupport.handleNotification(NotificationBroadcasterSupport.java:275)
>   at 
> javax.management.NotificationBroadcasterSupport$SendNotifJob.run(NotificationBroadcasterSupport.java:352)
>   at 
> org.apache.cassandra.concurrent.ExecutionFailure$1.run(ExecutionFailure.java:124)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at 
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] 01/01: Merge branch 'cassandra-4.1' into trunk

2022-06-07 Thread brandonwilliams
This is an automated email from the ASF dual-hosted git repository.

brandonwilliams pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 96d0d658a59245994437ac5d975e2228f446373b
Merge: 81f2aeb020 f809b6753c
Author: Brandon Williams 
AuthorDate: Tue Jun 7 05:45:25 2022 -0500

Merge branch 'cassandra-4.1' into trunk

 .../distributed/test/repair/ForceRepairTest.java| 21 +
 1 file changed, 21 insertions(+)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] branch cassandra-4.1 updated: Ensure node2 is down before repairing

2022-06-07 Thread brandonwilliams
This is an automated email from the ASF dual-hosted git repository.

brandonwilliams pushed a commit to branch cassandra-4.1
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cassandra-4.1 by this push:
 new f809b6753c Ensure node2 is down before repairing
f809b6753c is described below

commit f809b6753cbbd27deab40679b99d956c8193fcf8
Author: Brandon Williams 
AuthorDate: Fri Jun 3 19:55:05 2022 -0500

Ensure node2 is down before repairing

Patch by brandonwilliams; reviewed by dcapwell for CASSANDRA-17566
---
 .../distributed/test/repair/ForceRepairTest.java| 21 +
 1 file changed, 21 insertions(+)

diff --git 
a/test/distributed/org/apache/cassandra/distributed/test/repair/ForceRepairTest.java
 
b/test/distributed/org/apache/cassandra/distributed/test/repair/ForceRepairTest.java
index 479dac3183..946c41e0e2 100644
--- 
a/test/distributed/org/apache/cassandra/distributed/test/repair/ForceRepairTest.java
+++ 
b/test/distributed/org/apache/cassandra/distributed/test/repair/ForceRepairTest.java
@@ -18,10 +18,13 @@
 package org.apache.cassandra.distributed.test.repair;
 
 import java.io.IOException;
+import java.net.UnknownHostException;
 import java.util.Arrays;
 import java.util.List;
+import java.util.concurrent.TimeUnit;
 import java.util.stream.Collectors;
 
+import com.google.common.util.concurrent.Uninterruptibles;
 import org.apache.commons.lang3.ArrayUtils;
 import org.junit.Test;
 
@@ -37,10 +40,13 @@ import 
org.apache.cassandra.distributed.api.SimpleQueryResult;
 import org.apache.cassandra.distributed.shared.AssertUtils;
 import org.apache.cassandra.distributed.shared.ClusterUtils;
 import org.apache.cassandra.distributed.test.TestBaseImpl;
+import org.apache.cassandra.gms.FailureDetector;
 import org.apache.cassandra.io.sstable.format.SSTableReader;
 import org.apache.cassandra.io.sstable.metadata.StatsMetadata;
+import org.apache.cassandra.locator.InetAddressAndPort;
 import org.apache.cassandra.schema.Schema;
 import org.apache.cassandra.schema.TableMetadata;
+import org.apache.cassandra.utils.FBUtilities;
 import org.assertj.core.api.Assertions;
 
 public class ForceRepairTest extends TestBaseImpl
@@ -75,7 +81,22 @@ public class ForceRepairTest extends TestBaseImpl
 for (int i = 0; i < 10; i++)
 cluster.coordinator(1).execute(withKeyspace("INSERT INTO 
%s.tbl (k,v) VALUES (?, ?) USING TIMESTAMP ?"), ConsistencyLevel.ALL, i, i, 
nowInMicro++);
 
+String downAddress = cluster.get(2).callOnInstance(() -> 
FBUtilities.getBroadcastAddressAndPort().getHostAddressAndPort());
 ClusterUtils.stopUnchecked(cluster.get(2));
+cluster.get(1).runOnInstance(() -> {
+InetAddressAndPort neighbor;
+try
+{
+neighbor = InetAddressAndPort.getByName(downAddress);
+}
+catch (UnknownHostException e)
+{
+throw new RuntimeException(e);
+}
+while (FailureDetector.instance.isAlive(neighbor))
+Uninterruptibles.sleepUninterruptibly(500, 
TimeUnit.MILLISECONDS);
+});
+
 
 // repair should fail because node2 is down
 IInvokableInstance node1 = cluster.get(1);


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] branch trunk updated (81f2aeb020 -> 96d0d658a5)

2022-06-07 Thread brandonwilliams
This is an automated email from the ASF dual-hosted git repository.

brandonwilliams pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


from 81f2aeb020 Merge branch 'cassandra-4.1' into trunk
 new f809b6753c Ensure node2 is down before repairing
 new 96d0d658a5 Merge branch 'cassandra-4.1' into trunk

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../distributed/test/repair/ForceRepairTest.java| 21 +
 1 file changed, 21 insertions(+)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17458) Test Failure: org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie

2022-06-07 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550921#comment-17550921
 ] 

Ekaterina Dimitrova commented on CASSANDRA-17458:
-

Agreed, +1

> Test Failure: 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie
> ---
>
> Key: CASSANDRA-17458
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17458
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Andres de la Peña
>Assignee: Sathyanarayanan Saravanamuthu
>Priority: Normal
>  Labels: patch-available
> Fix For: 4.1-beta, 4.x
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Intermittent failure on 
> {{org.apache.cassandra.db.SinglePartitionSliceCommandTest#testPartitionDeletionRangeDeletionTie}}
>  for trunk:
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1024/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1018/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> {code:java}
> Failed 1 times in the last 11 runs. Flakiness: 10%, Stability: 90%
> Error Message
> Expected [Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11]] but got [Marker 
> INCL_START_BOUND(1, 1)@10/1647704834, Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 
> ts=11], Marker INCL_END_BOUND(1, 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> 
> but was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, 
> [[v=1 ts=11]], 
> org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
> Stacktrace
> junit.framework.AssertionFailedError: Expected [Row[info=[ts=11] ]: c1=1, 
> c2=1 | [v=1 ts=11]] but got [Marker INCL_START_BOUND(1, 1)@10/1647704834, 
> Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11], Marker INCL_END_BOUND(1, 
> 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> but 
> was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, [[v=1 
> ts=11]], org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
>   at 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie(SinglePartitionSliceCommandTest.java:463)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Standard Output
> INFO  [main] 2022-03-19 15:51:43,646 YamlConfigurationLoader.java:103 - 
> Configuration location: 
> file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2022-03-19 15:51:43,653 YamlConfigurationLoader.java:124 - 
> Loading settings from file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> INFO  [main] 2022-03-19 15:51:43,971 Config.java:1119 - Node 
> configuration:[allocate_tokens_for_keyspace=null; 
> allocate_tokens_for_local_replication_factor=null; 
> allow_extra_insecure_udfs=false; all
> ...[truncated 192995 chars]...
> ome/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-37-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-39-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-38-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-40-big-Data.db:level=0,
>  ]
> {code}
> Failures can also be hit with CircleCI test multiplexer:
> [https://app.circleci.com/pipelines/github/adelapena/cassandra/1387/workflows/0f37a726-1dc2-4584-86f9-e99ecc40f551]
> CircleCI results show failures on three separate assertions, with a ~3% 
> flakiness.
> The same test looks ok in 4.0, as suggested by Butler and [this repeated 
> Circle 
> run|https://app.circleci.com/pipelines/github/adelapena/cassandra/1388/workflows/6b69d654-3d19-4f2a-aeb9-dc405c6ddd2b].



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17669) CentOS/RHEL installation requires JRE not available in Java 11

2022-06-07 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550917#comment-17550917
 ] 

Brandon Williams commented on CASSANDRA-17669:
--

That isn't a package, but what any package needs to "provide" to satisfy the 
management system.  There are a few options there depending on distro: 
https://rpmfind.net/linux/rpm2html/search.php?query=jre-11
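
For reference, a couple of ways to check which packages satisfy that capability 
on a given distro (output varies; the capability name here just follows the 
search above and is only illustrative):
{noformat}
$ dnf repoquery --whatprovides 'jre-11'
$ rpm -q --whatprovides jre-11
{noformat}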

> CentOS/RHEL installation requires JRE not available in Java 11
> --
>
> Key: CASSANDRA-17669
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17669
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies
>Reporter: Erick Ramirez
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.1.x
>
>
> h2. Background
> A user [reported on Stack 
> Overflow|https://stackoverflow.com/questions/72377621/] and the DataStax 
> Developers [dtsx.io/discord|https://dtsx.io/discord] an issue with installing 
> Cassandra when only Java 11 is installed.
> h2. Symptoms
> Attempts to install Cassandra using YUM requires Java 8:
> {noformat}
> $ sudo yum install cassandra
> Dependencies resolved.
> 
>  Package  Architecture
> Version  Repository  
> Size
> 
> Installing:
>  cassandranoarch  
> 4.0.4-1  cassandra   
> 45 M
> Installing dependencies:
>  java-1.8.0-openjdk   x86_64  
> 1:1.8.0.312.b07-2.el8_5  appstream  
> 341 k
>  java-1.8.0-openjdk-headless  x86_64  
> 1:1.8.0.312.b07-2.el8_5  appstream   
> 34 M
> Installing weak dependencies:
>  gtk2 x86_64  
> 2.24.32-5.el8appstream  
> 3.4 M
> Transaction Summary
> 
> Install  4 Packages
> {noformat}
> Similarly, attempts to install the RPM results in:
> {noformat}
> $ sudo rpm -i cassandra-4.0.4-1.noarch.rpm 
> warning: cassandra-4.0.4-1.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 
> 7e3e87cb: NOKEY
> error: Failed dependencies:
>   jre >= 1.8.0 is needed by cassandra-4.0.4-1.noarch{noformat}
> h2. Root cause
> Package installs on CentOS and RHEL platforms has [a dependency on JRE 
> 1.8+|https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L49]:
> {noformat}
> Requires:  jre >= 1.8.0{noformat}
> However, JRE is no longer available in Java 11. From the [JDK 11 release 
> notes|https://www.oracle.com/java/technologies/javase/11-relnote-issues.html]:
> {quote}In this release, the JRE or Server JRE is no longer offered. Only the 
> JDK is offered.
> {quote}
> h2. Workaround
> Override the dependency check when installing the RPM with the {{--nodeps}} 
> option:
> {noformat}
> $ sudo rpm --nodeps -i cassandra-4.0.4-1.noarch.rpm {noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17458) Test Failure: org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie

2022-06-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550918#comment-17550918
 ] 

Andres de la Peña commented on CASSANDRA-17458:
---

That last run has hit two dtest failures. I can't find these test failures in 
Butler, but they can be reproduced in the base branches with the multiplexer:
|4.1|test_optimized_primary_range_repair|[multiplexer|https://app.circleci.com/pipelines/github/adelapena/cassandra/1666/workflows/6f925be1-c0df-4b2a-83e0-4612a46f32bd]|
|trunk|TestPreviewRepair::test_preview|[multiplexer|https://app.circleci.com/pipelines/github/adelapena/cassandra/1667/workflows/60ba0ade-7e4e-4728-a7ff-3872f2a1903c]|

In both cases the flakiness is below 1%. I'll open tickets reporting these 
flaky tests. I think we can still commit this fix despite these unrelated 
failures.

> Test Failure: 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie
> ---
>
> Key: CASSANDRA-17458
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17458
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Andres de la Peña
>Assignee: Sathyanarayanan Saravanamuthu
>Priority: Normal
>  Labels: patch-available
> Fix For: 4.1-beta, 4.x
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Intermittent failure on 
> {{org.apache.cassandra.db.SinglePartitionSliceCommandTest#testPartitionDeletionRangeDeletionTie}}
>  for trunk:
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1024/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> [https://ci-cassandra.apache.org/job/Cassandra-trunk/1018/testReport/org.apache.cassandra.db/SinglePartitionSliceCommandTest/testPartitionDeletionRangeDeletionTie/]
> {code:java}
> Failed 1 times in the last 11 runs. Flakiness: 10%, Stability: 90%
> Error Message
> Expected [Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11]] but got [Marker 
> INCL_START_BOUND(1, 1)@10/1647704834, Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 
> ts=11], Marker INCL_END_BOUND(1, 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> 
> but was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, 
> [[v=1 ts=11]], 
> org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
> Stacktrace
> junit.framework.AssertionFailedError: Expected [Row[info=[ts=11] ]: c1=1, 
> c2=1 | [v=1 ts=11]] but got [Marker INCL_START_BOUND(1, 1)@10/1647704834, 
> Row[info=[ts=11] ]: c1=1, c2=1 | [v=1 ts=11], Marker INCL_END_BOUND(1, 
> 1)@10/1647704834] expected:<[[[v=1 ts=11]]]> but 
> was:<[org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@3db1ed73, [[v=1 
> ts=11]], org.apache.cassandra.db.rows.RangeTombstoneBoundMarker@1ea92553]>
>   at 
> org.apache.cassandra.db.SinglePartitionSliceCommandTest.testPartitionDeletionRangeDeletionTie(SinglePartitionSliceCommandTest.java:463)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Standard Output
> INFO  [main] 2022-03-19 15:51:43,646 YamlConfigurationLoader.java:103 - 
> Configuration location: 
> file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> DEBUG [main] 2022-03-19 15:51:43,653 YamlConfigurationLoader.java:124 - 
> Loading settings from file:/home/cassandra/cassandra/test/conf/cassandra.yaml
> INFO  [main] 2022-03-19 15:51:43,971 Config.java:1119 - Node 
> configuration:[allocate_tokens_for_keyspace=null; 
> allocate_tokens_for_local_replication_factor=null; 
> allow_extra_insecure_udfs=false; all
> ...[truncated 192995 chars]...
> ome/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-37-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-39-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-38-big-Data.db:level=0,
>  
> /home/cassandra/cassandra/build/test/cassandra/data/system/local-7ad54392bcdd35a684174e047860b377/nb-40-big-Data.db:level=0,
>  ]
> {code}
> Failures can also be hit with CircleCI test multiplexer:
> [https://app.circleci.com/pipelines/github/adelapena/cassandra/1387/workflows/0f37a726-1dc2-4584-86f9-e99ecc40f551]
> CircleCI results show failures on three separate assertions, with a ~3% 
> flakiness.
> The same test looks ok in 4.0, as suggested by Butler and [this repeated 
> Circle 
> 

[jira] [Commented] (CASSANDRA-17380) Add support for EXPLAIN statements

2022-06-07 Thread maxwellguo (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550913#comment-17550913
 ] 

maxwellguo commented on CASSANDRA-17380:


I just read the doc and have started to read the related code. Then a 
discussion will start.

> Add support for EXPLAIN statements
> --
>
> Key: CASSANDRA-17380
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17380
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Lerer
>Assignee: maxwellguo
>Priority: Normal
>  Labels: gsoc, gsoc2022
>
> We should provide users a way to understand how their query will be executed 
> and some information on the amount of work that will be performed.
> Explain statements are the most common way to do that.
> A CEP Draft has been open for that: [(DRAFT) CEP-4: 
> Explain|https://docs.google.com/document/d/1s_gc4TDYdDbHnYHHVxxjqVVUn3MONUqG6W2JehnC11g/edit].
>  This draft propose to add support for {{EXPLAIN}} and {{EXPLAIN ANALYZE}} 
> but I believe that we should split the work in 2 parts because a simple 
> {{EXPLAIN}} would already provide relevant information.
> To complete this work I believe that the following steps will be required:
> * Rework and submit the CEP
> * Add missing statistics
> * Implements the logic behind the EXPLAIN statements



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17380) Add support for EXPLAIN statements

2022-06-07 Thread Ruslan Fomkin (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550881#comment-17550881
 ] 

Ruslan Fomkin commented on CASSANDRA-17380:
---

What is the status of the CEP? I see that it is not open for discussion, but the 
last comment from [~maxwellguo] confuses me. I wonder why it is suggested 
to call the operator EXPLAIN, which is a well-understood operator in databases 
for returning query execution plans. When and where can it be discussed?

> Add support for EXPLAIN statements
> --
>
> Key: CASSANDRA-17380
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17380
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Benjamin Lerer
>Assignee: maxwellguo
>Priority: Normal
>  Labels: gsoc, gsoc2022
>
> We should provide users a way to understand how their query will be executed 
> and some information on the amount of work that will be performed.
> Explain statements are the most common way to do that.
> A CEP Draft has been open for that: [(DRAFT) CEP-4: 
> Explain|https://docs.google.com/document/d/1s_gc4TDYdDbHnYHHVxxjqVVUn3MONUqG6W2JehnC11g/edit].
>  This draft propose to add support for {{EXPLAIN}} and {{EXPLAIN ANALYZE}} 
> but I believe that we should split the work in 2 parts because a simple 
> {{EXPLAIN}} would already provide relevant information.
> To complete this work I believe that the following steps will be required:
> * Rework and submit the CEP
> * Add missing statistics
> * Implements the logic behind the EXPLAIN statements



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14715) Read repairs can result in bogus timeout errors to the client

2022-06-07 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550869#comment-17550869
 ] 

Stefan Miklosovic commented on CASSANDRA-14715:
---

I am on it.

> Read repairs can result in bogus timeout errors to the client
> -
>
> Key: CASSANDRA-14715
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14715
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
>Reporter: Cameron Zemek
>Assignee: Stefan Miklosovic
>Priority: Low
>
> In RepairMergeListener:close() it does the following:
>  
> {code:java}
> try
> {
> FBUtilities.waitOnFutures(repairResults, 
> DatabaseDescriptor.getWriteRpcTimeout());
> }
> catch (TimeoutException ex)
> {
> // We got all responses, but timed out while repairing
> int blockFor = consistency.blockFor(keyspace);
> if (Tracing.isTracing())
> Tracing.trace("Timed out while read-repairing after receiving all {} 
> data and digest responses", blockFor);
> else
> logger.debug("Timeout while read-repairing after receiving all {} 
> data and digest responses", blockFor);
> throw new ReadTimeoutException(consistency, blockFor-1, blockFor, true);
> }
> {code}
> This propagates up and gets sent to the client and we have customers get 
> confused cause they see timeouts for CL ALL requiring ALL replicas even 
> though they have read_repair_chance = 0 and using a LOCAL_* CL.
> At minimum I suggest instead of using the consistency level of DataResolver 
> (which is always ALL with read repairs) for the timeout it instead use 
> repairResults.size(). That is blockFor = repairResults.size() . But saying it 
> received _blockFor - 1_ is bogus still. Fixing that would require more 
> changes. I was thinking maybe like so:
>  
> {code:java}
> public static void waitOnFutures(List<AsyncOneResponse> results, long ms, 
> MutableInt counter) throws TimeoutException
> {
> for (AsyncOneResponse result : results)
> {
> result.get(ms, TimeUnit.MILLISECONDS);
> counter.increment();
> }
> }
> {code}
>  
>  
>  
> Likewise in SinglePartitionReadLifecycle:maybeAwaitFullDataRead() it says 
> _blockFor - 1_ for how many were received, which is also bogus.
>  
> Steps used to reproduce was modify RepairMergeListener:close() to always 
> throw timeout exception.  With schema:
> {noformat}
> CREATE KEYSPACE weather WITH replication = {'class': 
> 'NetworkTopologyStrategy', 'dc1': '3', 'dc2': '3'}  AND durable_writes = true;
> CREATE TABLE weather.city (
> cityid int PRIMARY KEY,
> name text
> ) WITH bloom_filter_fp_chance = 0.01
> AND dclocal_read_repair_chance = 0.0
> AND read_repair_chance = 0.0
> AND speculative_retry = 'NONE';
> {noformat}
> Then using the following steps:
>  # ccm node1 cqlsh
>  # INSERT INTO weather.city(cityid, name) VALUES (1, 'Canberra');
>  # exit;
>  # ccm node1 flush
>  # ccm node1 stop
>  # rm -rf 
> ~/.ccm/test_repair/node1/data0/weather/city-ff2fade0b18d11e8b1cd097acbab1e3d/mc-1-big-*
>  # remove the sstable with the insert
>  # ccm node1 start
>  # ccm node1 cqlsh
>  # CONSISTENCY LOCAL_QUORUM;
>  # select * from weather.city where cityid = 1;
> You get result of:
> {noformat}
> ReadTimeout: Error from server: code=1200 [Coordinator node timed out waiting 
> for replica nodes' responses] message="Operation timed out - received only 5 
> responses." info={'received_responses': 5, 'required_responses': 6, 
> 'consistency': 'ALL'}{noformat}
> But was expecting:
> {noformat}
> ReadTimeout: Error from server: code=1200 [Coordinator node timed out waiting 
> for replica nodes' responses] message="Operation timed out - received only 1 
> responses." info={'received_responses': 1, 'required_responses': 2, 
> 'consistency': 'LOCAL_QUORUM'}{noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17669) CentOS/RHEL installation requires JRE not available in Java 11

2022-06-07 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550866#comment-17550866
 ] 

Berenguer Blasi commented on CASSANDRA-17669:
-

Where did you see the jre-11 for rpms? I only found builds for Windows

> CentOS/RHEL installation requires JRE not available in Java 11
> --
>
> Key: CASSANDRA-17669
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17669
> Project: Cassandra
>  Issue Type: Bug
>  Components: Dependencies
>Reporter: Erick Ramirez
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x, 4.1-beta, 4.1.x
>
>
> h2. Background
> A user [reported on Stack 
> Overflow|https://stackoverflow.com/questions/72377621/] and the DataStax 
> Developers [dtsx.io/discord|https://dtsx.io/discord] an issue with installing 
> Cassandra when only Java 11 is installed.
> h2. Symptoms
> Attempts to install Cassandra using YUM requires Java 8:
> {noformat}
> $ sudo yum install cassandra
> Dependencies resolved.
> 
>  Package  Architecture
> Version  Repository  
> Size
> 
> Installing:
>  cassandranoarch  
> 4.0.4-1  cassandra   
> 45 M
> Installing dependencies:
>  java-1.8.0-openjdk   x86_64  
> 1:1.8.0.312.b07-2.el8_5  appstream  
> 341 k
>  java-1.8.0-openjdk-headless  x86_64  
> 1:1.8.0.312.b07-2.el8_5  appstream   
> 34 M
> Installing weak dependencies:
>  gtk2 x86_64  
> 2.24.32-5.el8appstream  
> 3.4 M
> Transaction Summary
> 
> Install  4 Packages
> {noformat}
> Similarly, attempts to install the RPM results in:
> {noformat}
> $ sudo rpm -i cassandra-4.0.4-1.noarch.rpm 
> warning: cassandra-4.0.4-1.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 
> 7e3e87cb: NOKEY
> error: Failed dependencies:
>   jre >= 1.8.0 is needed by cassandra-4.0.4-1.noarch{noformat}
> h2. Root cause
> Package installs on CentOS and RHEL platforms has [a dependency on JRE 
> 1.8+|https://github.com/apache/cassandra/blob/trunk/redhat/cassandra.spec#L49]:
> {noformat}
> Requires:  jre >= 1.8.0{noformat}
> However, JRE is no longer available in Java 11. From the [JDK 11 release 
> notes|https://www.oracle.com/java/technologies/javase/11-relnote-issues.html]:
> {quote}In this release, the JRE or Server JRE is no longer offered. Only the 
> JDK is offered.
> {quote}
> h2. Workaround
> Override the dependency check when installing the RPM with the {{--nodeps}} 
> option:
> {noformat}
> $ sudo rpm --nodeps -i cassandra-4.0.4-1.noarch.rpm {noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17670) Flaky CompactStorageTest

2022-06-07 Thread Berenguer Blasi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Berenguer Blasi updated CASSANDRA-17670:

  Since Version: 4.1
Source Control Link: 
https://github.com/apache/cassandra/commit/3dc30eb45ef52368520102f471d53061676e72cc
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> Flaky CompactStorageTest
> 
>
> Key: CASSANDRA-17670
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17670
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Berenguer Blasi
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.1.x, 4.x
>
>
> CompactStorageTest has been showing flaky behavior mainly due to timeouts 
> such as 
> [here|https://ci-cassandra.apache.org/job/Cassandra-4.1/43/testReport/org.apache.cassandra.cql3.validation.operations/CompactStorageTest/testAlterWithCompactNonStaticFormat/]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] 01/01: Merge branch 'cassandra-4.1' into trunk

2022-06-07 Thread bereng
This is an automated email from the ASF dual-hosted git repository.

bereng pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 81f2aeb02062f7ca527bdd918516692600289e15
Merge: 6247c9d966 3dc30eb45e
Author: Bereng 
AuthorDate: Tue Jun 7 09:20:07 2022 +0200

Merge branch 'cassandra-4.1' into trunk

 .../operations/CompactStorageSplit1Test.java   | 2400 
 ...rageTest.java => CompactStorageSplit2Test.java} | 2383 +--
 2 files changed, 2407 insertions(+), 2376 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] branch trunk updated (6247c9d966 -> 81f2aeb020)

2022-06-07 Thread bereng
This is an automated email from the ASF dual-hosted git repository.

bereng pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git


from 6247c9d966 jvm-dtest upgrade tests run all supported pairs of upgrades 
between from/to but does not actually test all patches from/to
 new 3dc30eb45e Flaky CompactStorageTest
 new 81f2aeb020 Merge branch 'cassandra-4.1' into trunk

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../operations/CompactStorageSplit1Test.java   | 2400 
 ...rageTest.java => CompactStorageSplit2Test.java} | 2383 +--
 2 files changed, 2407 insertions(+), 2376 deletions(-)
 create mode 100644 
test/unit/org/apache/cassandra/cql3/validation/operations/CompactStorageSplit1Test.java
 rename 
test/unit/org/apache/cassandra/cql3/validation/operations/{CompactStorageTest.java
 => CompactStorageSplit2Test.java} (56%)


-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17677) Fix BulkLoader to load entireSSTableThrottle and entireSSTableInterDcThrottle

2022-06-07 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550838#comment-17550838
 ] 

Ekaterina Dimitrova commented on CASSANDRA-17677:
-

{quote}There seem to be some 
[utests_system_keyspace_directory1597|https://app.circleci.com/pipelines/github/frankgh/cassandra/76/workflows/baf37eb4-aee5-400b-b08f-20f7972088ab/jobs/1597]
 unrelated failing tests.
{quote}
I can confirm that those are known failures - CASSANDRA-17489
 

> Fix BulkLoader to load  entireSSTableThrottle and entireSSTableInterDcThrottle
> --
>
> Key: CASSANDRA-17677
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17677
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/bulk load
>Reporter: Ekaterina Dimitrova
>Assignee: Francisco Guerrero
>Priority: Normal
> Fix For: 4.1-beta, 4.1.x, 4.x
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{entire_sstable_stream_throughput_outbound}} and 
> {{entire_sstable_inter_dc_stream_throughput_outbound}} were introduced in 
> CASSANDRA-17065. They were added to the LoaderOptions class but they are not 
> loaded in BulkLoader as {{throttle}} and {{interDcThrottle}} are. As part 
> of this ticket we need to fix the BulkLoader; also those properties should be 
> advertised as MiB/s, not megabits/s. This was not changed in CASSANDRA-15234 
> for the bulk loader because those are not loaded and those variables in 
> LoaderOptions are disconnected from the Cassandra config parameters and 
> unused at the moment.
> It will be good also to update the doc here - 
> [https://cassandra.apache.org/doc/latest/cassandra/operating/bulk_loading.html,|https://cassandra.apache.org/doc/latest/cassandra/operating/bulk_loading.html]
> and add a test that those are loaded properly when used with the 
> BulkLoader.
> CC [~frankgh]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17664) Retry failed stage jobs in jenkins

2022-06-07 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-17664:
---
Resolution: (was: Fixed)
Status: Open  (was: Resolved)

Retry is working well for artifact build failures, but not the other failures 
(because of `propagate: false`)

> Retry failed stage jobs in jenkins
> --
>
> Key: CASSANDRA-17664
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17664
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CI
>Reporter: Michael Semb Wever
>Assignee: Michael Semb Wever
>Priority: Normal
> Fix For: 2.2.20, 3.0.28, 3.11.14, 4.0.5, 4.1, 4.2
>
>
> To avoid failing pipeline builds on CI infrastructure faults (disks, network, 
> etc), retry stage jobs three times before marking them as FAILURE.
> Intention is not to retry on UNSTABLE (tests failing).
> This has already been done (and tested) for devbranch pipeline (pre-commit) 
> [here|https://github.com/apache/cassandra-builds/pull/72].



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17625) Test Failure: dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from Cassandra dtests)

2022-06-07 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17550812#comment-17550812
 ] 

Berenguer Blasi commented on CASSANDRA-17625:
-

[~jmckenzie] apologies for the late reply. I've been OOO.

Yes, a year ago I was proposing raising and/or reworking timeouts. I did create a 
branch and ran some tests, and everything looked good. I think it would indeed 
be a good idea to raise that point once CI is back on track.

Regarding this particular ticket, I am not sure if you're saying you've 
reviewed, +1'ed, and are happy for me to merge?

> Test Failure: 
> dtest-offheap.auth_test.TestAuth.test_system_auth_ks_is_alterable (from 
> Cassandra dtests)
> ---
>
> Key: CASSANDRA-17625
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17625
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/dtest/python
>Reporter: Josh McKenzie
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.1-beta, 4.1.x, 4.x
>
>
> Flaked a couple times on 4.1
> {code}
> Error Message
> cassandra.DriverException: Keyspace metadata was not refreshed. See log for 
> details.
> {code}
> https://ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/
> Nightlies archive if above dropped: 
> https://nightlies.apache.org/cassandra/ci-cassandra.apache.org/job/Cassandra-4.1/14/testReport/dtest-offheap.auth_test/TestAuth/test_system_auth_ks_is_alterable/



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org