[jira] [Comment Edited] (CASSANDRA-19448) CommitlogArchiver only has granularity to seconds for restore_point_in_time
[ https://issues.apache.org/jira/browse/CASSANDRA-19448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835164#comment-17835164 ] Maxwell Guo edited comment on CASSANDRA-19448 at 4/9/24 6:27 AM:
-
[~brandon.williams] I fixed the failure and removed the precision you mentioned. But the CI may be slow because of limited resources. :(
||Branch||CI||
|[4.0|https://github.com/apache/cassandra/pull/3238]| [java8|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/571/workflows/bb88be9e-4745-4ed0-b7d1-a101cd76c913] [java11|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/571/workflows/dfb90672-e5a8-44bf-9d14-e3d5e5bb3934]|
|[4.1|https://github.com/apache/cassandra/pull/3237]| [java8|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/570/workflows/c9d44874-9ddd-4618-b464-136c8c94e3f7] [java11|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/570/workflows/dc8e1c13-84c8-4032-9a03-34f7220a7379]|
|[5.0|https://github.com/apache/cassandra/pull/3236]| [java11|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/569/workflows/2221edd7-60bc-4c5d-bccc-a6bea1abc7ba] [java17|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/569/workflows/6b94b590-bdbf-4250-a7f7-f6b78c38292d]|
|[trunk|https://github.com/apache/cassandra/pull/3215]| [java11|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/568/workflows/12d9bbbd-fb19-477d-92e0-5534e0da652e] [java17|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/568/workflows/5aad317f-cf3b-411a-8c68-8910cb52d005]|

was (Author: maxwellguo):
[~brandon.williams] I fixed the failure and removed the precision you mentioned.
||Branch||CI||
|[4.0|https://github.com/apache/cassandra/pull/3238]| [java8|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/571/workflows/bb88be9e-4745-4ed0-b7d1-a101cd76c913] [java11|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/571/workflows/dfb90672-e5a8-44bf-9d14-e3d5e5bb3934]|
|[4.1|https://github.com/apache/cassandra/pull/3237]| [java8|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/570/workflows/c9d44874-9ddd-4618-b464-136c8c94e3f7] [java11|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/570/workflows/dc8e1c13-84c8-4032-9a03-34f7220a7379]|
|[5.0|https://github.com/apache/cassandra/pull/3236]| [java11|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/569/workflows/2221edd7-60bc-4c5d-bccc-a6bea1abc7ba] [java17|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/569/workflows/6b94b590-bdbf-4250-a7f7-f6b78c38292d]|
|[trunk|https://github.com/apache/cassandra/pull/3215]| [java11|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/568/workflows/12d9bbbd-fb19-477d-92e0-5534e0da652e] [java17|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/568/workflows/5aad317f-cf3b-411a-8c68-8910cb52d005]|

> CommitlogArchiver only has granularity to seconds for restore_point_in_time
> ---
>
> Key: CASSANDRA-19448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19448
> Project: Cassandra
> Issue Type: Bug
> Components: Local/Commit Log
> Reporter: Jeremy Hanna
> Assignee: Maxwell Guo
> Priority: Normal
> Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Commitlog archiver allows users to back up commitlog files for the purpose of
> doing point-in-time restores.
> The [configuration file|https://github.com/apache/cassandra/blob/trunk/conf/commitlog_archiving.properties]
> gives an example down to seconds granularity but then asks whether the
> timestamps are microseconds or milliseconds - defaulting to microseconds.
> Because the [CommitLogArchiver uses a second-based date format|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogArchiver.java#L52],
> if a user specifies a restore point at a finer granularity such as milliseconds
> or microseconds, it will truncate everything after the second and restore to
> that second. So say you specify a restore_point_in_time like this:
> restore_point_in_time=2024:01:18 17:01:01.623392
> it will silently truncate everything after the 01 seconds. So effectively, to
> the user, it is missing updates between 01 and 01.623392.
> This appears to be a bug in the intent. We should allow users to specify down
> to the millisecond or even microsecond level. If we allow them to specify down
> to microseconds for the restore point in time, then the internal representation
> may need to change from a long.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
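To illustrate the truncation described above, the following standalone sketch (the class name and the property value are examples only; this is not the project's actual parsing code) shows how a second-granularity {{SimpleDateFormat}} silently ignores the fractional part of a restore_point_in_time value:

{code:java}
import java.text.SimpleDateFormat;
import java.util.Date;

public class RestorePointTruncationExample
{
    public static void main(String[] args) throws Exception
    {
        // Second-granularity pattern, in the same style as the commitlog_archiving.properties examples
        SimpleDateFormat format = new SimpleDateFormat("yyyy:MM:dd HH:mm:ss");

        // DateFormat.parse(String) stops once the pattern is satisfied,
        // so the trailing ".623392" is silently dropped.
        Date restorePoint = format.parse("2024:01:18 17:01:01.623392");

        // Prints the epoch millis of 2024-01-18 17:01:01.000 (local time zone);
        // any mutation between .000 and .623392 falls after the restored point.
        System.out.println(restorePoint.getTime());
    }
}
{code}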
[jira] [Commented] (CASSANDRA-19448) CommitlogArchiver only has granularity to seconds for restore_point_in_time
[ https://issues.apache.org/jira/browse/CASSANDRA-19448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835164#comment-17835164 ] Maxwell Guo commented on CASSANDRA-19448:
-
[~brandon.williams] I fixed the failure and removed the precision you mentioned.
||Branch||CI||
|[4.0|https://github.com/apache/cassandra/pull/3238]| [java8|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/571/workflows/bb88be9e-4745-4ed0-b7d1-a101cd76c913] [java11|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/571/workflows/dfb90672-e5a8-44bf-9d14-e3d5e5bb3934]|
|[4.1|https://github.com/apache/cassandra/pull/3237]| [java8|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/570/workflows/c9d44874-9ddd-4618-b464-136c8c94e3f7] [java11|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/570/workflows/dc8e1c13-84c8-4032-9a03-34f7220a7379]|
|[5.0|https://github.com/apache/cassandra/pull/3236]| [java11|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/569/workflows/2221edd7-60bc-4c5d-bccc-a6bea1abc7ba] [java17|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/569/workflows/6b94b590-bdbf-4250-a7f7-f6b78c38292d]|
|[trunk|https://github.com/apache/cassandra/pull/3215]| [java11|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/568/workflows/12d9bbbd-fb19-477d-92e0-5534e0da652e] [java17|https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/568/workflows/5aad317f-cf3b-411a-8c68-8910cb52d005]|

> CommitlogArchiver only has granularity to seconds for restore_point_in_time
> ---
>
> Key: CASSANDRA-19448
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19448
> Project: Cassandra
> Issue Type: Bug
> Components: Local/Commit Log
> Reporter: Jeremy Hanna
> Assignee: Maxwell Guo
> Priority: Normal
> Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Commitlog archiver allows users to back up commitlog files for the purpose of
> doing point-in-time restores. The [configuration file|https://github.com/apache/cassandra/blob/trunk/conf/commitlog_archiving.properties]
> gives an example down to seconds granularity but then asks whether the
> timestamps are microseconds or milliseconds - defaulting to microseconds.
> Because the [CommitLogArchiver uses a second-based date format|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogArchiver.java#L52],
> if a user specifies a restore point at a finer granularity such as milliseconds
> or microseconds, it will truncate everything after the second and restore to
> that second. So say you specify a restore_point_in_time like this:
> restore_point_in_time=2024:01:18 17:01:01.623392
> it will silently truncate everything after the 01 seconds. So effectively, to
> the user, it is missing updates between 01 and 01.623392.
> This appears to be a bug in the intent. We should allow users to specify down
> to the millisecond or even microsecond level. If we allow them to specify down
> to microseconds for the restore point in time, then the internal representation
> may need to change from a long.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-19498) Error reading data from credential file
[ https://issues.apache.org/jira/browse/CASSANDRA-19498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835148#comment-17835148 ] Slava edited comment on CASSANDRA-19498 at 4/9/24 5:43 AM:
---
I made the recommended fixes. Thank you for your comment.

was (Author: JIRAUSER304772):
I have made the recommended fixes. Thank you for your comment.

> Error reading data from credential file
> ---
>
> Key: CASSANDRA-19498
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19498
> Project: Cassandra
> Issue Type: Bug
> Components: Documentation, Tool/cqlsh
> Reporter: Slava
> Priority: Normal
> Fix For: 4.1.x, 5.0.x, 5.x
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> The pylib/cqlshlib/cqlshmain.py code reads data from the credentials file;
> however, it is immediately ignored.
> https://github.com/apache/cassandra/blob/c9625e0102dab66f41d3ef2338c54d499e73a8c5/pylib/cqlshlib/cqlshmain.py#L2070
> {code:java}
>     if not options.username:
>         credentials = configparser.ConfigParser()
>         if options.credentials is not None:
>             credentials.read(options.credentials)
>         # use the username from credentials file but fallback to cqlshrc
>         # if username is absent from the command line parameters
>         options.username = username_from_cqlshrc
>
>     if not options.password:
>         rawcredentials = configparser.RawConfigParser()
>         if options.credentials is not None:
>             rawcredentials.read(options.credentials)
>         # handling password in the same way as username, priority cli > credentials > cqlshrc
>         options.password = option_with_default(rawcredentials.get, 'plain_text_auth', 'password', password_from_cqlshrc)
>         options.password = password_from_cqlshrc{code}
> These corrections have been made in accordance with
> https://issues.apache.org/jira/browse/CASSANDRA-16983 and
> https://issues.apache.org/jira/browse/CASSANDRA-16456.
> The documentation does not indicate that AuthProviders can be used in the
> cqlshrc and credentials files.
> I propose restoring the ability to use the legacy option of specifying the
> user and password in the credentials file in the [plain_text_auth] section.
> The rules for using the credentials file also need to be described in the
> documentation.
> I can make a corresponding pull request.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19498) Error reading data from credential file
[ https://issues.apache.org/jira/browse/CASSANDRA-19498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835148#comment-17835148 ] Slava commented on CASSANDRA-19498:
---
I have made the recommended fixes. Thank you for your comment.

> Error reading data from credential file
> ---
>
> Key: CASSANDRA-19498
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19498
> Project: Cassandra
> Issue Type: Bug
> Components: Documentation, Tool/cqlsh
> Reporter: Slava
> Priority: Normal
> Fix For: 4.1.x, 5.0.x, 5.x
>
> Time Spent: 10m
> Remaining Estimate: 0h
>
> The pylib/cqlshlib/cqlshmain.py code reads data from the credentials file;
> however, it is immediately ignored.
> https://github.com/apache/cassandra/blob/c9625e0102dab66f41d3ef2338c54d499e73a8c5/pylib/cqlshlib/cqlshmain.py#L2070
> {code:java}
>     if not options.username:
>         credentials = configparser.ConfigParser()
>         if options.credentials is not None:
>             credentials.read(options.credentials)
>         # use the username from credentials file but fallback to cqlshrc
>         # if username is absent from the command line parameters
>         options.username = username_from_cqlshrc
>
>     if not options.password:
>         rawcredentials = configparser.RawConfigParser()
>         if options.credentials is not None:
>             rawcredentials.read(options.credentials)
>         # handling password in the same way as username, priority cli > credentials > cqlshrc
>         options.password = option_with_default(rawcredentials.get, 'plain_text_auth', 'password', password_from_cqlshrc)
>         options.password = password_from_cqlshrc{code}
> These corrections have been made in accordance with
> https://issues.apache.org/jira/browse/CASSANDRA-16983 and
> https://issues.apache.org/jira/browse/CASSANDRA-16456.
> The documentation does not indicate that AuthProviders can be used in the
> cqlshrc and credentials files.
> I propose restoring the ability to use the legacy option of specifying the
> user and password in the credentials file in the [plain_text_auth] section.
> The rules for using the credentials file also need to be described in the
> documentation.
> I can make a corresponding pull request.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19335) Default nodetool tablestats to Human-Readable Output
[ https://issues.apache.org/jira/browse/CASSANDRA-19335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835111#comment-17835111 ] Leo Toff commented on CASSANDRA-19335: -- [~brandon.williams] I'm having a rather difficult time replicating the CI issue locally (see [circleci|https://app.circleci.com/pipelines/github/driftx/cassandra/1475/workflows/1dcedb48-abf2-4b45-ab0f-787830bfb21b/jobs/72781/tests]). How do you guys run the CI pipeline locally against your own C* code? > Default nodetool tablestats to Human-Readable Output > > > Key: CASSANDRA-19335 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19335 > Project: Cassandra > Issue Type: Improvement > Components: Tool/nodetool >Reporter: Leo Toff >Assignee: Leo Toff >Priority: Low > Fix For: 5.x > > Time Spent: 50m > Remaining Estimate: 0h > > *Current Behavior* > The current implementation of nodetool tablestats in Apache Cassandra outputs > statistics in a format that is not immediately human-readable. This output > primarily includes raw byte counts, which require additional calculation or > conversion to be easily understood by users. This can be inefficient and > time-consuming, especially for users who frequently monitor these statistics > for performance tuning or maintenance purposes. > *Proposed Change* > We propose that nodetool tablestats should, by default, provide its output in > a human-readable format. This change would involve converting byte counts > into more understandable units (KiB, MiB, GiB). The tool could still retain > the option to display raw data for those who need it, perhaps through a flag > such as --no-human-readable or --raw. > *Considerations* > The change should maintain backward compatibility, ensuring that scripts or > tools relying on the current output format can continue to function correctly. > We should provide adequate documentation and examples of both the new default > output and how to access the raw data format, if needed. > *Alignment* > Discussion in the dev mailing list: > [https://lists.apache.org/thread/mlp715kxho5b6f1ql9omlzmmnh4qfby9] > *Related work* > Previous work in the series: > # https://issues.apache.org/jira/browse/CASSANDRA-19015 > # https://issues.apache.org/jira/browse/CASSANDRA-19104 -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
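As a rough illustration of the conversion proposed in CASSANDRA-19335 (this is a hypothetical standalone helper, not the formatting code nodetool actually uses), byte counts could be rendered in binary units like so:

{code:java}
public class HumanReadableBytesExample
{
    // Binary units, matching the KiB/MiB/GiB convention proposed in the ticket
    private static final String[] UNITS = { "B", "KiB", "MiB", "GiB", "TiB", "PiB" };

    public static String format(long bytes)
    {
        double value = bytes;
        int unit = 0;
        while (value >= 1024 && unit < UNITS.length - 1)
        {
            value /= 1024;
            unit++;
        }
        return String.format("%.2f %s", value, UNITS[unit]);
    }

    public static void main(String[] args)
    {
        System.out.println(format(123L));        // 123.00 B
        System.out.println(format(48_123_456L)); // 45.89 MiB
    }
}
{code}

A {{--no-human-readable}} or {{--raw}} flag, as suggested in the ticket, would simply bypass this formatting and print the raw byte count.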
[jira] [Updated] (CASSANDRA-19545) Null pointer when running Upgrade Tests
[ https://issues.apache.org/jira/browse/CASSANDRA-19545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ConfX updated CASSANDRA-19545:
--
Description:
h2. What happened
The `UpgradeTestBase.java` may throw a null pointer exception when creating the version upgrade pairs in the `upgradesTo()` method. The problem happens in the for loop shown below. `upgradesTo()` calls the `versions.getLatest(Semver version)` method to create the `Version` instance.

{code:java}
for (Semver start : vertices.subSet(lowerBound, true, to, false))
{
    // only include pairs that are allowed, and start or end on CURRENT
    if (SUPPORTED_UPGRADE_PATHS.hasEdge(start, to) && edgeTouchesTarget(start, to, CURRENT))
        upgrade.add(new TestVersions(versions.getLatest(start), Collections.singletonList(versions.getLatest(to))));
}
{code}

However, in `Versions.java` the `getLatest()` function never checks whether `first(version)` is in the `versions` map or not. When the version is not there, a null pointer exception will be thrown and crash all the upgrade tests.

{code:java}
public Version getLatest(Semver version)
{
    return versions.get(first(version)) // <--- Here might cause an NPE
                   .stream()
                   .findFirst()
                   .orElseThrow(() -> new RuntimeException("No " + version + " versions found"));
}
{code}

h2. How to reproduce
To reproduce this bug, I'm running Cassandra at commit SHA `310d790ce4734727f943225eb951ab0d889c0a5b` and the dtest API `dtest-api-0.0.16.jar`. The versions I put under the `build/` directory are: dtest-4.0.9.jar, dtest-4.0.13.jar, dtest-4.1.4.jar, and dtest-5.1.jar. The command I'm running is:

{code:java}
$ ant test-jvm-dtest-some -Duse.jdk11=true -Dtest.name=org.apache.cassandra.distributed.upgrade.UpgradeTest
{code}

The error message I got was:

{code:java}
[junit-timeout] INFO [main] 2024-04-08 17:34:23,936 Versions.java:136 - Looking for dtest jars in /Users/xxx/Documents/xxx/cassandra/build
[junit-timeout] Found 4.0.13, 4.0.9
[junit-timeout] Found 4.1.4
[junit-timeout] Found 5.1
[junit-timeout] ---
[junit-timeout] Testcase: simpleUpgradeWithNetworkAndGossipTest(org.apache.cassandra.distributed.upgrade.UpgradeTest)-_jdk11: Caused an ERROR
[junit-timeout] null
[junit-timeout] java.lang.NullPointerException
[junit-timeout]     at org.apache.cassandra.distributed.shared.Versions.getLatest(Versions.java:127)
[junit-timeout]     at org.apache.cassandra.distributed.upgrade.UpgradeTestBase$TestCase.upgradesTo(UpgradeTestBase.java:218)
[junit-timeout]     at org.apache.cassandra.distributed.upgrade.UpgradeTestBase$TestCase.upgradesToCurrentFrom(UpgradeTestBase.java:203)
[junit-timeout]     at org.apache.cassandra.distributed.upgrade.UpgradeTest.simpleUpgradeWithNetworkAndGossipTest(UpgradeTest.java:37)
[junit-timeout]     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit-timeout]     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[junit-timeout]     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit-timeout]
[junit-timeout]
[junit-timeout] Test org.apache.cassandra.distributed.upgrade.UpgradeTest FAILED
{code}

With some debugging, the version causing the null pointer is `5.0-alpha1`, but this version is not present in the `build/` directory and should not be tested if I understand correctly.

h2. How to fix
There are two ways to fix this problem. One is to add a null check in `UpgradeTestBase#upgradesTo()`, and the other is to add the null check in `Versions#getLatest()`.
I would love to provide a PR to fix this issue if you can tell me which fix looks better to you. was: h2. What happened The `UpgradeTestBase.java` may throw null pointer exception when creating the version upgrade pairs in `upgradesTo()` method. The problem happens in the for loop shown below. The `upgradesTo()` calls `versions.getLatest(Semver version)` method to create the `Version` class. {code:java} for (Semver start : vertices.subSet(lowerBound, true, to, false)) { // only include pairs that are allowed, and start or end on CURRENT if (SUPPORTED_UPGRADE_PATHS.hasEdge(start, to) && edgeTouchesTarget(start, to, CURRENT)) upgrade.add(new TestVersions(versions.getLatest(start), Collections.singletonList(versions.getLatest(to; } {code} However, in the `Version.java`, `getLatest()` function never checks whether the `first(version)` is in the `versions` map or not. When the version is not there, a null pointer exception will be thrown and crash all the upgrade tests. {code:java} public Version getLatest(Semver version) { return versions.get(first(version)) .stream() .findFirst()
[jira] [Updated] (CASSANDRA-19545) Null pointer when running Upgrade Tests
[ https://issues.apache.org/jira/browse/CASSANDRA-19545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ConfX updated CASSANDRA-19545: -- Description: h2. What happened The `UpgradeTestBase.java` may throw null pointer exception when creating the version upgrade pairs in `upgradesTo()` method. The problem happens in the for loop shown below. The `upgradesTo()` calls `versions.getLatest(Semver version)` method to create the `Version` class. {code:java} for (Semver start : vertices.subSet(lowerBound, true, to, false)) { // only include pairs that are allowed, and start or end on CURRENT if (SUPPORTED_UPGRADE_PATHS.hasEdge(start, to) && edgeTouchesTarget(start, to, CURRENT)) upgrade.add(new TestVersions(versions.getLatest(start), Collections.singletonList(versions.getLatest(to; } {code} However, in the `Version.java`, `getLatest()` function never checks whether the `first(version)` is in the `versions` map or not. When the version is not there, a null pointer exception will be thrown and crash all the upgrade tests. {code:java} public Version getLatest(Semver version) { return versions.get(first(version)) .stream() .findFirst() .orElseThrow(() -> new RuntimeException("No " + version + " versions found")); } {code} h2. How to reproduce To reproduce this bug, I'm running Cassandra with commit SHA `310d790ce4734727f943225eb951ab0d889c0a5b`; and dtest API with `dtest-api-0.0.16.jar`. The versions I put under `build/` directory are: dtest-4.0.9.jar, dtest-4.0.13.jar, dtest-4.1.4.jar, and dtest-5.1.jar. The command I'm running is: {code:java} $ ant test-jvm-dtest-some -Duse.jdk11=true -Dtest.name=org.apache.cassandra.distributed.upgrade.UpgradeTest {code} The error message I got was: {code:java} [junit-timeout] INFO [main] 2024-04-08 17:34:23,936 Versions.java:136 - Looking for dtest jars in /Users/xxx/Documents/xxx/cassandra/build [junit-timeout] Found 4.0.13, 4.0.9 [junit-timeout] Found 4.1.4 [junit-timeout] Found 5.1 [junit-timeout] - --- [junit-timeout] Testcase: simpleUpgradeWithNetworkAndGossipTest(org.apache.cassandra.distributed.upgrade.UpgradeTest)-_jdk11: Caused an ERROR [junit-timeout] null [junit-timeout] java.lang.NullPointerException [junit-timeout] at org.apache.cassandra.distributed.shared.Versions.getLatest(Versions.java:127) [junit-timeout] at org.apache.cassandra.distributed.upgrade.UpgradeTestBase$TestCase.upgradesTo(UpgradeTestBase.java:218) [junit-timeout] at org.apache.cassandra.distributed.upgrade.UpgradeTestBase$TestCase.upgradesToCurrentFrom(UpgradeTestBase.java:203) [junit-timeout] at org.apache.cassandra.distributed.upgrade.UpgradeTest.simpleUpgradeWithNetworkAndGossipTest(UpgradeTest.java:37) [junit-timeout] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit-timeout] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [junit-timeout] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit-timeout] [junit-timeout] [junit-timeout] Test org.apache.cassandra.distributed.upgrade.UpgradeTest FAILED {code} With some debugging, the version causing the null pointer is `5.0-alpha1`, but this version is not shown in `build/` directory and should not be tested if I understand correctly. h2. How to fix. There are two ways to fix this problem. One is to add a null pointer checker in `UpgradeTestBase#upgradesTo()`, and the other approach is to add the null pointer in `Versions#getLatest()`. 
I would love to provide a PR to fix this issue if you can tell me which fix looks better to you. was: h2. What happened The `UpgradeTestBase.java` may throw null pointer exception when creating the version upgrade pairs in `upgradesTo()` method. The problem happens in the for loop shown below. The `upgradesTo()` calls `versions.getLatest(Semver version)` method to create the `Version` class. {code:java} for (Semver start : vertices.subSet(lowerBound, true, to, false)) { // only include pairs that are allowed, and start or end on CURRENT if (SUPPORTED_UPGRADE_PATHS.hasEdge(start, to) && edgeTouchesTarget(start, to, CURRENT)) upgrade.add(new TestVersions(versions.getLatest(start), Collections.singletonList(versions.getLatest(to; } {code} However, in the `Version.java`, `getLatest()` function never checks whether the `first(version)` is in the `versions` map or not. When the version is not there, a null pointer exception will be thrown and crash all the upgrade tests. {code:java} public Version getLatest(Semver version) { return versions.get(first(version)) .stream() .findFirst() .orElseThrow(()
[jira] [Updated] (CASSANDRA-19545) Null pointer when running Upgrade Tests
[ https://issues.apache.org/jira/browse/CASSANDRA-19545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ConfX updated CASSANDRA-19545: -- Description: h2. What happened The `UpgradeTestBase.java` may throw null pointer exception when creating the version upgrade pairs in `upgradesTo()` method. The problem happens in the for loop shown below. The `upgradesTo()` calls `versions.getLatest(Semver version)` method to create the `Version` class. {code:java} for (Semver start : vertices.subSet(lowerBound, true, to, false)) { // only include pairs that are allowed, and start or end on CURRENT if (SUPPORTED_UPGRADE_PATHS.hasEdge(start, to) && edgeTouchesTarget(start, to, CURRENT)) upgrade.add(new TestVersions(versions.getLatest(start), Collections.singletonList(versions.getLatest(to; } {code} However, in the `Version.java`, `getLatest()` function never checks whether the `first(version)` is in the `versions` map or not. When the version is not there, a null pointer exception will be thrown and crash all the upgrade tests. {code:java} public Version getLatest(Semver version) { return versions.get(first(version)) .stream() .findFirst() .orElseThrow(() -> new RuntimeException("No " + version + " versions found")); } {code} h2. How to reproduce To reproduce this bug, I'm running Cassandra with commit SHA `310d790ce4734727f943225eb951ab0d889c0a5b`; and dtest API with `dtest-api-0.0.16.jar`. The versions I put under `build/` directory are: dtest-4.0.9.jar, dtest-4.0.13.jar, dtest-4.1.4.jar, and dtest-5.1.jar. The command I'm running is: {code:java} $ ant test-jvm-dtest-some -Duse.jdk11=true -Dtest.name=org.apache.cassandra.distributed.upgrade.UpgradeTest {code} The error message I got was: {code:java} [junit-timeout] INFO [main] 2024-04-08 17:34:23,936 Versions.java:136 - Looking for dtest jars in /Users/xxx/Documents/xxx/cassandra/build [junit-timeout] Found 4.0.13, 4.0.9 [junit-timeout] Found 4.1.4 [junit-timeout] Found 5.1 [junit-timeout] - --- [junit-timeout] Testcase: simpleUpgradeWithNetworkAndGossipTest(org.apache.cassandra.distributed.upgrade.UpgradeTest)-_jdk11: Caused an ERROR [junit-timeout] null [junit-timeout] java.lang.NullPointerException [junit-timeout] at org.apache.cassandra.distributed.shared.Versions.getLatest(Versions.java:127) [junit-timeout] at org.apache.cassandra.distributed.upgrade.UpgradeTestBase$TestCase.upgradesTo(UpgradeTestBase.java:218) [junit-timeout] at org.apache.cassandra.distributed.upgrade.UpgradeTestBase$TestCase.upgradesToCurrentFrom(UpgradeTestBase.java:203) [junit-timeout] at org.apache.cassandra.distributed.upgrade.UpgradeTest.simpleUpgradeWithNetworkAndGossipTest(UpgradeTest.java:37) [junit-timeout] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit-timeout] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [junit-timeout] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit-timeout] [junit-timeout] [junit-timeout] Test org.apache.cassandra.distributed.upgrade.UpgradeTest FAILED {code} With some debugging, the version causing the null pointer is `5.0-alpha1`, but this version is not shown in `build/` directory and should not be tested if I understand correctly. h2. How to fix. There are two ways to fix this problem. One is to add a null pointer checker in `UpgradeTestBase#upgradesTo()`, and the other approach is to add the null pointer in `Versions#getLatest()`. 
I would love to provide a PR to fix this issue if you can tell me which fix looks better to you. was: h2. What happened The `UpgradeTestBase.java` may through null pointer exception when creating the version upgrade pairs in `upgradesTo()` method. The problem happens in the for loop shown below. The `upgradesTo()` calls `versions.getLatest(Semver version)` method to create the `Version` class. {code:java} for (Semver start : vertices.subSet(lowerBound, true, to, false)) { // only include pairs that are allowed, and start or end on CURRENT if (SUPPORTED_UPGRADE_PATHS.hasEdge(start, to) && edgeTouchesTarget(start, to, CURRENT)) upgrade.add(new TestVersions(versions.getLatest(start), Collections.singletonList(versions.getLatest(to; } {code} However, in the `Version.java`, `getLatest()` function never checks whether the `first(version)` is in the `versions` map or not. When the version is not there, a null pointer exception will be thrown and crashes all the upgrade tests. {code:java} public Version getLatest(Semver version) { return versions.get(first(version)) .stream() .findFirst()
[jira] [Updated] (CASSANDRA-19545) Null pointer when running Upgrade Tests
[ https://issues.apache.org/jira/browse/CASSANDRA-19545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ConfX updated CASSANDRA-19545: -- Description: h2. What happened The `UpgradeTestBase.java` may through null pointer exception when creating the version upgrade pairs in `upgradesTo()` method. The problem happens in the for loop shown below. The `upgradesTo()` calls `versions.getLatest(Semver version)` method to create the `Version` class. {code:java} for (Semver start : vertices.subSet(lowerBound, true, to, false)) { // only include pairs that are allowed, and start or end on CURRENT if (SUPPORTED_UPGRADE_PATHS.hasEdge(start, to) && edgeTouchesTarget(start, to, CURRENT)) upgrade.add(new TestVersions(versions.getLatest(start), Collections.singletonList(versions.getLatest(to; } {code} However, in the `Version.java`, `getLatest()` function never checks whether the `first(version)` is in the `versions` map or not. When the version is not there, a null pointer exception will be thrown and crashes all the upgrade tests. {code:java} public Version getLatest(Semver version) { return versions.get(first(version)) .stream() .findFirst() .orElseThrow(() -> new RuntimeException("No " + version + " versions found")); } {code} h2. How to reproduce To reproduce this bug, I'm running Cassandra with commit SHA `310d790ce4734727f943225eb951ab0d889c0a5b`; and dtest API with `dtest-api-0.0.16.jar`. The versions I put under `build/` directory are: dtest-4.0.9.jar, dtest-4.0.13.jar, dtest-4.1.4.jar, and dtest-5.1.jar. The command I'm running is: {code:java} $ ant test-jvm-dtest-some -Duse.jdk11=true -Dtest.name=org.apache.cassandra.distributed.upgrade.UpgradeTest {code} The error message I got is: {code:java} [junit-timeout] INFO [main] 2024-04-08 17:34:23,936 Versions.java:136 - Looking for dtest jars in /Users/xxx/Documents/xxx/cassandra/build [junit-timeout] Found 4.0.13, 4.0.9 [junit-timeout] Found 4.1.4 [junit-timeout] Found 5.1 [junit-timeout] - --- [junit-timeout] Testcase: simpleUpgradeWithNetworkAndGossipTest(org.apache.cassandra.distributed.upgrade.UpgradeTest)-_jdk11: Caused an ERROR [junit-timeout] null [junit-timeout] java.lang.NullPointerException [junit-timeout] at org.apache.cassandra.distributed.shared.Versions.getLatest(Versions.java:127) [junit-timeout] at org.apache.cassandra.distributed.upgrade.UpgradeTestBase$TestCase.upgradesTo(UpgradeTestBase.java:218) [junit-timeout] at org.apache.cassandra.distributed.upgrade.UpgradeTestBase$TestCase.upgradesToCurrentFrom(UpgradeTestBase.java:203) [junit-timeout] at org.apache.cassandra.distributed.upgrade.UpgradeTest.simpleUpgradeWithNetworkAndGossipTest(UpgradeTest.java:37) [junit-timeout] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit-timeout] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [junit-timeout] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit-timeout] [junit-timeout] [junit-timeout] Test org.apache.cassandra.distributed.upgrade.UpgradeTest FAILED {code} With some debugging, the version that causing the null pointer is `5.0-alpha1`, but this version is not shown in `build/` directory and should not be tested if I understand correctly. h2. How to fix. There are two ways to fix this problem. One is to add a null pointer checker in `UpgradeTestBase#upgradesTo()`, the other approach is to add the null pointer in `Versions#getLatest()`. 
I would love to provide a PR to fix this issue if you can tell me which fix looks better to you. was: ## What happened The `UpgradeTestBase.java` may through null pointer exception when creating the version upgrade pairs in `upgradesTo()` method. The problem happens in the for loop shown below. The `upgradesTo()` calls `versions.getLatest(Semver version)` method to create the `Version` class. {code:java} for (Semver start : vertices.subSet(lowerBound, true, to, false)) { // only include pairs that are allowed, and start or end on CURRENT if (SUPPORTED_UPGRADE_PATHS.hasEdge(start, to) && edgeTouchesTarget(start, to, CURRENT)) upgrade.add(new TestVersions(versions.getLatest(start), Collections.singletonList(versions.getLatest(to; } {code} However, in the `Version.java`, `getLatest()` function never checks whether the `first(version)` is in the `versions` map or not. When the version is not there, a null pointer exception will be thrown and crashes all the upgrade tests. {code:java} public Version getLatest(Semver version) { return versions.get(first(version)) .stream() .findFirst()
[jira] [Created] (CASSANDRA-19545) Null pointer when running Upgrade Tests
ConfX created CASSANDRA-19545: - Summary: Null pointer when running Upgrade Tests Key: CASSANDRA-19545 URL: https://issues.apache.org/jira/browse/CASSANDRA-19545 Project: Cassandra Issue Type: Bug Reporter: ConfX ## What happened The `UpgradeTestBase.java` may through null pointer exception when creating the version upgrade pairs in `upgradesTo()` method. The problem happens in the for loop shown below. The `upgradesTo()` calls `versions.getLatest(Semver version)` method to create the `Version` class. {code:java} for (Semver start : vertices.subSet(lowerBound, true, to, false)) { // only include pairs that are allowed, and start or end on CURRENT if (SUPPORTED_UPGRADE_PATHS.hasEdge(start, to) && edgeTouchesTarget(start, to, CURRENT)) upgrade.add(new TestVersions(versions.getLatest(start), Collections.singletonList(versions.getLatest(to; } {code} However, in the `Version.java`, `getLatest()` function never checks whether the `first(version)` is in the `versions` map or not. When the version is not there, a null pointer exception will be thrown and crashes all the upgrade tests. {code:java} public Version getLatest(Semver version) { return versions.get(first(version)) .stream() .findFirst() .orElseThrow(() -> new RuntimeException("No " + version + " versions found")); } {code} ## How to reproduce To reproduce this bug, I'm running Cassandra with commit SHA `310d790ce4734727f943225eb951ab0d889c0a5b`; and dtest API with `dtest-api-0.0.16.jar`. The versions I put under `build/` directory are: dtest-4.0.9.jar, dtest-4.0.13.jar, dtest-4.1.4.jar, and dtest-5.1.jar. The command I'm running is: {code:java} $ ant test-jvm-dtest-some -Duse.jdk11=true -Dtest.name=org.apache.cassandra.distributed.upgrade.UpgradeTest {code} The error message I got is: {code:java} [junit-timeout] INFO [main] 2024-04-08 17:34:23,936 Versions.java:136 - Looking for dtest jars in /Users/xxx/Documents/xxx/cassandra/build [junit-timeout] Found 4.0.13, 4.0.9 [junit-timeout] Found 4.1.4 [junit-timeout] Found 5.1 [junit-timeout] - --- [junit-timeout] Testcase: simpleUpgradeWithNetworkAndGossipTest(org.apache.cassandra.distributed.upgrade.UpgradeTest)-_jdk11: Caused an ERROR [junit-timeout] null [junit-timeout] java.lang.NullPointerException [junit-timeout] at org.apache.cassandra.distributed.shared.Versions.getLatest(Versions.java:127) [junit-timeout] at org.apache.cassandra.distributed.upgrade.UpgradeTestBase$TestCase.upgradesTo(UpgradeTestBase.java:218) [junit-timeout] at org.apache.cassandra.distributed.upgrade.UpgradeTestBase$TestCase.upgradesToCurrentFrom(UpgradeTestBase.java:203) [junit-timeout] at org.apache.cassandra.distributed.upgrade.UpgradeTest.simpleUpgradeWithNetworkAndGossipTest(UpgradeTest.java:37) [junit-timeout] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit-timeout] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [junit-timeout] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit-timeout] [junit-timeout] [junit-timeout] Test org.apache.cassandra.distributed.upgrade.UpgradeTest FAILED {code} With some debugging, the version that causing the null pointer is `5.0-alpha1`, but this version is not shown in `build/` directory and should not be tested if I understand correctly. ## How to fix. There are two ways to fix this problem. 
One is to add a null check in `UpgradeTestBase#upgradesTo()`; the other is to add the null check in `Versions#getLatest()`. I would love to provide a PR to fix this issue if you can tell me which fix looks better to you. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
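To make the second proposed fix concrete, here is a minimal sketch; the class name, field, and types are simplified stand-ins, since the real {{org.apache.cassandra.distributed.shared.Versions}} class is shaped differently:

{code:java}
import java.util.Collection;
import java.util.Map;

public class GetLatestNullCheckSketch
{
    // Simplified stand-in for the real Versions.Version type
    public static class Version { }

    private final Map<String, Collection<Version>> versions;

    public GetLatestNullCheckSketch(Map<String, Collection<Version>> versions)
    {
        this.versions = versions;
    }

    // Fail with a descriptive error instead of an NPE when no dtest jar
    // matching the requested version family is present under build/.
    public Version getLatest(String versionFamily)
    {
        Collection<Version> candidates = versions.get(versionFamily);
        if (candidates == null || candidates.isEmpty())
            throw new RuntimeException("No " + versionFamily + " versions found; is the dtest jar present under build/?");
        return candidates.iterator().next();
    }
}
{code}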
[jira] [Updated] (CASSANDRA-19544) Vector search should be able to restrict on clustering keys when filtering isn't required
[ https://issues.apache.org/jira/browse/CASSANDRA-19544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Semb Wever updated CASSANDRA-19544: --- Bug Category: Parent values: Correctness(12982)Level 1 values: API / Semantic Definition(13162) Complexity: Low Hanging Fruit Discovered By: Adhoc Test Fix Version/s: 5.0.x 5.x Severity: Low Status: Open (was: Triage Needed) Patch: https://github.com/apache/cassandra/compare/cassandra-5.0...thelastpickle:cassandra:mck/19544/5.0 > Vector search should be able to restrict on clustering keys when filtering > isn't required > - > > Key: CASSANDRA-19544 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19544 > Project: Cassandra > Issue Type: Bug > Components: Feature/Vector Search >Reporter: Michael Semb Wever >Assignee: Michael Semb Wever >Priority: Normal > Fix For: 5.0.x, 5.x > > > With a table that has {{primary key((a,b),c,d)}} > a restriction on only the partition works, > e.g. {{where a=. and b=. order by . ann of .}} > but a restriction that also includes a forward sequence of clustering keys > (i.e. a clustering key restriction that wouldn't require filtering) does not > currently work. > e.g. {{where a=. and b=. and c=. order by . ann of .}} > It appears that StatementRestriction:321 is a little too greedy. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19544) Vector search should be able to restrict on clustering keys when filtering isn't required
[ https://issues.apache.org/jira/browse/CASSANDRA-19544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Semb Wever updated CASSANDRA-19544: --- Description: With a table that has {{primary key((a,b),c,d)}} a restriction on only the partition works, e.g. {{where a=. and b=. order by . ann of .}} but a restriction that also includes a forward sequence of clustering keys (i.e. a clustering key restriction that wouldn't require filtering) does not currently work. e.g. {{where a=. and b=. and c=. order by . ann of .}} It appears that StatementRestriction:321 is a little too greedy. was: With a table that has {{primary key((a,b),c,d)}} a restriction on only the partition works, e.g. {{where a=. and b=. order by . ann of .}} but a restriction that also includes a forward sequence of clustering keys (i.e. a clustering key restriction that wouldn't require filtering) does not currently work. e.g. {{where a=. and b=. and c=. order by . ann of .}} It appears that StatementRestriction:321 is a little too greedy. > Vector search should be able to restrict on clustering keys when filtering > isn't required > - > > Key: CASSANDRA-19544 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19544 > Project: Cassandra > Issue Type: Bug > Components: Feature/Vector Search >Reporter: Michael Semb Wever >Assignee: Michael Semb Wever >Priority: Normal > > With a table that has {{primary key((a,b),c,d)}} > a restriction on only the partition works, > e.g. {{where a=. and b=. order by . ann of .}} > but a restriction that also includes a forward sequence of clustering keys > (i.e. a clustering key restriction that wouldn't require filtering) does not > currently work. > e.g. {{where a=. and b=. and c=. order by . ann of .}} > It appears that StatementRestriction:321 is a little too greedy. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19544) Vector search should be able to restrict on clustering keys when filtering isn't required
[ https://issues.apache.org/jira/browse/CASSANDRA-19544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Semb Wever updated CASSANDRA-19544: --- Summary: Vector search should be able to restrict on clustering keys when filtering isn't required (was: Vector search can restrict on clustering keys when filtering isn't required) > Vector search should be able to restrict on clustering keys when filtering > isn't required > - > > Key: CASSANDRA-19544 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19544 > Project: Cassandra > Issue Type: Bug > Components: Feature/Vector Search >Reporter: Michael Semb Wever >Assignee: Michael Semb Wever >Priority: Normal > > With a table that has {{primary key((a,b),c,d)}} > a restriction on only the partition works, > e.g. {{where a=. and b=. order by . ann of .}} > but a restriction that also includes a forward sequence of clustering keys > (i.e. a clustering key restriction that wouldn't require filtering) does not > currently work. > e.g. {{where a=. and b=. and c=. order by . ann of .}} > It appears that StatementRestriction:321 is a little too greedy. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-19544) Vector search can restrict on clustering keys when filtering isn't required
Michael Semb Wever created CASSANDRA-19544: -- Summary: Vector search can restrict on clustering keys when filtering isn't required Key: CASSANDRA-19544 URL: https://issues.apache.org/jira/browse/CASSANDRA-19544 Project: Cassandra Issue Type: Bug Components: Feature/Vector Search Reporter: Michael Semb Wever Assignee: Michael Semb Wever With a table that has {{primary key((a,b),c,d)}} a restriction on only the partition works, e.g. {{where a=. and b=. order by . ann of .}} but a restriction that also includes a forward sequence of clustering keys (i.e. a clustering key restriction that wouldn't require filtering) does not currently work. e.g. {{where a=. and b=. and c=. order by . ann of .}} It appears that StatementRestriction:321 is a little too greedy. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
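To make the reported restriction concrete, the following driver-based sketch reproduces the two queries described above against Cassandra 5.0; the keyspace, index name, contact point, datacenter name, and vector values are made up for illustration:

{code:java}
import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

public class AnnClusteringRestrictionExample
{
    public static void main(String[] args)
    {
        try (CqlSession session = CqlSession.builder()
                                            .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
                                            .withLocalDatacenter("datacenter1")
                                            .build())
        {
            session.execute("CREATE KEYSPACE IF NOT EXISTS ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
            session.execute("CREATE TABLE IF NOT EXISTS ks.t (a int, b int, c int, d int, v vector<float, 2>, PRIMARY KEY ((a, b), c, d))");
            session.execute("CREATE CUSTOM INDEX IF NOT EXISTS ann_idx ON ks.t (v) USING 'StorageAttachedIndex'");

            // Partition-only restriction: accepted today.
            session.execute("SELECT * FROM ks.t WHERE a = 1 AND b = 1 ORDER BY v ANN OF [0.1, 0.2] LIMIT 10");

            // Adding a leading clustering column (no filtering needed): the case this ticket reports as rejected.
            session.execute("SELECT * FROM ks.t WHERE a = 1 AND b = 1 AND c = 1 ORDER BY v ANN OF [0.1, 0.2] LIMIT 10");
        }
    }
}
{code}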
Re: [PR] CASSANDRA-19457: Object reference in Micrometer metrics prevent GC from reclaiming Session instances [cassandra-java-driver]
absurdfarce commented on PR #1916: URL: https://github.com/apache/cassandra-java-driver/pull/1916#issuecomment-2043691203 Added this note to CASSANDRA-19457, mentioning it here as well. I tested a 3x2 matrix of Micrometer, MicroProfile and the default (Dropwizard) case against stock 4.18.0 and 4.18.1-SNAPSHOT containing this fix. I only observed the leak with Micrometer and MicroProfile against 4.18.0... meaning this PR clearly addressed those cases. I didn't see the leak when using Dropwizard with or without this fix which _sounds like_ what Jane saw as well. But that makes me a bit nervous since the test was failing with Dropwizard before. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19457) Object reference in Micrometer metrics prevent GC from reclaiming Session instances
[ https://issues.apache.org/jira/browse/CASSANDRA-19457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835064#comment-17835064 ] Bret McGuire commented on CASSANDRA-19457:
--
I just tested this (using the Repro case above) with the following configurations: Micrometer, MicroProfile and default metrics X 4.18.0 and 4.18.1-SNAPSHOT containing the fix from [~janesiyaohe]. I observed the leak in both the Micrometer and MicroProfile cases using 4.18.0. In all other cases the leak wasn't observed. This indicates that Jane's fix appears to address Micrometer and MicroProfile and that the default case did not have an issue.

> Object reference in Micrometer metrics prevent GC from reclaiming Session instances
> ---
>
> Key: CASSANDRA-19457
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19457
> Project: Cassandra
> Issue Type: Bug
> Components: Client/java-driver
> Reporter: Jane He
> Assignee: Jane He
> Priority: Normal
> Attachments: Repro-1.java, Repro.java, Screenshot 2024-03-06 at 2.07.01 PM.png, Screenshot 2024-03-06 at 2.07.13 PM.png, build-1.gradle, build.gradle
>
> Time Spent: 3h 10m
> Remaining Estimate: 0h
>
> There is a memory leak of previously closed {{DefaultSession}}s. It can be
> reproduced by this:
> {code:java}
>     public static void main(String[] args) throws InterruptedException {
>         Semaphore sema = new Semaphore(20);
>         for (int i = 0; i < 1; i++) {
>             new Thread(() -> {
>                 try {
>                     sema.acquire();
>                     try (CqlSession session = CqlSession.builder()
>                             .withCloudSecureConnectBundle(Paths.get("bundle.zip"))
>                             .withAuthCredentials("token", "")
>                             .build()) {
>                         // Do stuff
>                     }
>                 } catch (Exception e) {
>                     System.out.println(e);
>                 } finally {
>                     sema.release();
>                 }
>             }).start();
>         }
>     }{code}
> On initial investigation, it seems like {{MicrometerMetricUpdater.initializeGauge()}}
> uses {{Gauge.builder()}} with a {{Supplier}}. This creates a strong reference
> that is causing the issue.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
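For background on the strong-reference problem described above, here is a small self-contained sketch contrasting the two Micrometer gauge registration styles; the {{SessionLike}} class and metric names are invented, and this is not the driver's actual {{MicrometerMetricUpdater}} code:

{code:java}
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class GaugeReferenceSketch
{
    static class SessionLike
    {
        int connectedNodes() { return 3; }
    }

    public static void main(String[] args)
    {
        MeterRegistry registry = new SimpleMeterRegistry();
        SessionLike session = new SessionLike();

        // Leak-prone pattern described in the ticket: the lambda captures `session`,
        // so the registry keeps a strong reference to it for the gauge's lifetime.
        Gauge.builder("connected-nodes-strong", () -> session.connectedNodes())
             .register(registry);

        // State-object pattern: Micrometer holds the gauged object only weakly,
        // so a closed session can still be reclaimed by the garbage collector.
        Gauge.builder("connected-nodes-weak", session, SessionLike::connectedNodes)
             .register(registry);
    }
}
{code}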
[jira] [Updated] (CASSANDRA-19526) Optionally enable TLS in the server and client for Analytics testing
[ https://issues.apache.org/jira/browse/CASSANDRA-19526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francisco Guerrero updated CASSANDRA-19526: --- Fix Version/s: NA Source Control Link: https://github.com/apache/cassandra-analytics/commit/690101840d4d8f9c656bb0ca114f6619af80e1cf Resolution: Fixed Status: Resolved (was: Ready to Commit) > Optionally enable TLS in the server and client for Analytics testing > > > Key: CASSANDRA-19526 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19526 > Project: Cassandra > Issue Type: Improvement > Components: Analytics Library >Reporter: Doug Rohrer >Assignee: Francisco Guerrero >Priority: Normal > Fix For: NA > > Time Spent: 0.5h > Remaining Estimate: 0h > > All integration tests today run without SSL, which is generally fine because > they run locally. However, it would be helpful to be able to start up the > sidecar with SSL enabled in the integration test framework so that > third-party tests could connect via secure connections for testing purposes. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19526) Optionally enable TLS in the server and client for Analytics testing
[ https://issues.apache.org/jira/browse/CASSANDRA-19526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francisco Guerrero updated CASSANDRA-19526: --- Status: Ready to Commit (was: Review In Progress) > Optionally enable TLS in the server and client for Analytics testing > > > Key: CASSANDRA-19526 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19526 > Project: Cassandra > Issue Type: Improvement > Components: Analytics Library >Reporter: Doug Rohrer >Assignee: Francisco Guerrero >Priority: Normal > Time Spent: 20m > Remaining Estimate: 0h > > All integration tests today run without SSL, which is generally fine because > they run locally. However, it would be helpful to be able to start up the > sidecar with SSL enabled in the integration test framework so that > third-party tests could connect via secure connections for testing purposes. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19480) [Analytics] Report task level job stats from analytics
[ https://issues.apache.org/jira/browse/CASSANDRA-19480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arjun Ashok updated CASSANDRA-19480:
Change Category: Operability
Complexity: Normal
Status: Open (was: Triage Needed)

> [Analytics] Report task level job stats from analytics
> --
>
> Key: CASSANDRA-19480
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19480
> Project: Cassandra
> Issue Type: Task
> Components: Analytics Library
> Reporter: Arjun Ashok
> Assignee: Arjun Ashok
> Priority: Normal
>
> This is an extension of https://issues.apache.org/jira/browse/CASSANDRA-19418
> to instrument Spark task-level metrics such as max task retries and task
> execution time stats (max, median, p90 across tasks within the job) from the
> Analytics library.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
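As a rough sketch of what task-level instrumentation could look like (the class name and aggregation choices are hypothetical, not the Analytics library's actual JobStatsPublisher wiring), a Spark listener can capture per-task durations and attempt counts:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.SparkListenerTaskEnd;

public class TaskStatsListener extends SparkListener
{
    private final List<Long> taskDurationsMillis = Collections.synchronizedList(new ArrayList<>());
    private final AtomicInteger maxTaskAttempt = new AtomicInteger();

    @Override
    public void onTaskEnd(SparkListenerTaskEnd taskEnd)
    {
        // Record how long the task ran and the highest attempt number seen (retries)
        taskDurationsMillis.add(taskEnd.taskInfo().duration());
        maxTaskAttempt.accumulateAndGet(taskEnd.taskInfo().attemptNumber(), Math::max);
    }

    public int maxTaskAttempt()
    {
        return maxTaskAttempt.get();
    }

    public long percentileMillis(double percentile)
    {
        List<Long> sorted = new ArrayList<>(taskDurationsMillis);
        Collections.sort(sorted);
        if (sorted.isEmpty())
            return 0L;
        int index = (int) Math.ceil(percentile / 100.0 * sorted.size()) - 1;
        return sorted.get(Math.max(index, 0));
    }
}
{code}

Such a listener would be registered with {{SparkContext#addSparkListener}} and its aggregates (max, median, p90) published when the job completes.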
[jira] [Assigned] (CASSANDRA-19480) [Analytics] Report task level job stats from analytics
[ https://issues.apache.org/jira/browse/CASSANDRA-19480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arjun Ashok reassigned CASSANDRA-19480:
---
Assignee: Arjun Ashok

> [Analytics] Report task level job stats from analytics
> --
>
> Key: CASSANDRA-19480
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19480
> Project: Cassandra
> Issue Type: Task
> Components: Analytics Library
> Reporter: Arjun Ashok
> Assignee: Arjun Ashok
> Priority: Normal
>
> This is an extension of https://issues.apache.org/jira/browse/CASSANDRA-19418
> to instrument Spark task-level metrics such as max task retries and task
> execution time stats (max, median, p90 across tasks within the job) from the
> Analytics library.

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
(cassandra-analytics) branch trunk updated: CASSANDRA-19526: Optionally enable TLS in the server and client for Analytics testing
This is an automated email from the ASF dual-hosted git repository. frankgh pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/cassandra-analytics.git The following commit(s) were added to refs/heads/trunk by this push: new 6901018 CASSANDRA-19526: Optionally enable TLS in the server and client for Analytics testing 6901018 is described below commit 690101840d4d8f9c656bb0ca114f6619af80e1cf Author: Francisco Guerrero AuthorDate: Mon Apr 8 14:33:50 2024 -0700 CASSANDRA-19526: Optionally enable TLS in the server and client for Analytics testing All integration tests today run without TLS, which is generally fine because they run locally. However, it is helpful to be able to start up the sidecar with TLS enabled in the integration test framework so that third-party tests could connect via secure connections for testing purposes. Co-authored-by: Doug Rohrer Co-authored-by: Francisco Guerrero Patch by Doug Rohrer, Francisco Guerrero; Reviewed by Yifan Cai for CASSANDRA-19526 --- .../spark/common/stats/JobStatsPublisher.java | 2 + .../build.gradle | 6 +- .../distributed/impl/CassandraCluster.java | 1 + .../cassandra/sidecar/testing/MtlsTestHelper.java | 146 .../testing/SharedClusterIntegrationTestBase.java | 54 ++- .../testing/utils/tls/CertificateBuilder.java | 236 + .../testing/utils/tls/CertificateBundle.java | 112 ++ cassandra-analytics-integration-tests/build.gradle | 4 + .../cassandra/analytics/BlockedInstancesTest.java | 29 +- .../cassandra/analytics/DataGenerationUtils.java | 50 ++- .../cassandra/analytics/IntegrationTestJob.java| 379 - .../SharedClusterSparkIntegrationTestBase.java | 40 ++- .../analytics/SparkBulkWriterSimpleTest.java | 118 ++- .../apache/cassandra/analytics/SparkTestUtils.java | 23 +- 14 files changed, 710 insertions(+), 490 deletions(-) diff --git a/cassandra-analytics-core/src/main/java/org/apache/cassandra/spark/common/stats/JobStatsPublisher.java b/cassandra-analytics-core/src/main/java/org/apache/cassandra/spark/common/stats/JobStatsPublisher.java index 28643e8..9027ce4 100644 --- a/cassandra-analytics-core/src/main/java/org/apache/cassandra/spark/common/stats/JobStatsPublisher.java +++ b/cassandra-analytics-core/src/main/java/org/apache/cassandra/spark/common/stats/JobStatsPublisher.java @@ -30,6 +30,8 @@ public interface JobStatsPublisher { /** * Publish the job attributes to be persisted and summarized + * + * @param stats the stats to publish */ void publish(Map stats); } diff --git a/cassandra-analytics-integration-framework/build.gradle b/cassandra-analytics-integration-framework/build.gradle index aeba617..63d66c7 100644 --- a/cassandra-analytics-integration-framework/build.gradle +++ b/cassandra-analytics-integration-framework/build.gradle @@ -75,7 +75,11 @@ dependencies { exclude group: 'junit', module: 'junit' } implementation("io.vertx:vertx-web-client:${project.vertxVersion}") -implementation group: 'com.fasterxml.jackson.core', name: 'jackson-annotations', version: '2.14.3' +implementation(group: 'com.fasterxml.jackson.core', name: 'jackson-annotations', version: '2.14.3') + +// Bouncycastle dependencies for test certificate provisioning +implementation(group: 'org.bouncycastle', name: 'bcprov-jdk18on', version: '1.78') +implementation(group: 'org.bouncycastle', name: 'bcpkix-jdk18on', version: '1.78') testImplementation(platform("org.junit:junit-bom:${project.junitVersion}")) testImplementation('org.junit.jupiter:junit-jupiter') diff --git 
a/cassandra-analytics-integration-framework/src/main/java/org/apache/cassandra/distributed/impl/CassandraCluster.java b/cassandra-analytics-integration-framework/src/main/java/org/apache/cassandra/distributed/impl/CassandraCluster.java index 4d20c62..f5a5abd 100644 --- a/cassandra-analytics-integration-framework/src/main/java/org/apache/cassandra/distributed/impl/CassandraCluster.java +++ b/cassandra-analytics-integration-framework/src/main/java/org/apache/cassandra/distributed/impl/CassandraCluster.java @@ -68,6 +68,7 @@ public class CassandraCluster implements IClusterExtension< // java.lang.IllegalStateException: Can't load . Instance class loader is already closed. return className.equals("org.apache.cassandra.utils.concurrent.Ref$OnLeak") || className.startsWith("org.apache.cassandra.metrics.RestorableMeter") + || className.equals("org.apache.logging.slf4j.EventDataConverter") || (className.startsWith("org.apache.cassandra.analytics.") && className.contains("BBHelper")); }; diff --git a/cassandra-analytics-integration-framework/src/main/java/org/apache/cassandra/sidecar/testing/MtlsTestHelper.java b/cassandra
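The commit above provisions test certificates with BouncyCastle (the new bcprov/bcpkix dependencies and CertificateBuilder/CertificateBundle classes). As a point of reference, a minimal self-signed test certificate can be produced along these lines; this sketch is not the new CertificateBuilder, and the key size, validity window, and subject are arbitrary:

{code:java}
import java.math.BigInteger;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.cert.X509Certificate;
import java.util.Date;

import org.bouncycastle.asn1.x500.X500Name;
import org.bouncycastle.cert.X509CertificateHolder;
import org.bouncycastle.cert.jcajce.JcaX509CertificateConverter;
import org.bouncycastle.cert.jcajce.JcaX509v3CertificateBuilder;
import org.bouncycastle.operator.ContentSigner;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;

public class SelfSignedTestCertSketch
{
    public static X509Certificate generate() throws Exception
    {
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();

        X500Name subject = new X500Name("CN=localhost");
        Date notBefore = new Date();
        Date notAfter = new Date(notBefore.getTime() + 24L * 60 * 60 * 1000); // valid for one day

        JcaX509v3CertificateBuilder builder = new JcaX509v3CertificateBuilder(
                subject,                                        // self-signed: issuer == subject
                BigInteger.valueOf(System.currentTimeMillis()), // serial number
                notBefore, notAfter, subject, keyPair.getPublic());

        ContentSigner signer = new JcaContentSignerBuilder("SHA256withRSA").build(keyPair.getPrivate());
        X509CertificateHolder holder = builder.build(signer);
        return new JcaX509CertificateConverter().getCertificate(holder);
    }
}
{code}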
Re: [PR] CASSANDRA-19526: Optionally enable TLS in the server and client for A… [cassandra-analytics]
frankgh merged PR #52: URL: https://github.com/apache/cassandra-analytics/pull/52 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19526) Optionally enable TLS in the server and client for Analytics testing
[ https://issues.apache.org/jira/browse/CASSANDRA-19526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yifan Cai updated CASSANDRA-19526: -- Reviewers: Yifan Cai Status: Review In Progress (was: Patch Available) +1 > Optionally enable TLS in the server and client for Analytics testing > > > Key: CASSANDRA-19526 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19526 > Project: Cassandra > Issue Type: Improvement > Components: Analytics Library >Reporter: Doug Rohrer >Assignee: Francisco Guerrero >Priority: Normal > Time Spent: 20m > Remaining Estimate: 0h > > All integration tests today run without SSL, which is generally fine because > they run locally. However, it would be helpful to be able to start up the > sidecar with SSL enabled in the integration test framework so that > third-party tests could connect via secure connections for testing purposes. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
Re: [PR] CASSANDRA-19526: Optionally enable TLS in the server and client for A… [cassandra-analytics]
yifan-c commented on code in PR #52: URL: https://github.com/apache/cassandra-analytics/pull/52#discussion_r1556417502 ## cassandra-analytics-integration-tests/build.gradle: ## @@ -44,6 +44,9 @@ println("Using ${integrationMaxHeapSize} maxHeapSize") def integrationMaxParallelForks = (System.getenv("INTEGRATION_MAX_PARALLEL_FORKS") ?: "4") as int println("Using ${integrationMaxParallelForks} maxParallelForks") +def integrationEnableMtls = (System.getenv("INTEGRATION_ENABLE_MTLS") ?: "true") as boolean Review Comment: nit: `INTEGRATION_MTLS_ENABLED` for boolean -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
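For reference only (not part of the patch), the same default-on switch reads naturally as plain Java inside the test code; the environment variable name below follows the reviewer's suggested `_ENABLED` convention and is an assumption, not something the PR defines:

{code:java}
// Hypothetical helper mirroring the Gradle line under review: mTLS is on unless explicitly disabled.
public final class MtlsFlag
{
    public static boolean integrationMtlsEnabled()
    {
        return Boolean.parseBoolean(System.getenv().getOrDefault("INTEGRATION_MTLS_ENABLED", "true"));
    }
}
{code}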
[jira] [Commented] (CASSANDRA-19526) Optionally enable TLS in the server and client for Analytics testing
[ https://issues.apache.org/jira/browse/CASSANDRA-19526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835041#comment-17835041 ] Francisco Guerrero commented on CASSANDRA-19526: Updated CI: https://app.circleci.com/pipelines/github/frankgh/cassandra-analytics/175/workflows/71481543-e81b-4a35-b635-33e3e7b71714 > Optionally enable TLS in the server and client for Analytics testing > > > Key: CASSANDRA-19526 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19526 > Project: Cassandra > Issue Type: Improvement > Components: Analytics Library >Reporter: Doug Rohrer >Assignee: Francisco Guerrero >Priority: Normal > Time Spent: 10m > Remaining Estimate: 0h > > All integration tests today run without SSL, which is generally fine because > they run locally. However, it would be helpful to be able to start up the > sidecar with SSL enabled in the integration test framework so that > third-party tests could connect via secure connections for testing purposes. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
(cassandra-sidecar) 01/01: feedback
This is an automated email from the ASF dual-hosted git repository. ycai pushed a commit to branch r/apache/pr109 in repository https://gitbox.apache.org/repos/asf/cassandra-sidecar.git commit 60d78f2651d0e735c36ec48c343707acd3384f36 Author: Yifan Cai AuthorDate: Mon Apr 8 13:43:05 2024 -0700 feedback --- .../sidecar/metrics/FilteringMetricRegistry.java | 81 +-- .../sidecar/metrics/MetricRegistryFactory.java | 149 + .../sidecar/metrics/MetricRegistryProvider.java| 83 .../cassandra/sidecar/server/MainModule.java | 16 +-- .../testing/CassandraSidecarTestContext.java | 10 +- .../metrics/FilteringMetricRegistryTest.java | 62 - .../cassandra/sidecar/snapshots/SnapshotUtils.java | 12 +- 7 files changed, 232 insertions(+), 181 deletions(-) diff --git a/src/main/java/org/apache/cassandra/sidecar/metrics/FilteringMetricRegistry.java b/src/main/java/org/apache/cassandra/sidecar/metrics/FilteringMetricRegistry.java index f43efcaf..9ea05ee5 100644 --- a/src/main/java/org/apache/cassandra/sidecar/metrics/FilteringMetricRegistry.java +++ b/src/main/java/org/apache/cassandra/sidecar/metrics/FilteringMetricRegistry.java @@ -18,8 +18,11 @@ package org.apache.cassandra.sidecar.metrics; -import java.util.ArrayList; -import java.util.List; +import java.util.Collections; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ConcurrentMap; +import java.util.function.Predicate; import com.codahale.metrics.Counter; import com.codahale.metrics.Gauge; @@ -36,55 +39,28 @@ import com.codahale.metrics.Timer; public class FilteringMetricRegistry extends MetricRegistry { private static final NoopMetricRegistry NO_OP_METRIC_REGISTRY = new NoopMetricRegistry(); // supplies no-op metrics -private final List include = new ArrayList<>(); -private final List exclude = new ArrayList<>(); +private final Predicate isAllowed; +// all metrics including the allowed and disallowed +private final ConcurrentMap allMetrics = new ConcurrentHashMap<>(); -public FilteringMetricRegistry(List includeFilters, List excludeFilters) +public FilteringMetricRegistry(Predicate isAllowed) { -this.include.addAll(includeFilters); -this.exclude.addAll(excludeFilters); -} - -public synchronized void configureFilters(List include, List exclude) -{ -this.include.clear(); -this.include.addAll(include); -this.exclude.clear(); -this.exclude.addAll(exclude); -} - -public synchronized void resetFilters() -{ -include.clear(); -exclude.clear(); -} - -/** - * Check if the metric is allowed to register. . - * @param name metric name - * @return true if allowed; false otherwise - */ -public boolean isAllowed(String name) -{ -boolean included = include.stream().anyMatch(filter -> filter.matches(name)); -boolean excluded = exclude.stream().anyMatch(filter -> filter.matches(name)); -return included && !excluded; +this.isAllowed = isAllowed; } @Override public Counter counter(String name) { -if (isAllowed(name)) -{ -return super.counter(name); -} -return NO_OP_METRIC_REGISTRY.counter(name); +// TODO: populate allMetrics in the other methods; it needs to be populated in order to let vertx internal that the metric has been registered and to avoid registration loop +Counter counter = isAllowed.test(name) ? 
super.counter(name) : NO_OP_METRIC_REGISTRY.counter(name); +allMetrics.putIfAbsent(name, counter); +return counter; } @Override public Counter counter(String name, MetricSupplier supplier) { -if (isAllowed(name)) +if (isAllowed.test(name)) { return super.counter(name, supplier); } @@ -94,7 +70,7 @@ public class FilteringMetricRegistry extends MetricRegistry @Override public Histogram histogram(String name) { -if (isAllowed(name)) +if (isAllowed.test(name)) { return super.histogram(name); } @@ -104,7 +80,7 @@ public class FilteringMetricRegistry extends MetricRegistry @Override public Histogram histogram(String name, MetricSupplier supplier) { -if (isAllowed(name)) +if (isAllowed.test(name)) { return super.histogram(name, supplier); } @@ -114,7 +90,7 @@ public class FilteringMetricRegistry extends MetricRegistry @Override public Meter meter(String name) { -if (isAllowed(name)) +if (isAllowed.test(name)) { return super.meter(name); } @@ -124,7 +100,7 @@ public class FilteringMetricRegistry extends MetricRegistry @Override public Meter meter(String name, MetricSupplier suppli
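The shape of the change above — replace the include/exclude filter lists with a single Predicate and hand out unregistered metric instances for disallowed names — boils down to a small sketch. This is a simplification, not the sidecar class; a plain `new Counter()` stands in for the NoopMetricRegistry:

{code:java}
import java.util.function.Predicate;

import com.codahale.metrics.Counter;
import com.codahale.metrics.MetricRegistry;

// Simplified illustration of the filtering pattern shown in the diff above.
public class FilteringRegistrySketch extends MetricRegistry
{
    private final Predicate<String> isAllowed;

    public FilteringRegistrySketch(Predicate<String> isAllowed)
    {
        this.isAllowed = isAllowed;
    }

    @Override
    public Counter counter(String name)
    {
        // Allowed names register (and are reused) in the real registry; disallowed names get a
        // throwaway Counter that is never registered, so updates to it are effectively dropped.
        return isAllowed.test(name) ? super.counter(name) : new Counter();
    }
}

// Usage: only publish metrics under a chosen prefix (prefix picked for illustration).
// MetricRegistry registry = new FilteringRegistrySketch(name -> name.startsWith("sidecar."));
{code}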
(cassandra-sidecar) branch r/apache/pr109 created (now 60d78f26)
This is an automated email from the ASF dual-hosted git repository. ycai pushed a change to branch r/apache/pr109 in repository https://gitbox.apache.org/repos/asf/cassandra-sidecar.git at 60d78f26 feedback This branch includes the following new commits: new 60d78f26 feedback The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19525) Optionally avoid hint transfer during decommission - port from 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835040#comment-17835040 ] Caleb Rackliffe commented on CASSANDRA-19525: - [~paulchandler] Thanks for the patches! > Optionally avoid hint transfer during decommission - port from 5.0 > -- > > Key: CASSANDRA-19525 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19525 > Project: Cassandra > Issue Type: Improvement > Components: Consistency/Hints >Reporter: Paul Chandler >Assignee: Paul Chandler >Priority: Normal > Fix For: 4.0.x, 4.1.x > > Attachments: CASSANDRA-19525_4.0.patch, CASSANDRA-19525_4.1.patch, > ci_summary.html > > > This ticket is to port the changes already made for > https://issues.apache.org/jira/browse/CASSANDRA-17808 to 4.0 and 4.1 > This will allow the option to turn off the transferring of hints during > decommission (specifically unbootstrap) > This also allows the hints to be transferred at a higher rate during > decommission, as the hinted_handoff_throttle is not divided by the number of > nodes in the cluster for the unbootstrap process. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19525) Optionally avoid hint transfer during decommission - port from 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Caleb Rackliffe updated CASSANDRA-19525: Fix Version/s: 4.0.13 4.1.5 (was: 4.0.x) (was: 4.1.x) > Optionally avoid hint transfer during decommission - port from 5.0 > -- > > Key: CASSANDRA-19525 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19525 > Project: Cassandra > Issue Type: Improvement > Components: Consistency/Hints >Reporter: Paul Chandler >Assignee: Paul Chandler >Priority: Normal > Fix For: 4.0.13, 4.1.5 > > Attachments: CASSANDRA-19525_4.0.patch, CASSANDRA-19525_4.1.patch, > ci_summary.html > > > This ticket is to port the changes already made for > https://issues.apache.org/jira/browse/CASSANDRA-17808 to 4.0 and 4.1 > This will allow the option to turn off the transferring of hints during > decommission (specifically unbootstrap) > This also allows the hints to be transferred at a higher rate during > decommission, as the hinted_handoff_throttle is not divided by the number of > nodes in the cluster for the unbootstrap process. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19525) Optionally avoid hint transfer during decommission - port from 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Caleb Rackliffe updated CASSANDRA-19525: Source Control Link: https://github.com/apache/cassandra/commit/0974a3656dd4fd98b527264a763b50980f49be24 Resolution: Fixed Status: Resolved (was: Ready to Commit) > Optionally avoid hint transfer during decommission - port from 5.0 > -- > > Key: CASSANDRA-19525 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19525 > Project: Cassandra > Issue Type: Improvement > Components: Consistency/Hints >Reporter: Paul Chandler >Assignee: Paul Chandler >Priority: Normal > Fix For: 4.0.x, 4.1.x > > Attachments: CASSANDRA-19525_4.0.patch, CASSANDRA-19525_4.1.patch, > ci_summary.html > > > This ticket is to port the changes already made for > https://issues.apache.org/jira/browse/CASSANDRA-17808 to 4.0 and 4.1 > This will allow the option to turn off the transferring of hints during > decommission (specifically unbootstrap) > This also allows the hints to be transferred at a higher rate during > decommission, as the hinted_handoff_throttle is not divided by the number of > nodes in the cluster for the unbootstrap process. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
(cassandra) branch cassandra-5.0 updated (9752ceb439 -> 7c29439cef)
This is an automated email from the ASF dual-hosted git repository. maedhroz pushed a change to branch cassandra-5.0 in repository https://gitbox.apache.org/repos/asf/cassandra.git from 9752ceb439 Merge branch 'cassandra-4.1' into cassandra-5.0 new 0974a3656d Optionally avoid hint transfer during decommission new 39bd3c2261 Merge branch 'cassandra-4.0' into cassandra-4.1 new 7c29439cef Merge branch 'cassandra-4.1' into cassandra-5.0 The 3 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
(cassandra) 01/01: Merge branch 'cassandra-5.0' into trunk
This is an automated email from the ASF dual-hosted git repository. maedhroz pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/cassandra.git commit cddfd7f17a3e676ab35a1dbdaacb66377cd2556b Merge: 7623e4678b 7c29439cef Author: Caleb Rackliffe AuthorDate: Mon Apr 8 15:30:29 2024 -0500 Merge branch 'cassandra-5.0' into trunk * cassandra-5.0: Optionally avoid hint transfer during decommission - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19448) CommitlogArchiver only has granularity to seconds for restore_point_in_time
[ https://issues.apache.org/jira/browse/CASSANDRA-19448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835036#comment-17835036 ] Brandon Williams commented on CASSANDRA-19448: -- I [removed|https://github.com/driftx/cassandra/commit/a060134cb36d2be2d9c7529e92e7a5acb1400ca5] the Map.of usage and here is [4.0 j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1572/workflows/899cfaf5-0f5d-47dd-82e0-f7531dfb1abc] and [4.1 j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1577/workflows/4a17a2f1-c20d-4f5a-84f3-d26cec580fe5/jobs/83623], but it turns out the commitlog archiver test is flaky. > CommitlogArchiver only has granularity to seconds for restore_point_in_time > --- > > Key: CASSANDRA-19448 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19448 > Project: Cassandra > Issue Type: Bug > Components: Local/Commit Log >Reporter: Jeremy Hanna >Assignee: Maxwell Guo >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 10m > Remaining Estimate: 0h > > Commitlog archiver allows users to backup commitlog files for the purpose of > doing point in time restores. The [configuration > file|https://github.com/apache/cassandra/blob/trunk/conf/commitlog_archiving.properties] > gives an example of down to the seconds granularity but then asks what > whether the timestamps are microseconds or milliseconds - defaulting to > microseconds. Because the [CommitLogArchiver uses a second based date > format|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogArchiver.java#L52], > if a user specifies to restore at something at a lower granularity like > milliseconds or microseconds, that means that the it will truncate everything > after the second and restore to that second. So say you specify a > restore_point_in_time like this: > restore_point_in_time=2024:01:18 17:01:01.623392 > it will silently truncate everything after the 01 seconds. So effectively to > the user, it is missing updates between 01 and 01.623392. > This appears to be a bug in the intent. We should allow users to specify > down to the millisecond or even microsecond level. If we allow them to > specify down to microseconds for the restore point in time, then it may > internally need to change from a long. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
(cassandra) 01/01: Merge branch 'cassandra-4.0' into cassandra-4.1
This is an automated email from the ASF dual-hosted git repository. maedhroz pushed a commit to branch cassandra-4.1 in repository https://gitbox.apache.org/repos/asf/cassandra.git commit 39bd3c22612e99239f91f881558329251a4c4e98 Merge: 5fb562d7ef 0974a3656d Author: Caleb Rackliffe AuthorDate: Mon Apr 8 15:27:12 2024 -0500 Merge branch 'cassandra-4.0' into cassandra-4.1 * cassandra-4.0: Optionally avoid hint transfer during decommission CHANGES.txt| 1 + conf/cassandra.yaml| 5 +++ src/java/org/apache/cassandra/config/Config.java | 1 + .../cassandra/config/DatabaseDescriptor.java | 9 + .../cassandra/hints/HintsDispatchExecutor.java | 21 ++ .../apache/cassandra/service/StorageService.java | 28 -- .../cassandra/service/StorageServiceMBean.java | 4 ++ .../test/HintedHandoffAddRemoveNodesTest.java | 45 +- 8 files changed, 103 insertions(+), 11 deletions(-) diff --cc CHANGES.txt index de36d88adf,b71ca9254f..8f718e4e89 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,8 -1,5 +1,9 @@@ -4.0.13 +4.1.5 + * Fix hints delivery for a node going down repeatedly (CASSANDRA-19495) + * Do not go to disk for reading hints file sizes (CASSANDRA-19477) + * Fix system_views.settings to handle array types (CASSANDRA-19475) +Merged from 4.0: + * Optionally avoid hint transfer during decommission (CASSANDRA-19525) * Change logging to TRACE when failing to get peer certificate (CASSANDRA-19508) * Push LocalSessions info logs to debug (CASSANDRA-18335) * Filter remote DC replicas out when constructing the initial replica plan for the local read repair (CASSANDRA-19120) diff --cc conf/cassandra.yaml index 1986d6fa29,b5e6af8767..be1439c80e --- a/conf/cassandra.yaml +++ b/conf/cassandra.yaml @@@ -83,22 -81,16 +83,27 @@@ max_hints_delivery_threads: # How often hints should be flushed from the internal buffers to disk. # Will *not* trigger fsync. -hints_flush_period_in_ms: 1 +# Min unit: ms +hints_flush_period: 1ms -# Maximum size for a single hints file, in megabytes. -max_hints_file_size_in_mb: 128 +# Maximum size for a single hints file, in mebibytes. +# Min unit: MiB +max_hints_file_size: 128MiB + +# The file size limit to store hints for an unreachable host, in mebibytes. +# Once the local hints files have reached the limit, no more new hints will be created. +# Set a non-positive value will disable the size limit. +# max_hints_size_per_host: 0MiB + +# Enable / disable automatic cleanup for the expired and orphaned hints file. +# Disable the option in order to preserve those hints on the disk. +auto_hints_cleanup_enabled: false + # Enable/disable transfering hints to a peer during decommission. Even when enabled, this does not guarantee + # consistency for logged batches, and it may delay decommission when coupled with a strict hinted_handoff_throttle. + # Default: true -#transfer_hints_on_decommission: true ++# transfer_hints_on_decommission: true + # Compression to apply to the hint files. If omitted, hints files # will be written uncompressed. LZ4, Snappy, and Deflate compressors # are supported. 
diff --cc src/java/org/apache/cassandra/config/Config.java index 298903acf3,dc17639f98..3a62aae7c7 --- a/src/java/org/apache/cassandra/config/Config.java +++ b/src/java/org/apache/cassandra/config/Config.java @@@ -409,19 -290,13 +409,20 @@@ public class Confi public InternodeCompression internode_compression = InternodeCompression.none; -public int hinted_handoff_throttle_in_kb = 1024; -public int batchlog_replay_throttle_in_kb = 1024; +@Replaces(oldName = "hinted_handoff_throttle_in_kb", converter = Converters.KIBIBYTES_DATASTORAGE, deprecated = true) +public DataStorageSpec.IntKibibytesBound hinted_handoff_throttle = new DataStorageSpec.IntKibibytesBound("1024KiB"); +@Replaces(oldName = "batchlog_replay_throttle_in_kb", converter = Converters.KIBIBYTES_DATASTORAGE, deprecated = true) +public DataStorageSpec.IntKibibytesBound batchlog_replay_throttle = new DataStorageSpec.IntKibibytesBound("1024KiB"); public int max_hints_delivery_threads = 2; -public int hints_flush_period_in_ms = 1; -public int max_hints_file_size_in_mb = 128; +@Replaces(oldName = "hints_flush_period_in_ms", converter = Converters.MILLIS_DURATION_INT, deprecated = true) +public DurationSpec.IntMillisecondsBound hints_flush_period = new DurationSpec.IntMillisecondsBound("10s"); +@Replaces(oldName = "max_hints_file_size_in_mb", converter = Converters.MEBIBYTES_DATA_STORAGE_INT, deprecated = true) +public DataStorageSpec.IntMebibytesBound max_hints_file_size = new DataStorageSpec.IntMebibytesBound("128MiB"); +public volatile DataStorageSpec.LongBytesBound max_hints_size_per_host = new DataStorageSpec.LongBytesBound("0B"); // 0 means disabled
(cassandra) branch cassandra-4.1 updated (5fb562d7ef -> 39bd3c2261)
This is an automated email from the ASF dual-hosted git repository. maedhroz pushed a change to branch cassandra-4.1 in repository https://gitbox.apache.org/repos/asf/cassandra.git from 5fb562d7ef Fix hints delivery for a node going down repeatedly new 0974a3656d Optionally avoid hint transfer during decommission new 39bd3c2261 Merge branch 'cassandra-4.0' into cassandra-4.1 The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: CHANGES.txt| 1 + conf/cassandra.yaml| 5 +++ src/java/org/apache/cassandra/config/Config.java | 1 + .../cassandra/config/DatabaseDescriptor.java | 9 + .../cassandra/hints/HintsDispatchExecutor.java | 21 ++ .../apache/cassandra/service/StorageService.java | 28 -- .../cassandra/service/StorageServiceMBean.java | 4 ++ .../test/HintedHandoffAddRemoveNodesTest.java | 45 +- 8 files changed, 103 insertions(+), 11 deletions(-) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
(cassandra) 01/01: Merge branch 'cassandra-4.1' into cassandra-5.0
This is an automated email from the ASF dual-hosted git repository. maedhroz pushed a commit to branch cassandra-5.0 in repository https://gitbox.apache.org/repos/asf/cassandra.git commit 7c29439cef3d3649b8e75bd0e716feed832b1999 Merge: 9752ceb439 39bd3c2261 Author: Caleb Rackliffe AuthorDate: Mon Apr 8 15:29:17 2024 -0500 Merge branch 'cassandra-4.1' into cassandra-5.0 * cassandra-4.1: Optionally avoid hint transfer during decommission - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
(cassandra) branch trunk updated (7623e4678b -> cddfd7f17a)
This is an automated email from the ASF dual-hosted git repository. maedhroz pushed a change to branch trunk in repository https://gitbox.apache.org/repos/asf/cassandra.git from 7623e4678b The result of applying a metadata snapshot via ForceSnapshot should return the correct set of modified keys new 0974a3656d Optionally avoid hint transfer during decommission new 39bd3c2261 Merge branch 'cassandra-4.0' into cassandra-4.1 new 7c29439cef Merge branch 'cassandra-4.1' into cassandra-5.0 new cddfd7f17a Merge branch 'cassandra-5.0' into trunk The 4 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
(cassandra) branch cassandra-4.0 updated: Optionally avoid hint transfer during decommission
This is an automated email from the ASF dual-hosted git repository. maedhroz pushed a commit to branch cassandra-4.0 in repository https://gitbox.apache.org/repos/asf/cassandra.git The following commit(s) were added to refs/heads/cassandra-4.0 by this push: new 0974a3656d Optionally avoid hint transfer during decommission 0974a3656d is described below commit 0974a3656dd4fd98b527264a763b50980f49be24 Author: Caleb Rackliffe AuthorDate: Fri Apr 5 15:26:39 2024 -0500 Optionally avoid hint transfer during decommission patch by Paul Chandler; reviewed by Caleb Rackliffe and Brandon Williams for CASSANDRA-19525 --- CHANGES.txt| 1 + conf/cassandra.yaml| 5 +++ src/java/org/apache/cassandra/config/Config.java | 1 + .../cassandra/config/DatabaseDescriptor.java | 10 + .../cassandra/hints/HintsDispatchExecutor.java | 20 ++ .../apache/cassandra/service/StorageService.java | 29 +-- .../cassandra/service/StorageServiceMBean.java | 3 ++ .../test/HintedHandoffAddRemoveNodesTest.java | 43 ++ 8 files changed, 102 insertions(+), 10 deletions(-) diff --git a/CHANGES.txt b/CHANGES.txt index 20f4fd47ea..b71ca9254f 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 4.0.13 + * Optionally avoid hint transfer during decommission (CASSANDRA-19525) * Change logging to TRACE when failing to get peer certificate (CASSANDRA-19508) * Push LocalSessions info logs to debug (CASSANDRA-18335) * Filter remote DC replicas out when constructing the initial replica plan for the local read repair (CASSANDRA-19120) diff --git a/conf/cassandra.yaml b/conf/cassandra.yaml index 7f162749d2..b5e6af8767 100644 --- a/conf/cassandra.yaml +++ b/conf/cassandra.yaml @@ -86,6 +86,11 @@ hints_flush_period_in_ms: 1 # Maximum size for a single hints file, in megabytes. max_hints_file_size_in_mb: 128 +# Enable/disable transfering hints to a peer during decommission. Even when enabled, this does not guarantee +# consistency for logged batches, and it may delay decommission when coupled with a strict hinted_handoff_throttle. +# Default: true +#transfer_hints_on_decommission: true + # Compression to apply to the hint files. If omitted, hints files # will be written uncompressed. LZ4, Snappy, and Deflate compressors # are supported. 
diff --git a/src/java/org/apache/cassandra/config/Config.java b/src/java/org/apache/cassandra/config/Config.java index d7517124df..dc17639f98 100644 --- a/src/java/org/apache/cassandra/config/Config.java +++ b/src/java/org/apache/cassandra/config/Config.java @@ -296,6 +296,7 @@ public class Config public int hints_flush_period_in_ms = 1; public int max_hints_file_size_in_mb = 128; public ParameterizedClass hints_compression; +public volatile boolean transfer_hints_on_decommission = true; public volatile boolean incremental_backups = false; public boolean trickle_fsync = false; diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java index 377f67117d..561fc24116 100644 --- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java +++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java @@ -2596,6 +2596,16 @@ public class DatabaseDescriptor conf.hints_compression = parameterizedClass; } +public static boolean getTransferHintsOnDecommission() +{ +return conf.transfer_hints_on_decommission; +} + +public static void setTransferHintsOnDecommission(boolean enabled) +{ +conf.transfer_hints_on_decommission = enabled; +} + public static boolean isIncrementalBackupsEnabled() { return conf.incremental_backups; diff --git a/src/java/org/apache/cassandra/hints/HintsDispatchExecutor.java b/src/java/org/apache/cassandra/hints/HintsDispatchExecutor.java index b5eb0b1fac..54e13f428b 100644 --- a/src/java/org/apache/cassandra/hints/HintsDispatchExecutor.java +++ b/src/java/org/apache/cassandra/hints/HintsDispatchExecutor.java @@ -182,7 +182,7 @@ final class HintsDispatchExecutor private boolean transfer(UUID hostId) { catalog.stores() - .map(store -> new DispatchHintsTask(store, hostId)) + .map(store -> new DispatchHintsTask(store, hostId, true)) .forEach(Runnable::run); return !catalog.hasFiles(); @@ -195,21 +195,27 @@ final class HintsDispatchExecutor private final UUID hostId; private final RateLimiter rateLimiter; -DispatchHintsTask(HintsStore store, UUID hostId) +DispatchHintsTask(HintsStore store, UUID hostId, boolean isTransfer) { this.store = store; this.hostId = hostId; -// rate limit is in bytes per second. Uses Double.MAX_VALUE if disabled (set to 0 in cas
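Condensed, the decision point the patch adds during unbootstrap looks roughly like the sketch below. The getter is the one added to DatabaseDescriptor in the diff above; the wrapper class and method names here are illustrative only, and the diff's touch on StorageServiceMBean presumably exposes the matching runtime toggle.

{code:java}
import org.apache.cassandra.config.DatabaseDescriptor;

// Sketch of the new switch as consumed during decommission/unbootstrap (surrounding names are
// illustrative; see StorageService in the actual commit for the real call site).
public final class HintTransferDecision
{
    public static boolean shouldTransferHints()
    {
        // Backed by the new yaml option transfer_hints_on_decommission (default: true).
        return DatabaseDescriptor.getTransferHintsOnDecommission();
    }
}
{code}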
[jira] [Updated] (CASSANDRA-19526) Optionally enable TLS in the server and client for Analytics testing
[ https://issues.apache.org/jira/browse/CASSANDRA-19526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francisco Guerrero updated CASSANDRA-19526: --- Authors: Doug Rohrer, Francisco Guerrero (was: Francisco Guerrero) Test and Documentation Plan: Adds the ability to test the TLS code path Status: Patch Available (was: In Progress) PR: https://github.com/apache/cassandra-analytics/pull/52 CI: https://app.circleci.com/pipelines/github/frankgh/cassandra-analytics/175/workflows/3919b8af-abdc-42c2-bccc-06c4ea1db108 > Optionally enable TLS in the server and client for Analytics testing > > > Key: CASSANDRA-19526 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19526 > Project: Cassandra > Issue Type: Improvement > Components: Analytics Library >Reporter: Doug Rohrer >Assignee: Francisco Guerrero >Priority: Normal > Time Spent: 10m > Remaining Estimate: 0h > > All integration tests today run without SSL, which is generally fine because > they run locally. However, it would be helpful to be able to start up the > sidecar with SSL enabled in the integration test framework so that > third-party tests could connect via secure connections for testing purposes. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
Re: [PR] Add note about guardrails on `WHERE … IN … ` [cassandra-java-driver]
michaelsembwever commented on PR #1899: URL: https://github.com/apache/cassandra-java-driver/pull/1899#issuecomment-2043548118 I agree with your concerns. I think the easiest way to address this is to change the language to be informative about server-side restrictions that the user may hit, and to reference the correct Cassandra docs. From the driver's PoV these restrictions are ultimately unknown. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
Re: [PR] Add note about guardrails on `WHERE … IN … ` [cassandra-java-driver]
absurdfarce commented on PR #1899: URL: https://github.com/apache/cassandra-java-driver/pull/1899#issuecomment-2043504526 I do have a few questions. @michaelsembwever I don't know if you can help with some/all of these... I can follow up other places if need be. 1. This change appears to stem from the work on CEP-3. Does this only apply to OSS C* or does it also apply to DSE and/or Astra as well? 2. If it's just a C* thing is there a version dependency on when users need to worry about this behaviour? It looks like most of the CEP-3 work came in with 4.1... should we limit this advice to specific versions (or describe differing behaviours for impls before and after CEP-3 came along)? 3. These docs reference a limit of 25 values in an IN clause. Is that a fixed limit or is it configurable? If it's configurable we might want to provide guidance to the user about how to configure it. 4. Seems like we might want to provide a pointer to other information about these constraints. This could take the form of a pointer to CEP-3 but if there are other resources that explain this in a user-friendly way we should consider those instead. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
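For context, an illustrative driver-side snippet (not from the PR; the keyspace, table, and column names are made up): the client simply binds a list for the IN clause, and any cap on the number of values is enforced by the server's guardrails, which is why the note can only point readers at server configuration.

{code:java}
import java.util.Arrays;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.PreparedStatement;

// Hypothetical keyspace/table/column; the size limit, if any, is decided server-side.
public final class InClauseExample
{
    public static void main(String[] args)
    {
        try (CqlSession session = CqlSession.builder().build())
        {
            PreparedStatement ps = session.prepare("SELECT * FROM ks.tbl WHERE pk IN ?");
            session.execute(ps.bind(Arrays.asList(1, 2, 3)));
        }
    }
}
{code}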
[jira] [Commented] (CASSANDRA-16364) Joining nodes simultaneously with auto_bootstrap:false can cause token collision
[ https://issues.apache.org/jira/browse/CASSANDRA-16364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835024#comment-17835024 ] Jon Haddad commented on CASSANDRA-16364: Just ran into this with the latest 4.0, starting 9 nodes simultaneously, all marked as seeds. Using this many seeds was an oversight, but I thought we randomized the tokens in a way that would prevent this from happening. {noformat} INFO [GossipStage:1] 2024-04-08 18:51:42,228 StorageService.java:2851 - Nodes /172.31.30.76:7000 and /172.31.32.145:7000 have the same token -4365585967229483808. Ignoring /172.31.30.76:7000 INFO [GossipStage:1] 2024-04-08 18:51:42,228 StorageService.java:2851 - Nodes /172.31.30.76:7000 and /172.31.32.145:7000 have the same token 156850771319184154. Ignoring /172.31.30.76:7000 INFO [GossipStage:1] 2024-04-08 18:51:42,228 StorageService.java:2851 - Nodes /172.31.30.76:7000 and /172.31.32.145:7000 have the same token 7039551456192731860. Ignoring /172.31.30.76:7000 INFO [GossipStage:1] 2024-04-08 18:51:42,229 StorageService.java:2851 - Nodes /172.31.30.76:7000 and /172.31.32.145:7000 have the same token 8579899636253633675. Ignoring /172.31.30.76:7000{noformat} > Joining nodes simultaneously with auto_bootstrap:false can cause token > collision > > > Key: CASSANDRA-16364 > URL: https://issues.apache.org/jira/browse/CASSANDRA-16364 > Project: Cassandra > Issue Type: Bug > Components: Cluster/Membership >Reporter: Paulo Motta >Priority: Normal > Fix For: 4.0.x > > > While raising a 6-node ccm cluster to test 4.0-beta4, 2 nodes chosen the same > tokens using the default {{allocate_tokens_for_local_rf}}. However they both > succeeded bootstrap with colliding tokens. > We were familiar with this issue from CASSANDRA-13701 and CASSANDRA-16079, > and the workaround to fix this is to avoid parallel bootstrap when using > {{allocate_tokens_for_local_rf}}. > However, since this is the default behavior, we should try to detect and > prevent this situation when possible, since it can break users relying on > parallel bootstrap behavior. > I think we could prevent this as following: > 1. announce intent to bootstrap via gossip (ie. add node on gossip without > token information) > 2. wait for gossip to settle for a longer period (ie. ring delay) > 3. allocate tokens (if multiple bootstrap attempts are detected, tie break > via node-id) > 4. broadcast tokens and move on with bootstrap -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
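A rough back-of-the-envelope bound supports the expectation that purely random tokens essentially never collide, which points at the deterministic allocator racing on the same cluster view rather than bad luck. The figures below are an illustration, not from the ticket, and assume the 4.0 default of 16 tokens per node:

{noformat}
Birthday bound over the 2^64-value Murmur3 token space:
  P(collision) ≈ n(n-1) / 2^65
  n = 9 nodes x 16 tokens = 144   =>   P ≈ 144 * 143 / 2^65 ≈ 5.6e-16
{noformat}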
[jira] [Comment Edited] (CASSANDRA-19448) CommitlogArchiver only has granularity to seconds for restore_point_in_time
[ https://issues.apache.org/jira/browse/CASSANDRA-19448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834997#comment-17834997 ] Brandon Williams edited comment on CASSANDRA-19448 at 4/8/24 7:15 PM: -- I left one small note about a lingering todo that I can remove on commit if you agree. ||Branch||CI|| |[4.0|https://github.com/driftx/cassandra/tree/CASSANDRA-19448-4.0]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1566/workflows/4fae134a-fb02-4b6e-bb97-dd781651b9ef], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1566/workflows/49ac5ce2-0a63-48b5-bc41-da30073dfd73]| |[4.1|https://github.com/driftx/cassandra/tree/CASSANDRA-19448-4.1]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1567/workflows/efaf1236-111d-44cd-bee7-f0cffcc99986], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1567/workflows/e9c2a9b3-2516-4b11-b701-5077144afe75]| |[5.0|https://github.com/driftx/cassandra/tree/CASSANDRA-19448-5.0]|[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1569/workflows/601b0002-7536-4343-8499-e30368d4432d], [j17|https://app.circleci.com/pipelines/github/driftx/cassandra/1569/workflows/839908dc-5622-46da-bd04-4152a1305a43]| |[trunk|https://github.com/driftx/cassandra/tree/CASSANDRA-19448-trunk]|[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1570/workflows/247ebd77-4fe6-4d08-b95d-6c6e75ef99fa], [j17|https://app.circleci.com/pipelines/github/driftx/cassandra/1570/workflows/628377f0-98d7-4531-baa3-429ca5319f01]| was (Author: brandon.williams): I left one small note about a lingering todo that I can remove on commit if you agree. ||Branch||CI|| |[4.0|https://github.com/driftx/cassandra/tree/CASSANDRA-19448-4.0]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1566/workflows/4fae134a-fb02-4b6e-bb97-dd781651b9ef], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1566/workflows/49ac5ce2-0a63-48b5-bc41-da30073dfd73]| |[4.1|https://github.com/driftx/cassandra/tree/CASSANDRA-19448-4.1]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1567/workflows/efaf1236-111d-44cd-bee7-f0cffcc99986], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1567/workflows/e9c2a9b3-2516-4b11-b701-5077144afe75]| |[5.0|https://github.com/driftx/cassandra/tree/CASSANDRA-19448-5.0]|[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1569/workflows/601b0002-7536-4343-8499-e30368d4432d], [j17|https://app.circleci.com/pipelines/github/driftx/cassandra/1569/workflows/839908dc-5622-46da-bd04-4152a1305a43]| |[trunk|https://github.com/driftx/cassandra/tree/CASSANDRA-19448-trunk]|[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1568/workflows/52d7f8aa-b2ee-425f-ad8e-a031ed6f7526], [j17|https://app.circleci.com/pipelines/github/driftx/cassandra/1568/workflows/ed32dc18-3c16-4661-aaf8-570fcd48d608]| > CommitlogArchiver only has granularity to seconds for restore_point_in_time > --- > > Key: CASSANDRA-19448 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19448 > Project: Cassandra > Issue Type: Bug > Components: Local/Commit Log >Reporter: Jeremy Hanna >Assignee: Maxwell Guo >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 10m > Remaining Estimate: 0h > > Commitlog archiver allows users to backup commitlog files for the purpose of > doing point in time restores. 
The [configuration > file|https://github.com/apache/cassandra/blob/trunk/conf/commitlog_archiving.properties] > gives an example of down to the seconds granularity but then asks what > whether the timestamps are microseconds or milliseconds - defaulting to > microseconds. Because the [CommitLogArchiver uses a second based date > format|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogArchiver.java#L52], > if a user specifies to restore at something at a lower granularity like > milliseconds or microseconds, that means that the it will truncate everything > after the second and restore to that second. So say you specify a > restore_point_in_time like this: > restore_point_in_time=2024:01:18 17:01:01.623392 > it will silently truncate everything after the 01 seconds. So effectively to > the user, it is missing updates between 01 and 01.623392. > This appears to be a bug in the intent. We should allow users to specify > down to the millisecond or even microsecond level. If we allow them to > specify down to microseconds for the restore point in time, then it may > internally need to change from a long. -- This message was sent by Atlassian Jira (v8.20.10#820010) -
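The silent truncation described above is easy to reproduce outside Cassandra with the same kind of second-granularity pattern (a standalone illustration, not the CommitLogArchiver code):

{code:java}
import java.text.SimpleDateFormat;
import java.util.Date;

public final class RestorePointTruncation
{
    public static void main(String[] args) throws Exception
    {
        // Second-based pattern in the style of the commitlog_archiving.properties examples
        SimpleDateFormat format = new SimpleDateFormat("yyyy:MM:dd HH:mm:ss");
        Date parsed = format.parse("2024:01:18 17:01:01.623392");
        // Prints "2024:01:18 17:01:01" -- the .623392 fraction is silently dropped,
        // so the restore point loses its sub-second precision.
        System.out.println(format.format(parsed));
    }
}
{code}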
Re: [PR] CASSANDRA-19457: Object reference in Micrometer metrics prevent GC from reclaiming Session instances [cassandra-java-driver]
SiyaoIsHiding commented on PR #1916: URL: https://github.com/apache/cassandra-java-driver/pull/1916#issuecomment-2043477828 Thank you @adutra, test added! Pending review. I thought Dropwizard was not leaking, but to my surprise, all three failed this test before our fix. They all passed this test after the fix. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
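In generic form, the kind of check being added looks like the sketch below (illustrative only; the real test exercises actual Session instances against each metrics backend):

{code:java}
import java.lang.ref.WeakReference;

// Generic "can the GC reclaim it once released?" check, the shape of test the comment describes.
// A plain Object stands in for a closed Session; JUnit wiring is omitted.
public final class GcReclaimSketch
{
    public static void main(String[] args) throws InterruptedException
    {
        Object session = new Object();
        WeakReference<Object> ref = new WeakReference<>(session);
        session = null; // drop the last strong reference, as closing/discarding the Session should

        for (int i = 0; i < 10 && ref.get() != null; i++)
        {
            System.gc();       // a request, not a guarantee, hence the retry loop
            Thread.sleep(100);
        }

        // If a metrics registry still held a strong reference, ref.get() would stay non-null.
        System.out.println(ref.get() == null ? "reclaimed" : "leaked");
    }
}
{code}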
[PR] CASSANDRA-19526: Optionally enable TLS in the server and client for A… [cassandra-analytics]
frankgh opened a new pull request, #52: URL: https://github.com/apache/cassandra-analytics/pull/52 …nalytics testing All integration tests today run without TLS, which is generally fine because they run locally. However, it is helpful to be able to start up the sidecar with TLS enabled in the integration test framework so that third-party tests could connect via secure connections for testing purposes. Co-authored-by: Doug Rohrer Co-authored-by: Francisco Guerrero Patch by Doug Rohrer, Francisco Guerrero; Reviewed by TBD for CASSANDRA-19526 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Assigned] (CASSANDRA-19526) Optionally enable TLS in the server and client for Analytics testing
[ https://issues.apache.org/jira/browse/CASSANDRA-19526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francisco Guerrero reassigned CASSANDRA-19526: -- Assignee: Francisco Guerrero (was: Doug Rohrer) > Optionally enable TLS in the server and client for Analytics testing > > > Key: CASSANDRA-19526 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19526 > Project: Cassandra > Issue Type: Improvement > Components: Analytics Library >Reporter: Doug Rohrer >Assignee: Francisco Guerrero >Priority: Normal > > All integration tests today run without SSL, which is generally fine because > they run locally. However, it would be helpful to be able to start up the > sidecar with SSL enabled in the integration test framework so that > third-party tests could connect via secure connections for testing purposes. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19479) Fix type issues and provide tests for type compatibility between 4.1 and 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Miklosovic updated CASSANDRA-19479: -- Fix Version/s: 4.0.x 4.1.x 5.0.x 5.x (was: 5.0) (was: 5.1) (was: 4.1.5) (was: 4.0.13) > Fix type issues and provide tests for type compatibility between 4.1 and 5.0 > > > Key: CASSANDRA-19479 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19479 > Project: Cassandra > Issue Type: Task > Components: Legacy/Core, Test/unit >Reporter: Jacek Lewandowski >Assignee: Jacek Lewandowski >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x > > > This is a part of CASSANDRA-14476 - we should verify whether the type > compatibility matrix is upgradable from 4.0 and 4.1 to 5.0, and if not, fix > the remaining issues. > The implemented tests verify the following: > - assumed compatibility between primitive types > - equals method symmetricity > - freezing/unfreezing > - value compatibility by using a serializer of one type to deserialize a > value serialized using a serializer of another type > - serialization compatibility by serializing a row with a column of one type > as a column of another type for simple and complex cells (multicell types) > - (comparison) compatibility by comparing serialized values of one type using > a comparator of another type; for multicell types - build rows and compare > cell paths of a complex type using a cell path comparator of another complex > type > - verify whether types that are (value/serialization/comparison) compatible > in a previous release are still compatible with this release > - store the compatibility matrix in a compressed JSON file so that we can > copy it to future releases to assert backward compatibility (similar approach > to LegacySSTableTest) > - verify that type serializers are different for non-compatible type pairs > which use custom comparisons > Additionally: > - the equals method in {{TupleType}} and {{UserType}} was fixed to be > symmetric. Previously, comparing two values gave a different outcome when > inverted. > - fixed a condition in comparison method of {{AbstractCompositeType}} > - ported a fix for composite and dynamic composite types which adds a > distinct serializers for them so that the serializers for those types and for > {{BytesType}} are considered different; similar thing was done for > {{LexicalUUIDType}} to make its serializer different to {{UUIDType}} > serializer (see > https://the-asf.slack.com/archives/CK23JSY2K/p1712060572432959) > - fixed a problem with DCT builder - in 5.0+ the {{DynamicCompositeType}} > generation has a problem with inverse alias-type mapping which makes it > vulnerable to problems when the same type has two different aliases -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
(cassandra) branch trunk updated: The result of applying a metadata snapshot via ForceSnapshot should return the correct set of modified keys
This is an automated email from the ASF dual-hosted git repository. ifesdjeen pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/cassandra.git The following commit(s) were added to refs/heads/trunk by this push: new 7623e4678b The result of applying a metadata snapshot via ForceSnapshot should return the correct set of modified keys 7623e4678b is described below commit 7623e4678b8ef131434f1de3522c6425c092dff9 Author: Alex Petrov AuthorDate: Mon Mar 25 10:25:50 2024 +0100 The result of applying a metadata snapshot via ForceSnapshot should return the correct set of modified keys Patch by Alex Petrov; reviewed by Marcus Eriksson for CASSANDRA-19128. --- .../org/apache/cassandra/tcm/MetadataKeys.java | 47 + .../cassandra/tcm/ownership/PlacementDeltas.java | 22 +++ .../tcm/transformations/CustomTransformation.java | 44 + .../tcm/transformations/ForceSnapshot.java | 4 +- .../cassandra/tcm/transformations/PrepareMove.java | 6 + .../test/log/ClusterMetadataTestHelper.java| 3 +- .../distributed/test/log/MetadataKeysTest.java | 220 + .../apache/cassandra/harry/gen/EntropySource.java | 5 +- .../org/apache/cassandra/harry/gen/Generators.java | 42 +++- .../org/apache/cassandra/tcm/log/LocalLogTest.java | 10 +- 10 files changed, 388 insertions(+), 15 deletions(-) diff --git a/src/java/org/apache/cassandra/tcm/MetadataKeys.java b/src/java/org/apache/cassandra/tcm/MetadataKeys.java index fda509186b..8028007815 100644 --- a/src/java/org/apache/cassandra/tcm/MetadataKeys.java +++ b/src/java/org/apache/cassandra/tcm/MetadataKeys.java @@ -18,10 +18,17 @@ package org.apache.cassandra.tcm; +import java.util.HashSet; import java.util.Locale; +import java.util.Map; +import java.util.Set; +import java.util.function.Function; import com.google.common.collect.ImmutableSet; +import org.apache.cassandra.tcm.extensions.ExtensionKey; +import org.apache.cassandra.tcm.extensions.ExtensionValue; + public class MetadataKeys { public static final String CORE_NS = MetadataKeys.class.getPackage().getName().toLowerCase(Locale.ROOT); @@ -52,4 +59,44 @@ public class MetadataKeys return new MetadataKey(b.toString()); } +public static ImmutableSet diffKeys(ClusterMetadata before, ClusterMetadata after) +{ +ImmutableSet.Builder builder = new ImmutableSet.Builder<>(); +diffKeys(before, after, builder); +return builder.build(); +} + +private static void diffKeys(ClusterMetadata before, ClusterMetadata after, ImmutableSet.Builder builder) +{ +checkKey(before, after, builder, cm -> cm.schema, MetadataKeys.SCHEMA); +checkKey(before, after, builder, cm -> cm.directory, MetadataKeys.NODE_DIRECTORY); +checkKey(before, after, builder, cm -> cm.tokenMap, MetadataKeys.TOKEN_MAP); +checkKey(before, after, builder, cm -> cm.placements, MetadataKeys.DATA_PLACEMENTS); +checkKey(before, after, builder, cm -> cm.lockedRanges, MetadataKeys.LOCKED_RANGES); +checkKey(before, after, builder, cm -> cm.inProgressSequences, MetadataKeys.IN_PROGRESS_SEQUENCES); + +Set> added = new HashSet<>(after.extensions.keySet()); +for (Map.Entry, ExtensionValue> entry : before.extensions.entrySet()) +{ +ExtensionKey key = entry.getKey(); +added.remove(key); + +if (after.extensions.containsKey(key)) +checkKey(before, after, builder, cm -> cm.extensions.get(key), key); +else +builder.add(key); +} + +for (ExtensionKey key : added) +builder.add(key); +} + +private static void checkKey(ClusterMetadata before, ClusterMetadata after, ImmutableSet.Builder builder, Function> extract, MetadataKey key) +{ +MetadataValue vBefore = 
extract.apply(before); +MetadataValue vAfter = extract.apply(after); + +if (!vBefore.equals(vAfter)) +builder.add(key); +} } diff --git a/src/java/org/apache/cassandra/tcm/ownership/PlacementDeltas.java b/src/java/org/apache/cassandra/tcm/ownership/PlacementDeltas.java index 6b5b817984..6ba80a854b 100644 --- a/src/java/org/apache/cassandra/tcm/ownership/PlacementDeltas.java +++ b/src/java/org/apache/cassandra/tcm/ownership/PlacementDeltas.java @@ -77,6 +77,28 @@ public class PlacementDeltas extends ReplicationMap e : map.entrySet()) +{ +if (!e.getValue().reads.removals.isEmpty()) +return false; +if (!e.getValue().reads.additions.isEmpty()) +return false; + +if (!e.getValue().writes.removals.isEmpty()) +return false; +if (!e.getValue().writes.additions.isEmpty()) +return false; +} + +return true; +} + public static PlacementDeltas empty() { return EMPTY; diff
[jira] [Updated] (CASSANDRA-19476) CQL Management API
[ https://issues.apache.org/jira/browse/CASSANDRA-19476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maxim Muzafarov updated CASSANDRA-19476: Component/s: CQL/Interpreter Tool/nodetool > CQL Management API > -- > > Key: CASSANDRA-19476 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19476 > Project: Cassandra > Issue Type: Improvement > Components: CQL/Interpreter, CQL/Syntax, Tool/nodetool >Reporter: Maxim Muzafarov >Assignee: Maxim Muzafarov >Priority: High > Labels: CEP-38 > Fix For: 5.x > > > We want to run management commands via CQL. > The goals are: > * To provide a way to run predefined management commands via CQL; > * To provide a mechanism for retrieving command definitions and metadata via > CQL; > * To provide information on all available management commands via virtual > tables; > * To provide a registry that stores all C* commands and their metadata > accordingly; > * To internal instrumentation and a reasonable plan for migrating cluster > management from JMX to CQL, taking into account backward compatibility and > adopted deprecation policies; > The discussion on the ML: > https://lists.apache.org/thread/pow83q92m666nqtwyw4m3b18nnkgj2y8 > The design document: > https://cwiki.apache.org/confluence/display/CASSANDRA/CEP-38%3A+CQL+Management+API -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-19428) Clean up KeyRangeIterator classes
[ https://issues.apache.org/jira/browse/CASSANDRA-19428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835001#comment-17835001 ] Ekaterina Dimitrova edited comment on CASSANDRA-19428 at 4/8/24 4:54 PM: - Opened tickets for the rest of the test failures and linked them here and in Butler. was (Author: e.dimitrova): Opened tickets for the rest of the test failures and linked them and in Butler. > Clean up KeyRangeIterator classes > - > > Key: CASSANDRA-19428 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19428 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Ekaterina Dimitrova >Assignee: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.0-beta2, 5.1-alpha, 5.1 > > Attachments: > Make_sure_the_builders_attach_the_onClose_hook_when_there_is_only_a_single_sub-iterator.patch > > Time Spent: 3h 20m > Remaining Estimate: 0h > > Remove KeyRangeIterator.current and simplify -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19428) Clean up KeyRangeIterator classes
[ https://issues.apache.org/jira/browse/CASSANDRA-19428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835001#comment-17835001 ] Ekaterina Dimitrova commented on CASSANDRA-19428: - Opened tickets for the rest of the test failures and linked them and in Butler. > Clean up KeyRangeIterator classes > - > > Key: CASSANDRA-19428 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19428 > Project: Cassandra > Issue Type: Improvement > Components: Feature/2i Index >Reporter: Ekaterina Dimitrova >Assignee: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.0-beta2, 5.1-alpha, 5.1 > > Attachments: > Make_sure_the_builders_attach_the_onClose_hook_when_there_is_only_a_single_sub-iterator.patch > > Time Spent: 3h 20m > Remaining Estimate: 0h > > Remove KeyRangeIterator.current and simplify -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19448) CommitlogArchiver only has granularity to seconds for restore_point_in_time
[ https://issues.apache.org/jira/browse/CASSANDRA-19448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834999#comment-17834999 ] Brandon Williams commented on CASSANDRA-19448: -- Map.of unfortunately does not exist until Java 9, so the j8 builds for 4.0 and 4.1 are failing. > CommitlogArchiver only has granularity to seconds for restore_point_in_time > --- > > Key: CASSANDRA-19448 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19448 > Project: Cassandra > Issue Type: Bug > Components: Local/Commit Log >Reporter: Jeremy Hanna >Assignee: Maxwell Guo >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 10m > Remaining Estimate: 0h > > Commitlog archiver allows users to backup commitlog files for the purpose of > doing point in time restores. The [configuration > file|https://github.com/apache/cassandra/blob/trunk/conf/commitlog_archiving.properties] > gives an example of down to the seconds granularity but then asks what > whether the timestamps are microseconds or milliseconds - defaulting to > microseconds. Because the [CommitLogArchiver uses a second based date > format|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogArchiver.java#L52], > if a user specifies to restore at something at a lower granularity like > milliseconds or microseconds, that means that the it will truncate everything > after the second and restore to that second. So say you specify a > restore_point_in_time like this: > restore_point_in_time=2024:01:18 17:01:01.623392 > it will silently truncate everything after the 01 seconds. So effectively to > the user, it is missing updates between 01 and 01.623392. > This appears to be a bug in the intent. We should allow users to specify > down to the millisecond or even microsecond level. If we allow them to > specify down to microseconds for the restore point in time, then it may > internally need to change from a long. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
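For reference (not the actual change): Map.of arrived in Java 9, so a branch that still produces Java 8 builds needs an older construction. Guava's ImmutableMap, already on Cassandra's classpath, is the usual drop-in; the keys and values below are placeholders, not the ones from the patch.

{code:java}
import java.util.Map;

import com.google.common.collect.ImmutableMap;

public final class Java8MapLiteral
{
    // Java 9+: Map.of("a", 1, "b", 2)
    // Java 8 equivalent via Guava (placeholder keys/values):
    static final Map<String, Integer> EXAMPLE = ImmutableMap.of("a", 1, "b", 2);
}
{code}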
[jira] [Updated] (CASSANDRA-19540) Test Failure: test_sstablelevelreset
[ https://issues.apache.org/jira/browse/CASSANDRA-19540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19540: Fix Version/s: 5.x > Test Failure: test_sstablelevelreset > > > Key: CASSANDRA-19540 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19540 > Project: Cassandra > Issue Type: Bug > Components: CI, Test/dtest/python >Reporter: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > {code:java} > self = > def test_sstablelevelreset(self): > """ > Insert data and call sstablelevelreset on a series of > tables. Confirm level is reset to 0 using its output. > Test a variety of possible errors and ensure response is resonable. > @since 2.1.5 > @jira_ticket CASSANDRA-7614 > """ > cluster = self.cluster > cluster.populate(1).start() > node1 = cluster.nodelist()[0] > > # test by trying to run on nonexistent keyspace > cluster.stop(gently=False) > try: > node1.run_sstablelevelreset("keyspace1", "standard1") > except ToolError as e: > assert re.search("ColumnFamily not found: keyspace1/standard1", > str(e)) > # this should return exit code 1 > assert e.exit_status == 1, "Expected sstablelevelreset to have a > return code of 1 == but instead return code was {}".format( > e.exit_status) > > # now test by generating keyspace but not flushing sstables > cluster.start() > node1.stress(['write', 'n=100', 'no-warmup', '-schema', > 'replication(factor=1)', > '-rate', 'threads=8']) > cluster.stop(gently=False) > > output, error, rc = node1.run_sstablelevelreset("keyspace1", > "standard1") > self._check_stderr_error(error) > assert re.search("Found no sstables, did you give the correct > keyspace", output) > assert rc == 0, str(rc) > > # test by writing small amount of data and flushing (all sstables > should be level 0) > cluster.start() > session = self.patient_cql_connection(node1) > > session.execute( > "ALTER TABLE keyspace1.standard1 with compaction={'class': > 'LeveledCompactionStrategy', 'sstable_size_in_mb':1};") > offline_tools_test.py:64: > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > ../env3.8/src/cassandra-driver/cassandra/cluster.py:2618: in execute > return self.execute_async(query, parameters, trace, custom_payload, > timeout, execution_profile, paging_state, host, execute_as).result() > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > self = request timeout. See Session.execute[_async](timeout)'}, > last_host=127.0.0.1:9042 coordinator_host=None> > def result(self): > """ > Return the final result or raise an Exception if errors were > encountered. If the final result or error has not been set > yet, this method will block until it is set, or the timeout > set for the request expires. > > Timeout is specified in the Session request execution functions. > If the timeout is exceeded, an :exc:`cassandra.OperationTimedOut` > will be raised. > This is a client-side timeout. For more information > about server-side coordinator timeouts, see > :class:`.policies.RetryPolicy`. > > Example usage:: > > >>> future = session.execute_async("SELECT * FROM mycf") > >>> # do other stuff... > > >>> try: > ... rows = future.result() > ... for row in rows: > ... ... # process results > ... except Exception: > ... 
log.exception("Operation failed:") > > """ > self._event.wait() > if self._final_result is not _NOT_SET: > return ResultSet(self, self._final_result) > else: > > raise self._final_exception > E cassandra.OperationTimedOut: errors={'127.0.0.1:9042': 'Client > request timeout. See Session.execute[_async](timeout)'}, > last_host=127.0.0.1:9042 > ../env3.8/src/cassandra-driver/cassandra/cluster.py:4894: OperationTimedOut > test_sstableofflinerelevel > FLAKY > offline_tools_test.TestOfflineTools > test_resumable_decommission > {code} > https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests#failed-test-0 > The failure looks different in Jenkins: > https://ci-cassandra.apache.org/job/Cassandra-trunk/1859/testReport/junit/dtest-latest.offline_tools_test/TestOfflineTools/Tests_
[jira] [Updated] (CASSANDRA-19543) Test Failure: testConcurrentReadWriteWorkload
[ https://issues.apache.org/jira/browse/CASSANDRA-19543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19543: Fix Version/s: 5.x > Test Failure: testConcurrentReadWriteWorkload > - > > Key: CASSANDRA-19543 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19543 > Project: Cassandra > Issue Type: Bug > Components: CI, Test/dtest/java >Reporter: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > Flaky on trunk, seen in Butler and here: > https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58940/tests#failed-test-0 > {code:java} > java.lang.RuntimeException: Interrupting run because of an exception > at > org.apache.cassandra.harry.runner.Runner.mergeAndThrow(Runner.java:395) > at > org.apache.cassandra.harry.runner.Runner$ConcurrentRunner.runInternal(Runner.java:305) > at org.apache.cassandra.harry.runner.Runner.run(Runner.java:77) > at > org.apache.cassandra.fuzz.harry.integration.model.ConcurrentQuiescentCheckerIntegrationTest.testConcurrentReadWriteWorkload(ConcurrentQuiescentCheckerIntegrationTest.java:62) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Caused by: org.apache.cassandra.exceptions.ReadFailureException: Operation > failed - received 0 responses and 1 failures: UNKNOWN from /127.0.0.1:7012 > at > org.apache.cassandra.service.reads.ReadCallback.awaitResults(ReadCallback.java:161) > at > org.apache.cassandra.service.reads.AbstractReadExecutor.awaitResponses(AbstractReadExecutor.java:396) > at > org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:2071) > at > org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1941) > at > org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1818) > at > org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:479) > at > org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:75) > at > org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:32) > at > org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:483) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:540) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:347) > at > org.apache.cassandra.distributed.impl.Coordinator$1.hasNext(Coordinator.java:220) > at > org.apache.cassandra.distributed.api.QueryResults$1.hasNext(QueryResults.java:55) > at > org.apache.cassandra.distributed.api.QueryResults$IteratorQueryResult.hasNext(QueryResults.java:166) > at > org.apache.cassandra.distributed.api.QueryResult$1.hasNext(QueryResult.java:76) > at com.google.common.collect.Iterators.addAll(Iterators.java:365) > at com.google.common.collect.Lists.newArrayList(Lists.java:146) > at com.google.common.collect.Iterators.toArray(Iterators.java:349) > at > org.apache.cassandra.harry.sut.injvm.InJvmSutBase.execute(InJvmSutBase.java:156) > at > org.apache.cassandra.harry.sut.injvm.InJvmSutBase.execute(InJvmSutBase.java:139) > at > org.apache.cassandra.harry.sut.SystemUnderTest.executeIdempotent(SystemUnderTest.java:54) > at > 
org.apache.cassandra.harry.model.SelectHelper.execute(SelectHelper.java:328) > at > org.apache.cassandra.harry.model.SelectHelper.execute(SelectHelper.java:322) > at > org.apache.cassandra.harry.model.QuiescentChecker.lambda$validate$0(QuiescentChecker.java:72) > at > org.apache.cassandra.harry.model.QuiescentChecker.validate(QuiescentChecker.java:78) > at > org.apache.cassandra.harry.model.QuiescentChecker.validate(QuiescentChecker.java:72) > at > org.apache.cassandra.harry.visitors.RandomPartitionValidator.visit(RandomPartitionValidator.java:56) > at > org.apache.cassandra.harry.runner.Runner$ConcurrentRunner.lambda$runInternal$0(Runner.java:296) > at > org.apache.cassandra.harry.runner.Runner.lambda$wrapInterrupt$3(Runner.java:368) > at > org.apache.cassandra.concurrent.InfiniteLoopExecutor.loop(InfiniteLoopExecutor.java:121) > at > io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) > at java.base/java.lang.Thread.run(Thread.java:829) >
[jira] [Updated] (CASSANDRA-19543) Test Failure: testConcurrentReadWriteWorkload
[ https://issues.apache.org/jira/browse/CASSANDRA-19543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19543: Bug Category: Parent values: Correctness(12982)Level 1 values: Test Failure(12990) Complexity: Normal Component/s: CI Test/dtest/java Discovered By: User Report Severity: Normal Status: Open (was: Triage Needed) > Test Failure: testConcurrentReadWriteWorkload > - > > Key: CASSANDRA-19543 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19543 > Project: Cassandra > Issue Type: Bug > Components: CI, Test/dtest/java >Reporter: Ekaterina Dimitrova >Priority: Normal > > Flaky on trunk, seen in Butler and here: > https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58940/tests#failed-test-0 > {code:java} > java.lang.RuntimeException: Interrupting run because of an exception > at > org.apache.cassandra.harry.runner.Runner.mergeAndThrow(Runner.java:395) > at > org.apache.cassandra.harry.runner.Runner$ConcurrentRunner.runInternal(Runner.java:305) > at org.apache.cassandra.harry.runner.Runner.run(Runner.java:77) > at > org.apache.cassandra.fuzz.harry.integration.model.ConcurrentQuiescentCheckerIntegrationTest.testConcurrentReadWriteWorkload(ConcurrentQuiescentCheckerIntegrationTest.java:62) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > Caused by: org.apache.cassandra.exceptions.ReadFailureException: Operation > failed - received 0 responses and 1 failures: UNKNOWN from /127.0.0.1:7012 > at > org.apache.cassandra.service.reads.ReadCallback.awaitResults(ReadCallback.java:161) > at > org.apache.cassandra.service.reads.AbstractReadExecutor.awaitResponses(AbstractReadExecutor.java:396) > at > org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:2071) > at > org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1941) > at > org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1818) > at > org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:479) > at > org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:75) > at > org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:32) > at > org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:483) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:540) > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:347) > at > org.apache.cassandra.distributed.impl.Coordinator$1.hasNext(Coordinator.java:220) > at > org.apache.cassandra.distributed.api.QueryResults$1.hasNext(QueryResults.java:55) > at > org.apache.cassandra.distributed.api.QueryResults$IteratorQueryResult.hasNext(QueryResults.java:166) > at > org.apache.cassandra.distributed.api.QueryResult$1.hasNext(QueryResult.java:76) > at com.google.common.collect.Iterators.addAll(Iterators.java:365) > at com.google.common.collect.Lists.newArrayList(Lists.java:146) > at com.google.common.collect.Iterators.toArray(Iterators.java:349) > at > org.apache.cassandra.harry.sut.injvm.InJvmSutBase.execute(InJvmSutBase.java:156) > at > 
org.apache.cassandra.harry.sut.injvm.InJvmSutBase.execute(InJvmSutBase.java:139) > at > org.apache.cassandra.harry.sut.SystemUnderTest.executeIdempotent(SystemUnderTest.java:54) > at > org.apache.cassandra.harry.model.SelectHelper.execute(SelectHelper.java:328) > at > org.apache.cassandra.harry.model.SelectHelper.execute(SelectHelper.java:322) > at > org.apache.cassandra.harry.model.QuiescentChecker.lambda$validate$0(QuiescentChecker.java:72) > at > org.apache.cassandra.harry.model.QuiescentChecker.validate(QuiescentChecker.java:78) > at > org.apache.cassandra.harry.model.QuiescentChecker.validate(QuiescentChecker.java:72) > at > org.apache.cassandra.harry.visitors.RandomPartitionValidator.visit(RandomPartitionValidator.java:56) > at > org.apache.cassandra.harry.runner.Runner$ConcurrentRunner.lambda$runInternal$0(Runner.java:296) > at > org.apache.cassandra.harry.runner.Runner.lambda$wrapInterrupt$3(Runner.java:368) > at > org.apache.cassandra.concur
[jira] [Updated] (CASSANDRA-19542) Test Failure: test_resumable_decommission
[ https://issues.apache.org/jira/browse/CASSANDRA-19542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19542: Fix Version/s: 5.x > Test Failure: test_resumable_decommission > - > > Key: CASSANDRA-19542 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19542 > Project: Cassandra > Issue Type: Bug > Components: CI, Test/dtest/python >Reporter: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > {code:java} > self = > @since('3.10') > def test_resumable_decommission(self): > """ > @jira_ticket CASSANDRA-12008 > > Test decommission operation is resumable > """ > self.fixture_dtest_setup.ignore_log_patterns = [r'Streaming error > occurred', > r'Error while > decommissioning node', > r'Remote peer > 127.0.0.2 failed stream session', > r'Remote peer > \/?127.0.0.2:7000 failed stream session', > r'peer 127.0.0.2:7000 > is probably down',] > cluster = self.cluster > > cluster.set_configuration_options(values={'stream_throughput_outbound_megabits_per_sec': > 1}) > cluster.populate(3, install_byteman=True).start() > node1, node2, node3 = cluster.nodelist() > > session = self.patient_cql_connection(node2) > # reduce system_distributed RF to 2 so we don't require forceful > decommission > session.execute("ALTER KEYSPACE system_distributed WITH REPLICATION = > {'class':'SimpleStrategy', 'replication_factor':'2'};") > create_ks(session, 'ks', 2) > create_cf(session, 'cf', columns={'c1': 'text', 'c2': 'text'}) > insert_c1c2(session, n=1, consistency=ConsistencyLevel.ALL) > > # Execute first rebuild, should fail > with pytest.raises(ToolError): > if cluster.version() >= '4.0': > script = [mk_bman_path('4.0/decommission_failure_inject.btm')] > else: > script = > [mk_bman_path('pre4.0/decommission_failure_inject.btm')] > node2.byteman_submit(script) > node2.nodetool('decommission') > > # Make sure previous ToolError is due to decommission > node2.watch_log_for('Error while decommissioning node') > > # Decommission again > mark = node2.mark_log() > node2.nodetool('decommission') > > # Check decommision is done and we skipped transfereed ranges > node2.watch_log_for('DECOMMISSIONED', from_mark=mark) > node2.grep_log("Skipping transferred range .* of keyspace ks, > endpoint {}".format(node2.address_for_current_version_slashy()), > filename='debug.log') > > # Check data is correctly forwarded to node1 and node3 > cluster.remove(node2) > node3.stop(gently=False) > session = self.patient_exclusive_cql_connection(node1) > session.execute('USE ks') > for i in range(0, 1): > query_c1c2(session, i, ConsistencyLevel.ONE) > node1.stop(gently=False) > node3.start() > session.shutdown() > mark = node3.mark_log() > node3.watch_log_for('Starting listening for CQL clients', > from_mark=mark) > session = self.patient_exclusive_cql_connection(node3) > session.execute('USE ks') > for i in range(0, 1): > > query_c1c2(session, i, ConsistencyLevel.ONE) > topology_test.py:275: > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > tools/data.py:43: in query_c1c2 > assertions.assert_length_equal(rows, 1) > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > object_with_length = [], expected_length = 1 > def assert_length_equal(object_with_length, expected_length): > """ > Assert an object has a specific length. 
> @param object_with_length The object whose length will be checked > @param expected_length The expected length of the object > > Examples: > assert_length_equal(res, nb_counter) > """ > > assert len(object_with_length) == expected_length, \ > "Expected {} to have length {}, but instead is of length {}"\ > .format(object_with_length, expected_length, > len(object_with_length)) > E AssertionError: Expected [] to have length 1, but instead is of > length 0 > tools/assertions.py:267: AssertionError > {code} > https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9b
[jira] [Updated] (CASSANDRA-19542) Test Failure: test_resumable_decommission
[ https://issues.apache.org/jira/browse/CASSANDRA-19542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19542: Bug Category: Parent values: Correctness(12982)Level 1 values: Test Failure(12990) Complexity: Normal Component/s: CI Test/dtest/python Discovered By: User Report Severity: Normal Status: Open (was: Triage Needed) > Test Failure: test_resumable_decommission > - > > Key: CASSANDRA-19542 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19542 > Project: Cassandra > Issue Type: Bug > Components: CI, Test/dtest/python >Reporter: Ekaterina Dimitrova >Priority: Normal > > {code:java} > self = > @since('3.10') > def test_resumable_decommission(self): > """ > @jira_ticket CASSANDRA-12008 > > Test decommission operation is resumable > """ > self.fixture_dtest_setup.ignore_log_patterns = [r'Streaming error > occurred', > r'Error while > decommissioning node', > r'Remote peer > 127.0.0.2 failed stream session', > r'Remote peer > \/?127.0.0.2:7000 failed stream session', > r'peer 127.0.0.2:7000 > is probably down',] > cluster = self.cluster > > cluster.set_configuration_options(values={'stream_throughput_outbound_megabits_per_sec': > 1}) > cluster.populate(3, install_byteman=True).start() > node1, node2, node3 = cluster.nodelist() > > session = self.patient_cql_connection(node2) > # reduce system_distributed RF to 2 so we don't require forceful > decommission > session.execute("ALTER KEYSPACE system_distributed WITH REPLICATION = > {'class':'SimpleStrategy', 'replication_factor':'2'};") > create_ks(session, 'ks', 2) > create_cf(session, 'cf', columns={'c1': 'text', 'c2': 'text'}) > insert_c1c2(session, n=1, consistency=ConsistencyLevel.ALL) > > # Execute first rebuild, should fail > with pytest.raises(ToolError): > if cluster.version() >= '4.0': > script = [mk_bman_path('4.0/decommission_failure_inject.btm')] > else: > script = > [mk_bman_path('pre4.0/decommission_failure_inject.btm')] > node2.byteman_submit(script) > node2.nodetool('decommission') > > # Make sure previous ToolError is due to decommission > node2.watch_log_for('Error while decommissioning node') > > # Decommission again > mark = node2.mark_log() > node2.nodetool('decommission') > > # Check decommision is done and we skipped transfereed ranges > node2.watch_log_for('DECOMMISSIONED', from_mark=mark) > node2.grep_log("Skipping transferred range .* of keyspace ks, > endpoint {}".format(node2.address_for_current_version_slashy()), > filename='debug.log') > > # Check data is correctly forwarded to node1 and node3 > cluster.remove(node2) > node3.stop(gently=False) > session = self.patient_exclusive_cql_connection(node1) > session.execute('USE ks') > for i in range(0, 1): > query_c1c2(session, i, ConsistencyLevel.ONE) > node1.stop(gently=False) > node3.start() > session.shutdown() > mark = node3.mark_log() > node3.watch_log_for('Starting listening for CQL clients', > from_mark=mark) > session = self.patient_exclusive_cql_connection(node3) > session.execute('USE ks') > for i in range(0, 1): > > query_c1c2(session, i, ConsistencyLevel.ONE) > topology_test.py:275: > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > tools/data.py:43: in query_c1c2 > assertions.assert_length_equal(rows, 1) > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > object_with_length = [], expected_length = 1 > def assert_length_equal(object_with_length, expected_length): > """ > Assert an object has a specific length. 
> @param object_with_length The object whose length will be checked > @param expected_length The expected length of the object > > Examples: > assert_length_equal(res, nb_counter) > """ > > assert len(object_with_length) == expected_length, \ > "Expected {} to have length {}, but instead is of length {}"\ > .format(object_with_length, expected_length, > len(object_with_length)) > E AssertionErr
[jira] [Created] (CASSANDRA-19543) Test Failure: testConcurrentReadWriteWorkload
Ekaterina Dimitrova created CASSANDRA-19543: --- Summary: Test Failure: testConcurrentReadWriteWorkload Key: CASSANDRA-19543 URL: https://issues.apache.org/jira/browse/CASSANDRA-19543 Project: Cassandra Issue Type: Bug Reporter: Ekaterina Dimitrova Flaky on trunk, seen in Butler and here: https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58940/tests#failed-test-0 {code:java} java.lang.RuntimeException: Interrupting run because of an exception at org.apache.cassandra.harry.runner.Runner.mergeAndThrow(Runner.java:395) at org.apache.cassandra.harry.runner.Runner$ConcurrentRunner.runInternal(Runner.java:305) at org.apache.cassandra.harry.runner.Runner.run(Runner.java:77) at org.apache.cassandra.fuzz.harry.integration.model.ConcurrentQuiescentCheckerIntegrationTest.testConcurrentReadWriteWorkload(ConcurrentQuiescentCheckerIntegrationTest.java:62) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) Caused by: org.apache.cassandra.exceptions.ReadFailureException: Operation failed - received 0 responses and 1 failures: UNKNOWN from /127.0.0.1:7012 at org.apache.cassandra.service.reads.ReadCallback.awaitResults(ReadCallback.java:161) at org.apache.cassandra.service.reads.AbstractReadExecutor.awaitResponses(AbstractReadExecutor.java:396) at org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:2071) at org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1941) at org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1818) at org.apache.cassandra.db.SinglePartitionReadCommand.execute(SinglePartitionReadCommand.java:479) at org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:75) at org.apache.cassandra.service.pager.SinglePartitionPager.fetchPage(SinglePartitionPager.java:32) at org.apache.cassandra.cql3.statements.SelectStatement$Pager$NormalPager.fetchPage(SelectStatement.java:483) at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:540) at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:347) at org.apache.cassandra.distributed.impl.Coordinator$1.hasNext(Coordinator.java:220) at org.apache.cassandra.distributed.api.QueryResults$1.hasNext(QueryResults.java:55) at org.apache.cassandra.distributed.api.QueryResults$IteratorQueryResult.hasNext(QueryResults.java:166) at org.apache.cassandra.distributed.api.QueryResult$1.hasNext(QueryResult.java:76) at com.google.common.collect.Iterators.addAll(Iterators.java:365) at com.google.common.collect.Lists.newArrayList(Lists.java:146) at com.google.common.collect.Iterators.toArray(Iterators.java:349) at org.apache.cassandra.harry.sut.injvm.InJvmSutBase.execute(InJvmSutBase.java:156) at org.apache.cassandra.harry.sut.injvm.InJvmSutBase.execute(InJvmSutBase.java:139) at org.apache.cassandra.harry.sut.SystemUnderTest.executeIdempotent(SystemUnderTest.java:54) at org.apache.cassandra.harry.model.SelectHelper.execute(SelectHelper.java:328) at org.apache.cassandra.harry.model.SelectHelper.execute(SelectHelper.java:322) at org.apache.cassandra.harry.model.QuiescentChecker.lambda$validate$0(QuiescentChecker.java:72) at org.apache.cassandra.harry.model.QuiescentChecker.validate(QuiescentChecker.java:78) at 
org.apache.cassandra.harry.model.QuiescentChecker.validate(QuiescentChecker.java:72) at org.apache.cassandra.harry.visitors.RandomPartitionValidator.visit(RandomPartitionValidator.java:56) at org.apache.cassandra.harry.runner.Runner$ConcurrentRunner.lambda$runInternal$0(Runner.java:296) at org.apache.cassandra.harry.runner.Runner.lambda$wrapInterrupt$3(Runner.java:368) at org.apache.cassandra.concurrent.InfiniteLoopExecutor.loop(InfiniteLoopExecutor.java:121) at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.base/java.lang.Thread.run(Thread.java:829) {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-19542) Test Failure: test_resumable_decommission
Ekaterina Dimitrova created CASSANDRA-19542: --- Summary: Test Failure: test_resumable_decommission Key: CASSANDRA-19542 URL: https://issues.apache.org/jira/browse/CASSANDRA-19542 Project: Cassandra Issue Type: Bug Reporter: Ekaterina Dimitrova {code:java} self = @since('3.10') def test_resumable_decommission(self): """ @jira_ticket CASSANDRA-12008 Test decommission operation is resumable """ self.fixture_dtest_setup.ignore_log_patterns = [r'Streaming error occurred', r'Error while decommissioning node', r'Remote peer 127.0.0.2 failed stream session', r'Remote peer \/?127.0.0.2:7000 failed stream session', r'peer 127.0.0.2:7000 is probably down',] cluster = self.cluster cluster.set_configuration_options(values={'stream_throughput_outbound_megabits_per_sec': 1}) cluster.populate(3, install_byteman=True).start() node1, node2, node3 = cluster.nodelist() session = self.patient_cql_connection(node2) # reduce system_distributed RF to 2 so we don't require forceful decommission session.execute("ALTER KEYSPACE system_distributed WITH REPLICATION = {'class':'SimpleStrategy', 'replication_factor':'2'};") create_ks(session, 'ks', 2) create_cf(session, 'cf', columns={'c1': 'text', 'c2': 'text'}) insert_c1c2(session, n=1, consistency=ConsistencyLevel.ALL) # Execute first rebuild, should fail with pytest.raises(ToolError): if cluster.version() >= '4.0': script = [mk_bman_path('4.0/decommission_failure_inject.btm')] else: script = [mk_bman_path('pre4.0/decommission_failure_inject.btm')] node2.byteman_submit(script) node2.nodetool('decommission') # Make sure previous ToolError is due to decommission node2.watch_log_for('Error while decommissioning node') # Decommission again mark = node2.mark_log() node2.nodetool('decommission') # Check decommision is done and we skipped transfereed ranges node2.watch_log_for('DECOMMISSIONED', from_mark=mark) node2.grep_log("Skipping transferred range .* of keyspace ks, endpoint {}".format(node2.address_for_current_version_slashy()), filename='debug.log') # Check data is correctly forwarded to node1 and node3 cluster.remove(node2) node3.stop(gently=False) session = self.patient_exclusive_cql_connection(node1) session.execute('USE ks') for i in range(0, 1): query_c1c2(session, i, ConsistencyLevel.ONE) node1.stop(gently=False) node3.start() session.shutdown() mark = node3.mark_log() node3.watch_log_for('Starting listening for CQL clients', from_mark=mark) session = self.patient_exclusive_cql_connection(node3) session.execute('USE ks') for i in range(0, 1): > query_c1c2(session, i, ConsistencyLevel.ONE) topology_test.py:275: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tools/data.py:43: in query_c1c2 assertions.assert_length_equal(rows, 1) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ object_with_length = [], expected_length = 1 def assert_length_equal(object_with_length, expected_length): """ Assert an object has a specific length. 
@param object_with_length The object whose length will be checked @param expected_length The expected length of the object Examples: assert_length_equal(res, nb_counter) """ > assert len(object_with_length) == expected_length, \ "Expected {} to have length {}, but instead is of length {}"\ .format(object_with_length, expected_length, len(object_with_length)) E AssertionError: Expected [] to have length 1, but instead is of length 0 tools/assertions.py:267: AssertionError {code} https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests#failed-test-0 -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19541) Test Failure: test_sstableofflinerelevel
[ https://issues.apache.org/jira/browse/CASSANDRA-19541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19541: Bug Category: Parent values: Correctness(12982) Complexity: Normal Component/s: CI Test/dtest/python Discovered By: User Report Fix Version/s: 5.x Severity: Normal Status: Open (was: Triage Needed) > Test Failure: test_sstableofflinerelevel > - > > Key: CASSANDRA-19541 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19541 > Project: Cassandra > Issue Type: Bug > Components: CI, Test/dtest/python >Reporter: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > {code:java} > self = > def test_sstableofflinerelevel(self): > """ > Generate sstables of varying levels. > Reset sstables to L0 with sstablelevelreset > Run sstableofflinerelevel and ensure tables are promoted correctly > Also test a variety of bad inputs including nonexistent keyspace and > sstables > @since 2.1.5 > @jira_ticket CASSANDRA-8031 > """ > cluster = self.cluster > > cluster.set_configuration_options(values={'compaction_throughput_mb_per_sec': > 0}) > cluster.populate(1).start() > node1 = cluster.nodelist()[0] > > # NOTE - As of now this does not return when it encounters Exception > and causes test to hang, temporarily commented out > # test by trying to run on nonexistent keyspace > # cluster.stop(gently=False) > # output, error, rc = node1.run_sstableofflinerelevel("keyspace1", > "standard1", output=True) > # assert "java.lang.IllegalArgumentException: Unknown > keyspace/columnFamily keyspace1.standard1" in error > # # this should return exit code 1 > # assert rc, 1 == msg=str(rc) > # cluster.start() > > # now test by generating keyspace but not flushing sstables > > node1.stress(['write', 'n=1', 'no-warmup', > '-schema', 'replication(factor=1)', > '-col', 'n=FIXED(10)', 'SIZE=FIXED(1024)', > '-rate', 'threads=8']) > > cluster.stop(gently=False) > try: > output, error, _ = node1.run_sstableofflinerelevel("keyspace1", > "standard1") > except ToolError as e: > assert re.search("No sstables to relevel for > keyspace1.standard1", e.stdout) > assert e.exit_status == 1, str(e.exit_status) > > # test by flushing (sstable should be level 0) > cluster.start() > session = self.patient_cql_connection(node1) > logger.debug("Altering compaction strategy to LCS") > > session.execute( > "ALTER TABLE keyspace1.standard1 with compaction={'class': > 'LeveledCompactionStrategy', 'sstable_size_in_mb':1, 'enabled':'false'};") > offline_tools_test.py:147: > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > ../env3.8/src/cassandra-driver/cassandra/cluster.py:2618: in execute > return self.execute_async(query, parameters, trace, custom_payload, > timeout, execution_profile, paging_state, host, execute_as).result() > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > self = request timeout. See Session.execute[_async](timeout)'}, > last_host=127.0.0.1:9042 coordinator_host=None> > def result(self): > """ > Return the final result or raise an Exception if errors were > encountered. If the final result or error has not been set > yet, this method will block until it is set, or the timeout > set for the request expires. > > Timeout is specified in the Session request execution functions. > If the timeout is exceeded, an :exc:`cassandra.OperationTimedOut` > will be raised. > This is a client-side timeout. For more information > about server-side coordinator timeouts, see > :class:`.policies.RetryPolicy`. 
> > Example usage:: > > >>> future = session.execute_async("SELECT * FROM mycf") > >>> # do other stuff... > > >>> try: > ... rows = future.result() > ... for row in rows: > ... ... # process results > ... except Exception: > ... log.exception("Operation failed:") > > """ > self._event.wait() > if self._final_result is not _NOT_SET: > return ResultSet(self, self._final_result) > else: > > raise self._final_exception > E cassandra.OperationTimedOut: errors={'127.0.0.1:9042': 'Client
[jira] [Created] (CASSANDRA-19541) Test Failure: test_sstableofflinerelevel
Ekaterina Dimitrova created CASSANDRA-19541: --- Summary: Test Failure: test_sstableofflinerelevel Key: CASSANDRA-19541 URL: https://issues.apache.org/jira/browse/CASSANDRA-19541 Project: Cassandra Issue Type: Bug Reporter: Ekaterina Dimitrova {code:java} self = def test_sstableofflinerelevel(self): """ Generate sstables of varying levels. Reset sstables to L0 with sstablelevelreset Run sstableofflinerelevel and ensure tables are promoted correctly Also test a variety of bad inputs including nonexistent keyspace and sstables @since 2.1.5 @jira_ticket CASSANDRA-8031 """ cluster = self.cluster cluster.set_configuration_options(values={'compaction_throughput_mb_per_sec': 0}) cluster.populate(1).start() node1 = cluster.nodelist()[0] # NOTE - As of now this does not return when it encounters Exception and causes test to hang, temporarily commented out # test by trying to run on nonexistent keyspace # cluster.stop(gently=False) # output, error, rc = node1.run_sstableofflinerelevel("keyspace1", "standard1", output=True) # assert "java.lang.IllegalArgumentException: Unknown keyspace/columnFamily keyspace1.standard1" in error # # this should return exit code 1 # assert rc, 1 == msg=str(rc) # cluster.start() # now test by generating keyspace but not flushing sstables node1.stress(['write', 'n=1', 'no-warmup', '-schema', 'replication(factor=1)', '-col', 'n=FIXED(10)', 'SIZE=FIXED(1024)', '-rate', 'threads=8']) cluster.stop(gently=False) try: output, error, _ = node1.run_sstableofflinerelevel("keyspace1", "standard1") except ToolError as e: assert re.search("No sstables to relevel for keyspace1.standard1", e.stdout) assert e.exit_status == 1, str(e.exit_status) # test by flushing (sstable should be level 0) cluster.start() session = self.patient_cql_connection(node1) logger.debug("Altering compaction strategy to LCS") > session.execute( "ALTER TABLE keyspace1.standard1 with compaction={'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb':1, 'enabled':'false'};") offline_tools_test.py:147: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../env3.8/src/cassandra-driver/cassandra/cluster.py:2618: in execute return self.execute_async(query, parameters, trace, custom_payload, timeout, execution_profile, paging_state, host, execute_as).result() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def result(self): """ Return the final result or raise an Exception if errors were encountered. If the final result or error has not been set yet, this method will block until it is set, or the timeout set for the request expires. Timeout is specified in the Session request execution functions. If the timeout is exceeded, an :exc:`cassandra.OperationTimedOut` will be raised. This is a client-side timeout. For more information about server-side coordinator timeouts, see :class:`.policies.RetryPolicy`. Example usage:: >>> future = session.execute_async("SELECT * FROM mycf") >>> # do other stuff... >>> try: ... rows = future.result() ... for row in rows: ... ... # process results ... except Exception: ... log.exception("Operation failed:") """ self._event.wait() if self._final_result is not _NOT_SET: return ResultSet(self, self._final_result) else: > raise self._final_exception E cassandra.OperationTimedOut: errors={'127.0.0.1:9042': 'Client request timeout. 
See Session.execute[_async](timeout)'}, last_host=127.0.0.1:9042 ../env3.8/src/cassandra-driver/cassandra/cluster.py:4894: OperationTimedOut {code} https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests#failed-test-0 https://ci-cassandra.apache.org/job/Cassandra-trunk/1859/#showFailuresLink -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19540) Test Failure: test_sstablelevelreset
[ https://issues.apache.org/jira/browse/CASSANDRA-19540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19540: Bug Category: Parent values: Correctness(12982)Level 1 values: Test Failure(12990) Complexity: Normal Component/s: CI Discovered By: User Report Severity: Normal Status: Open (was: Triage Needed) > Test Failure: test_sstablelevelreset > > > Key: CASSANDRA-19540 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19540 > Project: Cassandra > Issue Type: Bug > Components: CI, Test/dtest/python >Reporter: Ekaterina Dimitrova >Priority: Normal > > {code:java} > self = > def test_sstablelevelreset(self): > """ > Insert data and call sstablelevelreset on a series of > tables. Confirm level is reset to 0 using its output. > Test a variety of possible errors and ensure response is resonable. > @since 2.1.5 > @jira_ticket CASSANDRA-7614 > """ > cluster = self.cluster > cluster.populate(1).start() > node1 = cluster.nodelist()[0] > > # test by trying to run on nonexistent keyspace > cluster.stop(gently=False) > try: > node1.run_sstablelevelreset("keyspace1", "standard1") > except ToolError as e: > assert re.search("ColumnFamily not found: keyspace1/standard1", > str(e)) > # this should return exit code 1 > assert e.exit_status == 1, "Expected sstablelevelreset to have a > return code of 1 == but instead return code was {}".format( > e.exit_status) > > # now test by generating keyspace but not flushing sstables > cluster.start() > node1.stress(['write', 'n=100', 'no-warmup', '-schema', > 'replication(factor=1)', > '-rate', 'threads=8']) > cluster.stop(gently=False) > > output, error, rc = node1.run_sstablelevelreset("keyspace1", > "standard1") > self._check_stderr_error(error) > assert re.search("Found no sstables, did you give the correct > keyspace", output) > assert rc == 0, str(rc) > > # test by writing small amount of data and flushing (all sstables > should be level 0) > cluster.start() > session = self.patient_cql_connection(node1) > > session.execute( > "ALTER TABLE keyspace1.standard1 with compaction={'class': > 'LeveledCompactionStrategy', 'sstable_size_in_mb':1};") > offline_tools_test.py:64: > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > ../env3.8/src/cassandra-driver/cassandra/cluster.py:2618: in execute > return self.execute_async(query, parameters, trace, custom_payload, > timeout, execution_profile, paging_state, host, execute_as).result() > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > self = request timeout. See Session.execute[_async](timeout)'}, > last_host=127.0.0.1:9042 coordinator_host=None> > def result(self): > """ > Return the final result or raise an Exception if errors were > encountered. If the final result or error has not been set > yet, this method will block until it is set, or the timeout > set for the request expires. > > Timeout is specified in the Session request execution functions. > If the timeout is exceeded, an :exc:`cassandra.OperationTimedOut` > will be raised. > This is a client-side timeout. For more information > about server-side coordinator timeouts, see > :class:`.policies.RetryPolicy`. > > Example usage:: > > >>> future = session.execute_async("SELECT * FROM mycf") > >>> # do other stuff... > > >>> try: > ... rows = future.result() > ... for row in rows: > ... ... # process results > ... except Exception: > ... 
log.exception("Operation failed:") > > """ > self._event.wait() > if self._final_result is not _NOT_SET: > return ResultSet(self, self._final_result) > else: > > raise self._final_exception > E cassandra.OperationTimedOut: errors={'127.0.0.1:9042': 'Client > request timeout. See Session.execute[_async](timeout)'}, > last_host=127.0.0.1:9042 > ../env3.8/src/cassandra-driver/cassandra/cluster.py:4894: OperationTimedOut > test_sstableofflinerelevel > FLAKY > offline_tools_test.TestOfflineTools > test_resumable_decommission > {code} > https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests
[jira] [Created] (CASSANDRA-19540) Test Failure: test_sstablelevelreset
Ekaterina Dimitrova created CASSANDRA-19540: --- Summary: Test Failure: test_sstablelevelreset Key: CASSANDRA-19540 URL: https://issues.apache.org/jira/browse/CASSANDRA-19540 Project: Cassandra Issue Type: Bug Components: Test/dtest/python Reporter: Ekaterina Dimitrova {code:java} self = def test_sstablelevelreset(self): """ Insert data and call sstablelevelreset on a series of tables. Confirm level is reset to 0 using its output. Test a variety of possible errors and ensure response is resonable. @since 2.1.5 @jira_ticket CASSANDRA-7614 """ cluster = self.cluster cluster.populate(1).start() node1 = cluster.nodelist()[0] # test by trying to run on nonexistent keyspace cluster.stop(gently=False) try: node1.run_sstablelevelreset("keyspace1", "standard1") except ToolError as e: assert re.search("ColumnFamily not found: keyspace1/standard1", str(e)) # this should return exit code 1 assert e.exit_status == 1, "Expected sstablelevelreset to have a return code of 1 == but instead return code was {}".format( e.exit_status) # now test by generating keyspace but not flushing sstables cluster.start() node1.stress(['write', 'n=100', 'no-warmup', '-schema', 'replication(factor=1)', '-rate', 'threads=8']) cluster.stop(gently=False) output, error, rc = node1.run_sstablelevelreset("keyspace1", "standard1") self._check_stderr_error(error) assert re.search("Found no sstables, did you give the correct keyspace", output) assert rc == 0, str(rc) # test by writing small amount of data and flushing (all sstables should be level 0) cluster.start() session = self.patient_cql_connection(node1) > session.execute( "ALTER TABLE keyspace1.standard1 with compaction={'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb':1};") offline_tools_test.py:64: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../env3.8/src/cassandra-driver/cassandra/cluster.py:2618: in execute return self.execute_async(query, parameters, trace, custom_payload, timeout, execution_profile, paging_state, host, execute_as).result() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = def result(self): """ Return the final result or raise an Exception if errors were encountered. If the final result or error has not been set yet, this method will block until it is set, or the timeout set for the request expires. Timeout is specified in the Session request execution functions. If the timeout is exceeded, an :exc:`cassandra.OperationTimedOut` will be raised. This is a client-side timeout. For more information about server-side coordinator timeouts, see :class:`.policies.RetryPolicy`. Example usage:: >>> future = session.execute_async("SELECT * FROM mycf") >>> # do other stuff... >>> try: ... rows = future.result() ... for row in rows: ... ... # process results ... except Exception: ... log.exception("Operation failed:") """ self._event.wait() if self._final_result is not _NOT_SET: return ResultSet(self, self._final_result) else: > raise self._final_exception E cassandra.OperationTimedOut: errors={'127.0.0.1:9042': 'Client request timeout. 
See Session.execute[_async](timeout)'}, last_host=127.0.0.1:9042 ../env3.8/src/cassandra-driver/cassandra/cluster.py:4894: OperationTimedOut test_sstableofflinerelevel FLAKY offline_tools_test.TestOfflineTools test_resumable_decommission {code} https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests#failed-test-0 The failure looks different in Jenkins: https://ci-cassandra.apache.org/job/Cassandra-trunk/1859/testReport/junit/dtest-latest.offline_tools_test/TestOfflineTools/Tests___dtest_latest_jdk11_35_64___test_sstablelevelreset/ -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19448) CommitlogArchiver only has granularity to seconds for restore_point_in_time
[ https://issues.apache.org/jira/browse/CASSANDRA-19448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834997#comment-17834997 ] Brandon Williams commented on CASSANDRA-19448: -- I left one small note about a lingering todo that I can remove on commit if you agree. ||Branch||CI|| |[4.0|https://github.com/driftx/cassandra/tree/CASSANDRA-19448-4.0]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1566/workflows/4fae134a-fb02-4b6e-bb97-dd781651b9ef], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1566/workflows/49ac5ce2-0a63-48b5-bc41-da30073dfd73]| |[4.1|https://github.com/driftx/cassandra/tree/CASSANDRA-19448-4.1]|[j8|https://app.circleci.com/pipelines/github/driftx/cassandra/1567/workflows/efaf1236-111d-44cd-bee7-f0cffcc99986], [j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1567/workflows/e9c2a9b3-2516-4b11-b701-5077144afe75]| |[5.0|https://github.com/driftx/cassandra/tree/CASSANDRA-19448-5.0]|[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1569/workflows/601b0002-7536-4343-8499-e30368d4432d], [j17|https://app.circleci.com/pipelines/github/driftx/cassandra/1569/workflows/839908dc-5622-46da-bd04-4152a1305a43]| |[trunk|https://github.com/driftx/cassandra/tree/CASSANDRA-19448-trunk]|[j11|https://app.circleci.com/pipelines/github/driftx/cassandra/1568/workflows/52d7f8aa-b2ee-425f-ad8e-a031ed6f7526], [j17|https://app.circleci.com/pipelines/github/driftx/cassandra/1568/workflows/ed32dc18-3c16-4661-aaf8-570fcd48d608]| > CommitlogArchiver only has granularity to seconds for restore_point_in_time > --- > > Key: CASSANDRA-19448 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19448 > Project: Cassandra > Issue Type: Bug > Components: Local/Commit Log >Reporter: Jeremy Hanna >Assignee: Maxwell Guo >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 10m > Remaining Estimate: 0h > > Commitlog archiver allows users to back up commitlog files for the purpose of > doing point in time restores. The [configuration > file|https://github.com/apache/cassandra/blob/trunk/conf/commitlog_archiving.properties] > gives an example down to seconds granularity but then asks whether the > timestamps are microseconds or milliseconds - defaulting to > microseconds. Because the [CommitLogArchiver uses a second based date > format|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogArchiver.java#L52], > if a user specifies a restore point at a finer granularity like > milliseconds or microseconds, it will truncate everything > after the second and restore to that second. So say you specify a > restore_point_in_time like this: > restore_point_in_time=2024:01:18 17:01:01.623392 > it will silently truncate everything after the 01 seconds. So effectively, to > the user, updates between 01 and 01.623392 are missing. > This appears to be a bug in the intended behavior. We should allow users to specify > down to the millisecond or even microsecond level. If we allow them to > specify down to microseconds for the restore point in time, then the internal representation may > need to change from a long. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
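To make the truncation described in the ticket concrete, here is a minimal standalone sketch (an illustration only, not the patch under review; the class name is invented) that parses the ticket's example {{restore_point_in_time}} with a second-granularity {{yyyy:MM:dd HH:mm:ss}} pattern and shows the fractional part being dropped without any warning.
{code:java}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Hypothetical demo of the second-granularity truncation described in CASSANDRA-19448.
public class RestorePointTruncation
{
    public static void main(String[] args) throws Exception
    {
        // Second-granularity pattern, matching the ticket's example restore_point_in_time.
        SimpleDateFormat format = new SimpleDateFormat("yyyy:MM:dd HH:mm:ss");
        format.setTimeZone(TimeZone.getTimeZone("UTC"));

        // The ticket's example restore point, with microseconds appended.
        Date restorePoint = format.parse("2024:01:18 17:01:01.623392");

        // parse(String) stops once the pattern is satisfied, so ".623392" is
        // ignored silently: the parsed instant is exactly 17:01:01, and updates
        // between 17:01:01 and 17:01:01.623392 would not be replayed.
        System.out.println(restorePoint.getTime());      // epoch millis ending in 000
        System.out.println(format.format(restorePoint)); // 2024:01:18 17:01:01
    }
}
{code}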
[jira] [Updated] (CASSANDRA-19539) Test Failure: test_bootstrap_with_reset_bootstrap_state
[ https://issues.apache.org/jira/browse/CASSANDRA-19539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19539: Component/s: Test/dtest/python > Test Failure: test_bootstrap_with_reset_bootstrap_state > --- > > Key: CASSANDRA-19539 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19539 > Project: Cassandra > Issue Type: Bug > Components: CI, Test/dtest/python >Reporter: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > Failing on trunk: > {code:java} > ccmlib.node.TimeoutError: 03 Apr 2024 19:41:13 [node3] after 180.22/180 > seconds Missing: ['Starting listening for CQL clients'] not found in > system.log: > Head: INFO [main] 2024-04-03 19:37:59,845 YamlConfigura > Tail: ...19 - Got error from /127.0.0.1:7000: TIMEOUT when sending > TCM_COMMIT_REQ, retrying on CandidateIterator{candidates=[/127.0.0.1:7000], > checkLive=true} > self = > @since('2.2') > def test_bootstrap_with_reset_bootstrap_state(self): > """Test bootstrap with resetting bootstrap progress""" > cluster = self.cluster > > cluster.set_environment_variable('CASSANDRA_TOKEN_PREGENERATION_DISABLED', > 'True') > > cluster.set_configuration_options(values={'stream_throughput_outbound_megabits_per_sec': > 1}) > cluster.populate(2).start() > > node1 = cluster.nodes['node1'] > node1.stress(['write', 'n=100K', '-schema', 'replication(factor=2)']) > node1.flush() > > # kill node1 in the middle of streaming to let it fail > t = InterruptBootstrap(node1) > t.start() > > # start bootstrapping node3 and wait for streaming > node3 = new_node(cluster) > try: > node3.start() > except NodeError: > pass # node doesn't start as expected > t.join() > node1.start() > > # restart node3 bootstrap with resetting bootstrap progress > node3.stop(signal_event=signal.SIGKILL) > mark = node3.mark_log() > node3.start(jvm_args=["-Dcassandra.reset_bootstrap_progress=true"]) > # check if we reset bootstrap state > node3.watch_log_for("Resetting bootstrap progress to start fresh", > from_mark=mark) > # wait for node3 ready to query, 180s as the node needs to bootstrap > > node3.wait_for_binary_interface(from_mark=mark, timeout=180) > bootstrap_test.py:513: > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:709: in > wait_for_binary_interface > self.watch_log_for("Starting listening for CQL clients", **kwargs) > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:608: in watch_log_for > TimeoutError.raise_if_passed(start=start, timeout=timeout, node=self.name, > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > start = 1712173092.936025, timeout = 180 > msg = "Missing: ['Starting listening for CQL clients'] not found in > system.log:\n Head: INFO [main] 2024-04-03 19:37:59,845...00: TIMEOUT when > sending TCM_COMMIT_REQ, retrying on > CandidateIterator{candidates=[/127.0.0.1:7000], checkLive=true}\n" > node = 'node3' > @staticmethod > def raise_if_passed(start, timeout, msg, node=None): > if start + timeout < time.time(): > > raise TimeoutError.create(start, timeout, msg, node) > E ccmlib.node.TimeoutError: 03 Apr 2024 19:41:13 [node3] after > 180.22/180 seconds Missing: ['Starting listening for CQL clients'] not found > in system.log: > EHead: INFO [main] 2024-04-03 19:37:59,845 YamlConfigura > ETail: ...19 - Got error from /127.0.0.1:7000: TIMEOUT when > sending TCM_COMMIT_REQ, retrying on > CandidateIterator{candidates=[/127.0.0.1:7000], checkLive=true} > 
../env3.8/lib/python3.8/site-packages/ccmlib/node.py:56: TimeoutError > {code} > https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests#failed-test-0 > https://ci-cassandra.apache.org/job/Cassandra-trunk/1859/#showFailuresLink -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19538) Test Failure: test_assassinate_valid_node
[ https://issues.apache.org/jira/browse/CASSANDRA-19538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19538: Component/s: Test/dtest/python > Test Failure: test_assassinate_valid_node > - > > Key: CASSANDRA-19538 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19538 > Project: Cassandra > Issue Type: Bug > Components: CI, Test/dtest/python >Reporter: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > Failing consistently on trunk: > {code:java} > ccmlib.node.TimeoutError: 03 Apr 2024 19:39:32 [node1] after 120.11/120 > seconds Missing: ['127.0.0.4:7000.* is now UP'] not found in system.log: > Head: INFO [Messaging-EventLoop-3-1] 2024-04-03 19:37:3 > Tail: ... some nodes were not ready > INFO [OptionalTasks:1] 2024-04-03 19:39:30,454 CassandraRoleManager.java:484 > - Setup task failed with error, rescheduling > self = > def test_assassinate_valid_node(self): > """ > @jira_ticket CASSANDRA-16588 > Test that after taking two non-seed nodes down and assassinating > one of them, the other can come back up. > """ > cluster = self.cluster > > cluster.populate(5).start() > node1 = cluster.nodelist()[0] > node3 = cluster.nodelist()[2] > > self.cluster.set_configuration_options({ > 'seed_provider': [{'class_name': > 'org.apache.cassandra.locator.SimpleSeedProvider', >'parameters': [{'seeds': node1.address()}] > }] > }) > > non_seed_nodes = cluster.nodelist()[-2:] > for node in non_seed_nodes: > node.stop() > > assassination_target = non_seed_nodes[0] > logger.debug("Assassinating non-seed node > {}".format(assassination_target.address())) > out, err, _ = node1.nodetool("assassinate > {}".format(assassination_target.address())) > assert_stderr_clean(err) > > logger.debug("Starting non-seed nodes") > for node in non_seed_nodes: > > node.start() > gossip_test.py:78: > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:915: in start > node.watch_log_for_alive(self, from_mark=mark) > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:684: in > watch_log_for_alive > self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, > filename=filename) > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:608: in watch_log_for > TimeoutError.raise_if_passed(start=start, timeout=timeout, node=self.name, > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > start = 1712173052.8186479, timeout = 120 > msg = "Missing: ['127.0.0.4:7000.* is now UP'] not found in system.log:\n > Head: INFO [Messaging-EventLoop-3-1] 2024-04-03 1...[OptionalTasks:1] > 2024-04-03 19:39:30,454 CassandraRoleManager.java:484 - Setup task failed > with error, rescheduling\n" > node = 'node1' > @staticmethod > def raise_if_passed(start, timeout, msg, node=None): > if start + timeout < time.time(): > > raise TimeoutError.create(start, timeout, msg, node) > E ccmlib.node.TimeoutError: 03 Apr 2024 19:39:32 [node1] after > 120.11/120 seconds Missing: ['127.0.0.4:7000.* is now UP'] not found in > system.log: > EHead: INFO [Messaging-EventLoop-3-1] 2024-04-03 19:37:3 > ETail: ... 
some nodes were not ready > E INFO [OptionalTasks:1] 2024-04-03 19:39:30,454 > CassandraRoleManager.java:484 - Setup task failed with error, rescheduling > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:56: TimeoutError > {code} > https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests#failed-test-0 > https://ci-cassandra.apache.org/job/Cassandra-trunk/1859/#showFailuresLink -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19539) Test Failure: test_bootstrap_with_reset_bootstrap_state
[ https://issues.apache.org/jira/browse/CASSANDRA-19539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19539: Fix Version/s: 5.x > Test Failure: test_bootstrap_with_reset_bootstrap_state > --- > > Key: CASSANDRA-19539 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19539 > Project: Cassandra > Issue Type: Bug > Components: CI >Reporter: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > Failing on trunk: > {code:java} > ccmlib.node.TimeoutError: 03 Apr 2024 19:41:13 [node3] after 180.22/180 > seconds Missing: ['Starting listening for CQL clients'] not found in > system.log: > Head: INFO [main] 2024-04-03 19:37:59,845 YamlConfigura > Tail: ...19 - Got error from /127.0.0.1:7000: TIMEOUT when sending > TCM_COMMIT_REQ, retrying on CandidateIterator{candidates=[/127.0.0.1:7000], > checkLive=true} > self = > @since('2.2') > def test_bootstrap_with_reset_bootstrap_state(self): > """Test bootstrap with resetting bootstrap progress""" > cluster = self.cluster > > cluster.set_environment_variable('CASSANDRA_TOKEN_PREGENERATION_DISABLED', > 'True') > > cluster.set_configuration_options(values={'stream_throughput_outbound_megabits_per_sec': > 1}) > cluster.populate(2).start() > > node1 = cluster.nodes['node1'] > node1.stress(['write', 'n=100K', '-schema', 'replication(factor=2)']) > node1.flush() > > # kill node1 in the middle of streaming to let it fail > t = InterruptBootstrap(node1) > t.start() > > # start bootstrapping node3 and wait for streaming > node3 = new_node(cluster) > try: > node3.start() > except NodeError: > pass # node doesn't start as expected > t.join() > node1.start() > > # restart node3 bootstrap with resetting bootstrap progress > node3.stop(signal_event=signal.SIGKILL) > mark = node3.mark_log() > node3.start(jvm_args=["-Dcassandra.reset_bootstrap_progress=true"]) > # check if we reset bootstrap state > node3.watch_log_for("Resetting bootstrap progress to start fresh", > from_mark=mark) > # wait for node3 ready to query, 180s as the node needs to bootstrap > > node3.wait_for_binary_interface(from_mark=mark, timeout=180) > bootstrap_test.py:513: > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:709: in > wait_for_binary_interface > self.watch_log_for("Starting listening for CQL clients", **kwargs) > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:608: in watch_log_for > TimeoutError.raise_if_passed(start=start, timeout=timeout, node=self.name, > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > start = 1712173092.936025, timeout = 180 > msg = "Missing: ['Starting listening for CQL clients'] not found in > system.log:\n Head: INFO [main] 2024-04-03 19:37:59,845...00: TIMEOUT when > sending TCM_COMMIT_REQ, retrying on > CandidateIterator{candidates=[/127.0.0.1:7000], checkLive=true}\n" > node = 'node3' > @staticmethod > def raise_if_passed(start, timeout, msg, node=None): > if start + timeout < time.time(): > > raise TimeoutError.create(start, timeout, msg, node) > E ccmlib.node.TimeoutError: 03 Apr 2024 19:41:13 [node3] after > 180.22/180 seconds Missing: ['Starting listening for CQL clients'] not found > in system.log: > EHead: INFO [main] 2024-04-03 19:37:59,845 YamlConfigura > ETail: ...19 - Got error from /127.0.0.1:7000: TIMEOUT when > sending TCM_COMMIT_REQ, retrying on > CandidateIterator{candidates=[/127.0.0.1:7000], checkLive=true} > 
../env3.8/lib/python3.8/site-packages/ccmlib/node.py:56: TimeoutError > {code} > https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests#failed-test-0 > https://ci-cassandra.apache.org/job/Cassandra-trunk/1859/#showFailuresLink -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19539) Test Failure: test_bootstrap_with_reset_bootstrap_state
[ https://issues.apache.org/jira/browse/CASSANDRA-19539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19539: Bug Category: Parent values: Correctness(12982)Level 1 values: Test Failure(12990) Complexity: Normal Component/s: CI Discovered By: User Report Severity: Normal Status: Open (was: Triage Needed) > Test Failure: test_bootstrap_with_reset_bootstrap_state > --- > > Key: CASSANDRA-19539 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19539 > Project: Cassandra > Issue Type: Bug > Components: CI >Reporter: Ekaterina Dimitrova >Priority: Normal > > Failing on trunk: > {code:java} > ccmlib.node.TimeoutError: 03 Apr 2024 19:41:13 [node3] after 180.22/180 > seconds Missing: ['Starting listening for CQL clients'] not found in > system.log: > Head: INFO [main] 2024-04-03 19:37:59,845 YamlConfigura > Tail: ...19 - Got error from /127.0.0.1:7000: TIMEOUT when sending > TCM_COMMIT_REQ, retrying on CandidateIterator{candidates=[/127.0.0.1:7000], > checkLive=true} > self = > @since('2.2') > def test_bootstrap_with_reset_bootstrap_state(self): > """Test bootstrap with resetting bootstrap progress""" > cluster = self.cluster > > cluster.set_environment_variable('CASSANDRA_TOKEN_PREGENERATION_DISABLED', > 'True') > > cluster.set_configuration_options(values={'stream_throughput_outbound_megabits_per_sec': > 1}) > cluster.populate(2).start() > > node1 = cluster.nodes['node1'] > node1.stress(['write', 'n=100K', '-schema', 'replication(factor=2)']) > node1.flush() > > # kill node1 in the middle of streaming to let it fail > t = InterruptBootstrap(node1) > t.start() > > # start bootstrapping node3 and wait for streaming > node3 = new_node(cluster) > try: > node3.start() > except NodeError: > pass # node doesn't start as expected > t.join() > node1.start() > > # restart node3 bootstrap with resetting bootstrap progress > node3.stop(signal_event=signal.SIGKILL) > mark = node3.mark_log() > node3.start(jvm_args=["-Dcassandra.reset_bootstrap_progress=true"]) > # check if we reset bootstrap state > node3.watch_log_for("Resetting bootstrap progress to start fresh", > from_mark=mark) > # wait for node3 ready to query, 180s as the node needs to bootstrap > > node3.wait_for_binary_interface(from_mark=mark, timeout=180) > bootstrap_test.py:513: > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:709: in > wait_for_binary_interface > self.watch_log_for("Starting listening for CQL clients", **kwargs) > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:608: in watch_log_for > TimeoutError.raise_if_passed(start=start, timeout=timeout, node=self.name, > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > start = 1712173092.936025, timeout = 180 > msg = "Missing: ['Starting listening for CQL clients'] not found in > system.log:\n Head: INFO [main] 2024-04-03 19:37:59,845...00: TIMEOUT when > sending TCM_COMMIT_REQ, retrying on > CandidateIterator{candidates=[/127.0.0.1:7000], checkLive=true}\n" > node = 'node3' > @staticmethod > def raise_if_passed(start, timeout, msg, node=None): > if start + timeout < time.time(): > > raise TimeoutError.create(start, timeout, msg, node) > E ccmlib.node.TimeoutError: 03 Apr 2024 19:41:13 [node3] after > 180.22/180 seconds Missing: ['Starting listening for CQL clients'] not found > in system.log: > EHead: INFO [main] 2024-04-03 19:37:59,845 YamlConfigura > ETail: ...19 - Got error from /127.0.0.1:7000: 
TIMEOUT when > sending TCM_COMMIT_REQ, retrying on > CandidateIterator{candidates=[/127.0.0.1:7000], checkLive=true} > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:56: TimeoutError > {code} > https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests#failed-test-0 > https://ci-cassandra.apache.org/job/Cassandra-trunk/1859/#showFailuresLink -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-19539) Test Failure: test_bootstrap_with_reset_bootstrap_state
Ekaterina Dimitrova created CASSANDRA-19539: --- Summary: Test Failure: test_bootstrap_with_reset_bootstrap_state Key: CASSANDRA-19539 URL: https://issues.apache.org/jira/browse/CASSANDRA-19539 Project: Cassandra Issue Type: Bug Reporter: Ekaterina Dimitrova Failing on trunk: {code:java} ccmlib.node.TimeoutError: 03 Apr 2024 19:41:13 [node3] after 180.22/180 seconds Missing: ['Starting listening for CQL clients'] not found in system.log: Head: INFO [main] 2024-04-03 19:37:59,845 YamlConfigura Tail: ...19 - Got error from /127.0.0.1:7000: TIMEOUT when sending TCM_COMMIT_REQ, retrying on CandidateIterator{candidates=[/127.0.0.1:7000], checkLive=true} self = @since('2.2') def test_bootstrap_with_reset_bootstrap_state(self): """Test bootstrap with resetting bootstrap progress""" cluster = self.cluster cluster.set_environment_variable('CASSANDRA_TOKEN_PREGENERATION_DISABLED', 'True') cluster.set_configuration_options(values={'stream_throughput_outbound_megabits_per_sec': 1}) cluster.populate(2).start() node1 = cluster.nodes['node1'] node1.stress(['write', 'n=100K', '-schema', 'replication(factor=2)']) node1.flush() # kill node1 in the middle of streaming to let it fail t = InterruptBootstrap(node1) t.start() # start bootstrapping node3 and wait for streaming node3 = new_node(cluster) try: node3.start() except NodeError: pass # node doesn't start as expected t.join() node1.start() # restart node3 bootstrap with resetting bootstrap progress node3.stop(signal_event=signal.SIGKILL) mark = node3.mark_log() node3.start(jvm_args=["-Dcassandra.reset_bootstrap_progress=true"]) # check if we reset bootstrap state node3.watch_log_for("Resetting bootstrap progress to start fresh", from_mark=mark) # wait for node3 ready to query, 180s as the node needs to bootstrap > node3.wait_for_binary_interface(from_mark=mark, timeout=180) bootstrap_test.py:513: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:709: in wait_for_binary_interface self.watch_log_for("Starting listening for CQL clients", **kwargs) ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:608: in watch_log_for TimeoutError.raise_if_passed(start=start, timeout=timeout, node=self.name, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ start = 1712173092.936025, timeout = 180 msg = "Missing: ['Starting listening for CQL clients'] not found in system.log:\n Head: INFO [main] 2024-04-03 19:37:59,845...00: TIMEOUT when sending TCM_COMMIT_REQ, retrying on CandidateIterator{candidates=[/127.0.0.1:7000], checkLive=true}\n" node = 'node3' @staticmethod def raise_if_passed(start, timeout, msg, node=None): if start + timeout < time.time(): > raise TimeoutError.create(start, timeout, msg, node) E ccmlib.node.TimeoutError: 03 Apr 2024 19:41:13 [node3] after 180.22/180 seconds Missing: ['Starting listening for CQL clients'] not found in system.log: EHead: INFO [main] 2024-04-03 19:37:59,845 YamlConfigura ETail: ...19 - Got error from /127.0.0.1:7000: TIMEOUT when sending TCM_COMMIT_REQ, retrying on CandidateIterator{candidates=[/127.0.0.1:7000], checkLive=true} ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:56: TimeoutError {code} https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests#failed-test-0 https://ci-cassandra.apache.org/job/Cassandra-trunk/1859/#showFailuresLink -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: 
commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19538) Test Failure: test_assassinate_valid_node
[ https://issues.apache.org/jira/browse/CASSANDRA-19538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19538: Fix Version/s: 5.x > Test Failure: test_assassinate_valid_node > - > > Key: CASSANDRA-19538 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19538 > Project: Cassandra > Issue Type: Bug > Components: CI >Reporter: Ekaterina Dimitrova >Priority: Normal > Fix For: 5.x > > > Failing consistently on trunk: > {code:java} > ccmlib.node.TimeoutError: 03 Apr 2024 19:39:32 [node1] after 120.11/120 > seconds Missing: ['127.0.0.4:7000.* is now UP'] not found in system.log: > Head: INFO [Messaging-EventLoop-3-1] 2024-04-03 19:37:3 > Tail: ... some nodes were not ready > INFO [OptionalTasks:1] 2024-04-03 19:39:30,454 CassandraRoleManager.java:484 > - Setup task failed with error, rescheduling > self = > def test_assassinate_valid_node(self): > """ > @jira_ticket CASSANDRA-16588 > Test that after taking two non-seed nodes down and assassinating > one of them, the other can come back up. > """ > cluster = self.cluster > > cluster.populate(5).start() > node1 = cluster.nodelist()[0] > node3 = cluster.nodelist()[2] > > self.cluster.set_configuration_options({ > 'seed_provider': [{'class_name': > 'org.apache.cassandra.locator.SimpleSeedProvider', >'parameters': [{'seeds': node1.address()}] > }] > }) > > non_seed_nodes = cluster.nodelist()[-2:] > for node in non_seed_nodes: > node.stop() > > assassination_target = non_seed_nodes[0] > logger.debug("Assassinating non-seed node > {}".format(assassination_target.address())) > out, err, _ = node1.nodetool("assassinate > {}".format(assassination_target.address())) > assert_stderr_clean(err) > > logger.debug("Starting non-seed nodes") > for node in non_seed_nodes: > > node.start() > gossip_test.py:78: > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:915: in start > node.watch_log_for_alive(self, from_mark=mark) > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:684: in > watch_log_for_alive > self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, > filename=filename) > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:608: in watch_log_for > TimeoutError.raise_if_passed(start=start, timeout=timeout, node=self.name, > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > start = 1712173052.8186479, timeout = 120 > msg = "Missing: ['127.0.0.4:7000.* is now UP'] not found in system.log:\n > Head: INFO [Messaging-EventLoop-3-1] 2024-04-03 1...[OptionalTasks:1] > 2024-04-03 19:39:30,454 CassandraRoleManager.java:484 - Setup task failed > with error, rescheduling\n" > node = 'node1' > @staticmethod > def raise_if_passed(start, timeout, msg, node=None): > if start + timeout < time.time(): > > raise TimeoutError.create(start, timeout, msg, node) > E ccmlib.node.TimeoutError: 03 Apr 2024 19:39:32 [node1] after > 120.11/120 seconds Missing: ['127.0.0.4:7000.* is now UP'] not found in > system.log: > EHead: INFO [Messaging-EventLoop-3-1] 2024-04-03 19:37:3 > ETail: ... 
some nodes were not ready > E INFO [OptionalTasks:1] 2024-04-03 19:39:30,454 > CassandraRoleManager.java:484 - Setup task failed with error, rescheduling > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:56: TimeoutError > {code} > https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests#failed-test-0 > https://ci-cassandra.apache.org/job/Cassandra-trunk/1859/#showFailuresLink -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19538) Test Failure: test_assassinate_valid_node
[ https://issues.apache.org/jira/browse/CASSANDRA-19538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekaterina Dimitrova updated CASSANDRA-19538: Bug Category: Parent values: Correctness(12982)Level 1 values: Test Failure(12990) Complexity: Normal Component/s: CI Discovered By: User Report Severity: Normal Status: Open (was: Triage Needed) > Test Failure: test_assassinate_valid_node > - > > Key: CASSANDRA-19538 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19538 > Project: Cassandra > Issue Type: Bug > Components: CI >Reporter: Ekaterina Dimitrova >Priority: Normal > > Failing consistently on trunk: > {code:java} > ccmlib.node.TimeoutError: 03 Apr 2024 19:39:32 [node1] after 120.11/120 > seconds Missing: ['127.0.0.4:7000.* is now UP'] not found in system.log: > Head: INFO [Messaging-EventLoop-3-1] 2024-04-03 19:37:3 > Tail: ... some nodes were not ready > INFO [OptionalTasks:1] 2024-04-03 19:39:30,454 CassandraRoleManager.java:484 > - Setup task failed with error, rescheduling > self = > def test_assassinate_valid_node(self): > """ > @jira_ticket CASSANDRA-16588 > Test that after taking two non-seed nodes down and assassinating > one of them, the other can come back up. > """ > cluster = self.cluster > > cluster.populate(5).start() > node1 = cluster.nodelist()[0] > node3 = cluster.nodelist()[2] > > self.cluster.set_configuration_options({ > 'seed_provider': [{'class_name': > 'org.apache.cassandra.locator.SimpleSeedProvider', >'parameters': [{'seeds': node1.address()}] > }] > }) > > non_seed_nodes = cluster.nodelist()[-2:] > for node in non_seed_nodes: > node.stop() > > assassination_target = non_seed_nodes[0] > logger.debug("Assassinating non-seed node > {}".format(assassination_target.address())) > out, err, _ = node1.nodetool("assassinate > {}".format(assassination_target.address())) > assert_stderr_clean(err) > > logger.debug("Starting non-seed nodes") > for node in non_seed_nodes: > > node.start() > gossip_test.py:78: > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:915: in start > node.watch_log_for_alive(self, from_mark=mark) > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:684: in > watch_log_for_alive > self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, > filename=filename) > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:608: in watch_log_for > TimeoutError.raise_if_passed(start=start, timeout=timeout, node=self.name, > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > _ > start = 1712173052.8186479, timeout = 120 > msg = "Missing: ['127.0.0.4:7000.* is now UP'] not found in system.log:\n > Head: INFO [Messaging-EventLoop-3-1] 2024-04-03 1...[OptionalTasks:1] > 2024-04-03 19:39:30,454 CassandraRoleManager.java:484 - Setup task failed > with error, rescheduling\n" > node = 'node1' > @staticmethod > def raise_if_passed(start, timeout, msg, node=None): > if start + timeout < time.time(): > > raise TimeoutError.create(start, timeout, msg, node) > E ccmlib.node.TimeoutError: 03 Apr 2024 19:39:32 [node1] after > 120.11/120 seconds Missing: ['127.0.0.4:7000.* is now UP'] not found in > system.log: > EHead: INFO [Messaging-EventLoop-3-1] 2024-04-03 19:37:3 > ETail: ... 
some nodes were not ready > E INFO [OptionalTasks:1] 2024-04-03 19:39:30,454 > CassandraRoleManager.java:484 - Setup task failed with error, rescheduling > ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:56: TimeoutError > {code} > https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests#failed-test-0 > https://ci-cassandra.apache.org/job/Cassandra-trunk/1859/#showFailuresLink -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-19538) Test Failure: test_assassinate_valid_node
Ekaterina Dimitrova created CASSANDRA-19538: --- Summary: Test Failure: test_assassinate_valid_node Key: CASSANDRA-19538 URL: https://issues.apache.org/jira/browse/CASSANDRA-19538 Project: Cassandra Issue Type: Bug Reporter: Ekaterina Dimitrova Failing consistently on trunk: {code:java} ccmlib.node.TimeoutError: 03 Apr 2024 19:39:32 [node1] after 120.11/120 seconds Missing: ['127.0.0.4:7000.* is now UP'] not found in system.log: Head: INFO [Messaging-EventLoop-3-1] 2024-04-03 19:37:3 Tail: ... some nodes were not ready INFO [OptionalTasks:1] 2024-04-03 19:39:30,454 CassandraRoleManager.java:484 - Setup task failed with error, rescheduling self = def test_assassinate_valid_node(self): """ @jira_ticket CASSANDRA-16588 Test that after taking two non-seed nodes down and assassinating one of them, the other can come back up. """ cluster = self.cluster cluster.populate(5).start() node1 = cluster.nodelist()[0] node3 = cluster.nodelist()[2] self.cluster.set_configuration_options({ 'seed_provider': [{'class_name': 'org.apache.cassandra.locator.SimpleSeedProvider', 'parameters': [{'seeds': node1.address()}] }] }) non_seed_nodes = cluster.nodelist()[-2:] for node in non_seed_nodes: node.stop() assassination_target = non_seed_nodes[0] logger.debug("Assassinating non-seed node {}".format(assassination_target.address())) out, err, _ = node1.nodetool("assassinate {}".format(assassination_target.address())) assert_stderr_clean(err) logger.debug("Starting non-seed nodes") for node in non_seed_nodes: > node.start() gossip_test.py:78: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:915: in start node.watch_log_for_alive(self, from_mark=mark) ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:684: in watch_log_for_alive self.watch_log_for(tofind, from_mark=from_mark, timeout=timeout, filename=filename) ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:608: in watch_log_for TimeoutError.raise_if_passed(start=start, timeout=timeout, node=self.name, _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ start = 1712173052.8186479, timeout = 120 msg = "Missing: ['127.0.0.4:7000.* is now UP'] not found in system.log:\n Head: INFO [Messaging-EventLoop-3-1] 2024-04-03 1...[OptionalTasks:1] 2024-04-03 19:39:30,454 CassandraRoleManager.java:484 - Setup task failed with error, rescheduling\n" node = 'node1' @staticmethod def raise_if_passed(start, timeout, msg, node=None): if start + timeout < time.time(): > raise TimeoutError.create(start, timeout, msg, node) E ccmlib.node.TimeoutError: 03 Apr 2024 19:39:32 [node1] after 120.11/120 seconds Missing: ['127.0.0.4:7000.* is now UP'] not found in system.log: EHead: INFO [Messaging-EventLoop-3-1] 2024-04-03 19:37:3 ETail: ... some nodes were not ready E INFO [OptionalTasks:1] 2024-04-03 19:39:30,454 CassandraRoleManager.java:484 - Setup task failed with error, rescheduling ../env3.8/lib/python3.8/site-packages/ccmlib/node.py:56: TimeoutError {code} https://app.circleci.com/pipelines/github/ekaterinadimitrova2/cassandra/2680/workflows/8b1c0d0a-7458-4b43-9bba-ac96b9bfe64f/jobs/58929/tests#failed-test-0 https://ci-cassandra.apache.org/job/Cassandra-trunk/1859/#showFailuresLink -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19128) The result of applying a metadata snapshot via ForceSnapshot should return the correct set of modified keys
[ https://issues.apache.org/jira/browse/CASSANDRA-19128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-19128: Attachment: ci_summary-1.html > The result of applying a metadata snapshot via ForceSnapshot should return > the correct set of modified keys > --- > > Key: CASSANDRA-19128 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19128 > Project: Cassandra > Issue Type: Improvement > Components: Cluster/Membership >Reporter: Marcus Eriksson >Assignee: Alex Petrov >Priority: High > Fix For: 5.1-alpha1 > > Attachments: ci_summary-1.html, ci_summary.html > > Time Spent: 40m > Remaining Estimate: 0h > > It should use the same logic as Transformer::build to compare the updated CM > with the previous to derive the modified keys -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
Re: [PR] Updating README JIRA link to ASF JIRA [cassandra-java-driver]
absurdfarce commented on PR #1921: URL: https://github.com/apache/cassandra-java-driver/pull/1921#issuecomment-2043140438 I apparently did something very unpleasant to my local feature branch, the upshot being that it was easier to re-apply the changes and start over. The result was #1926, which has now been merged. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
Re: [PR] Updating README JIRA link to ASF JIRA [cassandra-java-driver]
absurdfarce closed pull request #1921: Updating README JIRA link to ASF JIRA URL: https://github.com/apache/cassandra-java-driver/pull/1921 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
Re: [PR] Update link to JIRA to ASF instance. [cassandra-java-driver]
absurdfarce merged PR #1926: URL: https://github.com/apache/cassandra-java-driver/pull/1926 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[PR] Update link to JIRA to ASF instance. [cassandra-java-driver]
absurdfarce opened a new pull request, #1926: URL: https://github.com/apache/cassandra-java-driver/pull/1926 Manually squashed version of PR #1921. Somehow I made a mess of my local branch; it was easier to just start over from scratch. Approvals can be found on the original PR. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
(cassandra-java-driver) branch 4.x updated: Update link to JIRA to ASF instance. Also include information about populating the component field.
This is an automated email from the ASF dual-hosted git repository. absurdfarce pushed a commit to branch 4.x in repository https://gitbox.apache.org/repos/asf/cassandra-java-driver.git The following commit(s) were added to refs/heads/4.x by this push: new 9c41aab6f Update link to JIRA to ASF instance. Also include information about populating the component field. 9c41aab6f is described below commit 9c41aab6fd0a55d977a9844610d230b1e69868d7 Author: absurdfarce AuthorDate: Mon Apr 8 11:00:46 2024 -0500 Update link to JIRA to ASF instance. Also include information about populating the component field. Patch by Bret McGuire; reviewed by Bret McGuire, Alexandre Dutra --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 2e8fe862f..c53c8f2db 100644 --- a/README.md +++ b/README.md @@ -74,13 +74,13 @@ See the [Cassandra error handling done right blog](https://www.datastax.com/blog * [Manual](manual/) * [API docs] -* Bug tracking: [JIRA] +* Bug tracking: [JIRA]. Make sure to select the "Client/java-driver" component when filing new tickets! * [Mailing list] * [Changelog] * [FAQ] [API docs]: https://docs.datastax.com/en/drivers/java/4.17 -[JIRA]: https://datastax-oss.atlassian.net/browse/JAVA +[JIRA]: https://issues.apache.org/jira/issues/?jql=project%20%3D%20CASSANDRA%20AND%20component%20%3D%20%22Client%2Fjava-driver%22%20ORDER%20BY%20key%20DESC [Mailing list]: https://groups.google.com/a/lists.datastax.com/forum/#!forum/java-driver-user [Changelog]: changelog/ [FAQ]: faq/ - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Comment Edited] (CASSANDRA-19448) CommitlogArchiver only has granularity to seconds for restore_point_in_time
[ https://issues.apache.org/jira/browse/CASSANDRA-19448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834975#comment-17834975 ] Maxwell Guo edited comment on CASSANDRA-19448 at 4/8/24 3:57 PM: - ||Heading 1||Heading 2|| |trunk |[trunk|https://github.com/apache/cassandra/pull/3215]| |5.0|[5.0|https://github.com/apache/cassandra/pull/3236]| |4.1|[4.1|https://github.com/apache/cassandra/pull/3237]| |4.0|[4.0|https://github.com/apache/cassandra/pull/3238]| cc [~brandon.williams] was (Author: maxwellguo): ||Heading 1||Heading 2|| |trunk |[trunk|https://github.com/apache/cassandra/pull/3215]| |5.0|[5.0|https://github.com/apache/cassandra/pull/3236]| |4.1|[4.1|https://github.com/apache/cassandra/pull/3237]| ||4.0|[4.0|https://github.com/apache/cassandra/pull/3238]| > CommitlogArchiver only has granularity to seconds for restore_point_in_time > --- > > Key: CASSANDRA-19448 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19448 > Project: Cassandra > Issue Type: Bug > Components: Local/Commit Log >Reporter: Jeremy Hanna >Assignee: Maxwell Guo >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 10m > Remaining Estimate: 0h > > Commitlog archiver allows users to backup commitlog files for the purpose of > doing point in time restores. The [configuration > file|https://github.com/apache/cassandra/blob/trunk/conf/commitlog_archiving.properties] > gives an example of down to the seconds granularity but then asks what > whether the timestamps are microseconds or milliseconds - defaulting to > microseconds. Because the [CommitLogArchiver uses a second based date > format|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogArchiver.java#L52], > if a user specifies to restore at something at a lower granularity like > milliseconds or microseconds, that means that the it will truncate everything > after the second and restore to that second. So say you specify a > restore_point_in_time like this: > restore_point_in_time=2024:01:18 17:01:01.623392 > it will silently truncate everything after the 01 seconds. So effectively to > the user, it is missing updates between 01 and 01.623392. > This appears to be a bug in the intent. We should allow users to specify > down to the millisecond or even microsecond level. If we allow them to > specify down to microseconds for the restore point in time, then it may > internally need to change from a long. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[PR] Manually squashed version of PR 1921 [cassandra-java-driver]
absurdfarce opened a new pull request, #1925: URL: https://github.com/apache/cassandra-java-driver/pull/1925 (no comment) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
Re: [PR] Manually squashed version of PR 1921 [cassandra-java-driver]
absurdfarce closed pull request #1925: Manually squashed version of PR 1921 URL: https://github.com/apache/cassandra-java-driver/pull/1925 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19448) CommitlogArchiver only has granularity to seconds for restore_point_in_time
[ https://issues.apache.org/jira/browse/CASSANDRA-19448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834975#comment-17834975 ] Maxwell Guo commented on CASSANDRA-19448: - ||Heading 1||Heading 2|| |trunk |[trunk|https://github.com/apache/cassandra/pull/3215]| |5.0|[5.0|https://github.com/apache/cassandra/pull/3236]| |4.1|[4.1|https://github.com/apache/cassandra/pull/3237]| ||4.0|[4.0|https://github.com/apache/cassandra/pull/3238]| > CommitlogArchiver only has granularity to seconds for restore_point_in_time > --- > > Key: CASSANDRA-19448 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19448 > Project: Cassandra > Issue Type: Bug > Components: Local/Commit Log >Reporter: Jeremy Hanna >Assignee: Maxwell Guo >Priority: Normal > Fix For: 4.0.x, 4.1.x, 5.0.x, 5.x > > Time Spent: 10m > Remaining Estimate: 0h > > Commitlog archiver allows users to backup commitlog files for the purpose of > doing point in time restores. The [configuration > file|https://github.com/apache/cassandra/blob/trunk/conf/commitlog_archiving.properties] > gives an example of down to the seconds granularity but then asks what > whether the timestamps are microseconds or milliseconds - defaulting to > microseconds. Because the [CommitLogArchiver uses a second based date > format|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/commitlog/CommitLogArchiver.java#L52], > if a user specifies to restore at something at a lower granularity like > milliseconds or microseconds, that means that the it will truncate everything > after the second and restore to that second. So say you specify a > restore_point_in_time like this: > restore_point_in_time=2024:01:18 17:01:01.623392 > it will silently truncate everything after the 01 seconds. So effectively to > the user, it is missing updates between 01 and 01.623392. > This appears to be a bug in the intent. We should allow users to specify > down to the millisecond or even microsecond level. If we allow them to > specify down to microseconds for the restore point in time, then it may > internally need to change from a long. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
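To make the truncation described in the ticket body above concrete, here is a minimal, self-contained Java sketch. It assumes only what the description states: CommitLogArchiver parses restore_point_in_time with a second-granularity date format. The "yyyy:MM:dd HH:mm:ss" pattern below mirrors that, but the class and parsing shown here are an illustration of the effect, not the project's actual code path.

{code:java}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// Illustration of the silent sub-second truncation described in CASSANDRA-19448.
// SimpleDateFormat.parse(String) stops once the pattern is satisfied, so any
// fractional seconds in restore_point_in_time are ignored without a warning.
public class RestorePointTruncationDemo
{
    public static void main(String[] args) throws ParseException
    {
        SimpleDateFormat secondGranularity = new SimpleDateFormat("yyyy:MM:dd HH:mm:ss");

        String requested = "2024:01:18 17:01:01.623392";
        Date effective = secondGranularity.parse(requested);

        // Prints "2024:01:18 17:01:01": mutations timestamped between 17:01:01
        // and 17:01:01.623392 fall outside the restored range, as reported.
        System.out.println(secondGranularity.format(effective));
    }
}
{code}

Any fix that honors millisecond or microsecond precision would need a finer-grained parse and, as the description notes, possibly a different internal representation for the restore point.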
Re: [PR] Update README.md [cassandra-java-driver]
absurdfarce commented on PR #1865: URL: https://github.com/apache/cassandra-java-driver/pull/1865#issuecomment-2043091026 Looks good, thanks @emeliawilkinson24 ! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
(cassandra-java-driver) branch 4.x updated: Update README.md
This is an automated email from the ASF dual-hosted git repository. absurdfarce pushed a commit to branch 4.x in repository https://gitbox.apache.org/repos/asf/cassandra-java-driver.git The following commit(s) were added to refs/heads/4.x by this push: new 4aa5abe70 Update README.md 4aa5abe70 is described below commit 4aa5abe701e529fd9be0c9b55214dad6f85f0649 Author: Emelia <105240296+emeliawilkinso...@users.noreply.github.com> AuthorDate: Fri Nov 17 15:22:47 2023 -0500 Update README.md Typo carried over from old docs, needed closing parenthesis. --- manual/cloud/README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/manual/cloud/README.md b/manual/cloud/README.md index 48197c494..9116b03da 100644 --- a/manual/cloud/README.md +++ b/manual/cloud/README.md @@ -28,10 +28,10 @@ driver is configured in an application and that you will need to obtain a *secur 1. [Download][Download Maven] and [install][Install Maven] Maven. 2. Create an Astra database on [AWS/Azure/GCP][Create an Astra database - AWS/Azure/GCP]; alternatively, have a team member provide access to their - Astra database (instructions for [AWS/Azure/GCP][Access an Astra database - AWS/Azure/GCP] to + Astra database (see instructions for [AWS/Azure/GCP][Access an Astra database - AWS/Azure/GCP]) to obtain database connection details. -3. Download the secure connect bundle (instructions for - [AWS/Azure/GCP][Download the secure connect bundle - AWS/Azure/GCP], that contains connection +3. Download the secure connect bundle (see instructions for + [AWS/Azure/GCP][Download the secure connect bundle - AWS/Azure/GCP]) that contains connection information such as contact points and certificates. ### Procedure - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
Re: [PR] Update README.md [cassandra-java-driver]
absurdfarce merged PR #1865: URL: https://github.com/apache/cassandra-java-driver/pull/1865 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
Re: [PR] Update README.md [cassandra-java-driver]
absurdfarce commented on PR #1865: URL: https://github.com/apache/cassandra-java-driver/pull/1865#issuecomment-2043087858 @michaelsembwever we've been fixing a few things incrementally around the docs; my general feeling is that's probably fine for now. It's not at all clear to me what the future of the Astra code is in the driver itself; the few times the topic has come up other devs working on the driver have been unexpectedly enthusiastic about keeping it in. Regardless, it's not getting yanked immediately, so it seems eminently reasonable to me to have docs for our current state of affairs (which includes support for the SCB) which may have to change at some point in the future. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Assigned] (CASSANDRA-19221) CMS: Nodes can restart with new ipaddress already defined in the cluster
[ https://issues.apache.org/jira/browse/CASSANDRA-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov reassigned CASSANDRA-19221: --- Assignee: Alex Petrov > CMS: Nodes can restart with new ipaddress already defined in the cluster > > > Key: CASSANDRA-19221 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19221 > Project: Cassandra > Issue Type: Bug > Components: Transactional Cluster Metadata >Reporter: Paul Chandler >Assignee: Alex Petrov >Priority: Normal > Fix For: 5.1-alpha1 > > > I am simulating running a cluster in Kubernetes and testing what happens when > several pods go down and ip addresses are swapped between nodes. In 4.0 this > is blocked and the node cannot be restarted. > To simulate this I create a 3 node cluster on a local machine using 3 > loopback addresses > {code} > 127.0.0.1 > 127.0.0.2 > 127.0.0.3 > {code} > The nodes are created correctly and the first node is assigned as a CMS node > as shown: > {code} > bin/nodetool -p 7199 describecms > {code} > Cluster Metadata Service: > {code} > Members: /127.0.0.1:7000 > Is Member: true > Service State: LOCAL > {code} > At this point I bring down the nodes 127.0.0.2 and 127.0.0.3 and swap the ip > addresses for the rpc_address and listen_address > > The nodes come back as normal, but the nodeid has now been swapped against > the ip address: > Before: > {code} > Datacenter: datacenter1 > === > Status=Up/Down > |/ State=Normal/Leaving/Joining/Moving > -- Address Load Tokens Owns (effective) Host ID > Rack > UN 127.0.0.3 75.2 KiB 16 76.0% > 6d194555-f6eb-41d0-c000-0003 rack1 > UN 127.0.0.2 86.77 KiB 16 59.3% > 6d194555-f6eb-41d0-c000-0002 rack1 > UN 127.0.0.1 80.88 KiB 16 64.7% > 6d194555-f6eb-41d0-c000-0001 rack1 > {code} > After: > {code} > Datacenter: datacenter1 > === > Status=Up/Down > |/ State=Normal/Leaving/Joining/Moving > -- Address Load Tokens Owns (effective) Host ID > Rack > UN 127.0.0.3 149.62 KiB 16 76.0% > 6d194555-f6eb-41d0-c000-0003 rack1 > UN 127.0.0.2 155.48 KiB 16 59.3% > 6d194555-f6eb-41d0-c000-0002 rack1 > UN 127.0.0.1 75.74 KiB 16 64.7% > 6d194555-f6eb-41d0-c000-0001 rack1 > {code} > On previous tests of this I have created a table with a replication factor of > 1, inserted some data before the swap. After the swap the data on nodes 2 > and 3 is now missing. > One theory I have is that I am using different port numbers for the different > nodes, and I am only swapping the ip addresses and not the port numbers, so > the ip:port still looks unique > i.e. 127.0.0.2:9043 becomes 127.0.0.2:9044 > and 127.0.0.3:9044 becomes 127.0.0.3:9043 > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
Re: [PR] CASSANDRA-19457: Object reference in Micrometer metrics prevent GC from reclaiming Session instances [cassandra-java-driver]
absurdfarce commented on PR #1916: URL: https://github.com/apache/cassandra-java-driver/pull/1916#issuecomment-2043079384 Big 👍 to @adutra's suggestion above. I was thinking about this a bit after we last talked about it, @SiyaoIsHiding. We don't need to test the OOM directly; in this case we know the root cause of the OOM (sessions not being cleaned up), so confirming that sessions are removed on shutdown should be more than good enough here. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
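As a rough illustration of the kind of regression test that comment points at, here is a driver-agnostic Java sketch that assumes nothing about the java-driver API: it checks that an object is actually reclaimable once it has been closed and its last strong reference dropped. The FakeSession type and the reclamation loop are placeholders; a real test in the driver would close a genuine Session and could additionally assert that its meters disappear from the Micrometer registry.

{code:java}
import java.lang.ref.WeakReference;

// Driver-agnostic sketch of the check suggested above: rather than reproducing the OOM,
// verify that the object whose metrics held the reference becomes unreachable after close().
// "FakeSession" is a placeholder and not the java-driver Session API.
public class SessionReclaimCheck
{
    static class FakeSession implements AutoCloseable
    {
        @Override
        public void close() { /* a real session would also deregister its meters here */ }
    }

    public static void main(String[] args) throws InterruptedException
    {
        FakeSession session = new FakeSession();
        WeakReference<FakeSession> ref = new WeakReference<>(session);

        session.close();
        session = null; // drop the last strong reference, as a real test would after closing

        // Nudge the collector and wait for the weak reference to clear; a real test would
        // use an awaitility-style loop with a timeout instead of a fixed number of tries.
        for (int i = 0; i < 10 && ref.get() != null; i++)
        {
            System.gc();
            Thread.sleep(100);
        }

        if (ref.get() != null)
            throw new AssertionError("session was not reclaimed; something still references it");
        System.out.println("session reclaimed after close()");
    }
}
{code}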
[jira] [Updated] (CASSANDRA-19525) Optionally avoid hint transfer during decommission - port from 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Caleb Rackliffe updated CASSANDRA-19525: Status: Ready to Commit (was: Review In Progress) > Optionally avoid hint transfer during decommission - port from 5.0 > -- > > Key: CASSANDRA-19525 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19525 > Project: Cassandra > Issue Type: Improvement > Components: Consistency/Hints >Reporter: Paul Chandler >Assignee: Paul Chandler >Priority: Normal > Fix For: 4.0.x, 4.1.x > > Attachments: CASSANDRA-19525_4.0.patch, CASSANDRA-19525_4.1.patch, > ci_summary.html > > > This ticket is to port the changes already made for > https://issues.apache.org/jira/browse/CASSANDRA-17808 to 4.0 and 4.1 > This will allow the option to turn off the transferring of hints during > decommission (specifically unbootstrap) > This also allows the hints to be transferred at a higher rate during > decommission, as the hinted_handoff_throttle is not divided by the number of > nodes in the cluster for the unbootstrap process. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19525) Optionally avoid hint transfer during decommission - port from 5.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Caleb Rackliffe updated CASSANDRA-19525: Reviewers: Brandon Williams, Caleb Rackliffe (was: Caleb Rackliffe) > Optionally avoid hint transfer during decommission - port from 5.0 > -- > > Key: CASSANDRA-19525 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19525 > Project: Cassandra > Issue Type: Improvement > Components: Consistency/Hints >Reporter: Paul Chandler >Assignee: Paul Chandler >Priority: Normal > Fix For: 4.0.x, 4.1.x > > Attachments: CASSANDRA-19525_4.0.patch, CASSANDRA-19525_4.1.patch, > ci_summary.html > > > This ticket is to port the changes already made for > https://issues.apache.org/jira/browse/CASSANDRA-17808 to 4.0 and 4.1 > This will allow the option to turn off the transferring of hints during > decommission (specifically unbootstrap) > This also allows the hints to be transferred at a higher rate during > decommission, as the hinted_handoff_throttle is not divided by the number of > nodes in the cluster for the unbootstrap process. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-19284) Harry overrides model
[ https://issues.apache.org/jira/browse/CASSANDRA-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-19284: Fix Version/s: 5.1 Source Control Link: https://github.com/apache/cassandra/commit/6b48f8a11dbad8c0653309eb8193fa6157bba5d8 Resolution: Fixed Status: Resolved (was: Ready to Commit) > Harry overrides model > - > > Key: CASSANDRA-19284 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19284 > Project: Cassandra > Issue Type: New Feature > Components: Test/fuzz >Reporter: Alex Petrov >Assignee: Alex Petrov >Priority: High > Fix For: 5.1 > > Attachments: ci_summary-1.html, ci_summary.html, result_details.tar.gz > > > Harry model to allow providing specific values for the test. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19284) Harry overrides model
[ https://issues.apache.org/jira/browse/CASSANDRA-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834957#comment-17834957 ] Alex Petrov commented on CASSANDRA-19284: - [~maedhroz] thank you for the review! > Harry overrides model > - > > Key: CASSANDRA-19284 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19284 > Project: Cassandra > Issue Type: New Feature > Components: Test/fuzz >Reporter: Alex Petrov >Assignee: Alex Petrov >Priority: High > Fix For: 5.1 > > Attachments: ci_summary-1.html, ci_summary.html, result_details.tar.gz > > > Harry model to allow providing specific values for the test. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
(cassandra) branch trunk updated: Harry model that supports value overrides: an ability to provide specific values for clustering, regular, and static columns
This is an automated email from the ASF dual-hosted git repository. ifesdjeen pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/cassandra.git The following commit(s) were added to refs/heads/trunk by this push: new 6b48f8a11d Harry model that supports value overrides: an ability to provide specific values for clustering, regular, and static columns 6b48f8a11d is described below commit 6b48f8a11dbad8c0653309eb8193fa6157bba5d8 Author: Alex Petrov AuthorDate: Wed Jan 17 19:12:43 2024 +0100 Harry model that supports value overrides: an ability to provide specific values for clustering, regular, and static columns Patch by Alex Petrov; reviewed by Caleb Rackliffe for CASSANDRA-19284 --- .../cassandra/distributed/shared/ClusterUtils.java | 2 + .../fuzz/harry/examples/RepairBurnTest.java| 138 .../dsl/HistoryBuilderIntegrationTest.java | 153 ++--- .../HistoryBuilderOverridesIntegrationTest.java| 359 + .../integration/model/IntegrationTestBase.java | 12 +- .../model/ReconcilerIntegrationTest.java | 23 +- .../fuzz/ring/ConsistentBootstrapTest.java | 8 +- .../cassandra/fuzz/sai/SingleNodeSAITest.java | 11 +- .../cassandra/fuzz/sai/StaticsTortureTest.java | 20 +- .../cassandra/harry/checker/ModelChecker.java | 1 + .../org/apache/cassandra/harry/ddl/ColumnSpec.java | 44 ++- .../org/apache/cassandra/harry/ddl/SchemaSpec.java | 36 ++- .../apache/cassandra/harry/dsl/ArrayWrapper.java | 49 +++ .../cassandra/harry/dsl/BatchVisitBuilder.java | 11 +- .../apache/cassandra/harry/dsl/HistoryBuilder.java | 141 +--- .../cassandra/harry/dsl/OverridingBijection.java | 84 + .../cassandra/harry/dsl/OverridingCkGenerator.java | 153 + .../cassandra/harry/dsl/PartitionVisitState.java | 63 ++-- .../harry/dsl/PartitionVisitStateImpl.java | 115 +++ .../harry/dsl/ReplayingHistoryBuilder.java | 13 +- .../harry/dsl/SingleOperationBuilder.java | 5 +- .../harry/dsl/SingleOperationVisitBuilder.java | 72 +++-- .../harry/dsl/ValueDescriptorIndexGenerator.java | 13 +- .../apache/cassandra/harry/dsl/ValueHelper.java| 74 + .../apache/cassandra/harry/dsl/ValueOverrides.java | 24 ++ .../apache/cassandra/harry/gen/DataGenerators.java | 24 +- .../apache/cassandra/harry/model/NoOpChecker.java | 16 +- .../cassandra/harry/operations/Relation.java | 2 +- 28 files changed, 1445 insertions(+), 221 deletions(-) diff --git a/test/distributed/org/apache/cassandra/distributed/shared/ClusterUtils.java b/test/distributed/org/apache/cassandra/distributed/shared/ClusterUtils.java index 3d3b9f3958..3e60a02523 100644 --- a/test/distributed/org/apache/cassandra/distributed/shared/ClusterUtils.java +++ b/test/distributed/org/apache/cassandra/distributed/shared/ClusterUtils.java @@ -545,6 +545,8 @@ public class ClusterUtils public static void unpauseCommits(IInvokableInstance instance) { +if (instance.isShutdown()) +return; instance.runOnInstance(() -> { TestProcessor processor = (TestProcessor) ((ClusterMetadataService.SwitchableProcessor) ClusterMetadataService.instance().processor()).delegate(); processor.unpause(); diff --git a/test/distributed/org/apache/cassandra/fuzz/harry/examples/RepairBurnTest.java b/test/distributed/org/apache/cassandra/fuzz/harry/examples/RepairBurnTest.java new file mode 100644 index 00..4092d6a02f --- /dev/null +++ b/test/distributed/org/apache/cassandra/fuzz/harry/examples/RepairBurnTest.java @@ -0,0 +1,138 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.cassandra.fuzz.harry.examples; + +import java.util.Arrays; +import java.util.Random; + +import org.junit.BeforeClass; +import org.junit.Test; + +import org.apache.cassandra.distributed.api.Feature; +import org.apache.cassandra.fuzz.harry.integration.model.IntegrationTestBase; +import org.apache.cassandra.harry.checker.ModelChecker; +import org.apache.cassandra.harry.ddl.ColumnSpec; +import org.apache.cassandra.harry.ddl.SchemaSpec; +import org.apache.ca
[jira] [Commented] (CASSANDRA-19529) Latency regression on 4.1 comparing to 4.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834942#comment-17834942 ] Nicolas Henneaux commented on CASSANDRA-19529: -- It adds between 1 and 2 ms, so it is not big but not negligible. > Latency regression on 4.1 comparing to 4.0 > -- > > Key: CASSANDRA-19529 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19529 > Project: Cassandra > Issue Type: Bug > Components: Feature/Authorization >Reporter: Nicolas Henneaux >Priority: Normal > Fix For: 4.1.x, 5.0.x, 5.x > > Attachments: screenshot-1.png, screenshot-2.png > > > When upgrading from Cassandra 4.0.10 to 4.1.3, I noticed an increase from > application point of view latency from ~8ms to ~15ms when upgrading to > Cassandra. The latency includes 3 simple queries (INSERT + SELECT (PK+CK) + > UPDATE) plus application overhead. > It has been investigated in CASSANDRA-18766 to realize it is not related. > I tested to downgrade to 4.1alpha1 and the latency regression is still there > with same value. > The version 4.1.4 has the same issue. > In a graph how it looks like > !screenshot-1.png! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19529) Latency regression on 4.1 comparing to 4.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834941#comment-17834941 ] Brandon Williams commented on CASSANDRA-19529: -- How much latency are we talking about? It looks like it was previously about 8ms, then jumped to 13-14ms in 4.1, and after caching is back down to around 8ms. > Latency regression on 4.1 comparing to 4.0 > -- > > Key: CASSANDRA-19529 > URL: https://issues.apache.org/jira/browse/CASSANDRA-19529 > Project: Cassandra > Issue Type: Bug > Components: Feature/Authorization >Reporter: Nicolas Henneaux >Priority: Normal > Fix For: 4.1.x, 5.0.x, 5.x > > Attachments: screenshot-1.png, screenshot-2.png > > > When upgrading from Cassandra 4.0.10 to 4.1.3, I noticed an increase from > application point of view latency from ~8ms to ~15ms when upgrading to > Cassandra. The latency includes 3 simple queries (INSERT + SELECT (PK+CK) + > UPDATE) plus application overhead. > It has been investigated in CASSANDRA-18766 to realize it is not related. > I tested to downgrade to 4.1alpha1 and the latency regression is still there > with same value. > The version 4.1.4 has the same issue. > In a graph how it looks like > !screenshot-1.png! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-19529) Latency regression on 4.1 comparing to 4.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834939#comment-17834939 ] Nicolas Henneaux commented on CASSANDRA-19529:
--
I would expect the cache to mitigate it, but it seems that is not enough.
[jira] [Commented] (CASSANDRA-19529) Latency regression on 4.1 comparing to 4.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834938#comment-17834938 ] Brandon Williams commented on CASSANDRA-19529:
--
bq. there is still an impact in terms of latency when auth_read_consistency_level is not set to LOCAL_ONE.
A latency impact when using a greater CL should be expected though, no?
[jira] [Updated] (CASSANDRA-19529) Latency regression on 4.1 comparing to 4.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicolas Henneaux updated CASSANDRA-19529:
--
Attachment: screenshot-2.png
[jira] [Commented] (CASSANDRA-19529) Latency regression on 4.1 comparing to 4.0
[ https://issues.apache.org/jira/browse/CASSANDRA-19529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834937#comment-17834937 ] Nicolas Henneaux commented on CASSANDRA-19529:
--
Indeed, I have tried to play a bit with the roles_*, permissions_* and credentials_* settings. I was using the default values for all of them (2s). I tried setting a validity of 1d and an update interval of 1h for all of them. It is better that way, but there is still an impact in terms of latency when auth_read_consistency_level is not set to LOCAL_ONE.
!screenshot-2.png!
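For readers following the thread, a minimal cassandra.yaml sketch of the cache settings discussed in the comment above might look like the following. The 1d/1h values simply mirror the experiment and are illustrative rather than recommended defaults, and the option names assume the 4.1-style duration-based configuration format:
{noformat}
# Illustrative cassandra.yaml fragment (4.1-style duration values) mirroring
# the experiment above; not a recommendation, tune to your own security needs.
roles_validity: 1d
roles_update_interval: 1h
permissions_validity: 1d
permissions_update_interval: 1h
credentials_validity: 1d
credentials_update_interval: 1h

# Reading auth data at LOCAL_ONE avoids a cross-node quorum read on a cache
# miss, at the cost of weaker consistency for authentication/authorization data.
auth_read_consistency_level: LOCAL_ONE
{noformat}
Longer validity/update intervals reduce how often auth reads hit the cluster, while a lower auth_read_consistency_level removes the quorum round trip on a cache miss; both trade consistency or revocation speed for latency.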
[jira] [Commented] (CASSANDRA-19537) Unicode Code Points incorrectly sized in protocol response
[ https://issues.apache.org/jira/browse/CASSANDRA-19537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834928#comment-17834928 ] Brandon Williams commented on CASSANDRA-19537:
--
[~yifanc] do you have any thoughts here?
> Unicode Code Points incorrectly sized in protocol response
> --
>
> Key: CASSANDRA-19537
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19537
> Project: Cassandra
> Issue Type: Bug
> Components: CQL/Interpreter
> Reporter: Andrew Hogg
> Priority: Normal
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Within a query, we have sent in the character U+10FFFF, the highest permissible Unicode code point. This is encoded in UTF-8 using 4 bytes and sent. When the query issues a warning in the response (such as a tombstone warning, which includes the query sent), the warning string in the protocol is specified as a short length, followed by the string bytes.
>
> CBUtil.WriteString gets the length using the following code:
> {code:java}
> int length = TypeSizes.encodedUTF8Length(str);
> {code}
> This in turn gets the length of the string based on a calculation:
> {noformat}
> public static int encodedUTF8Length(String st)
> {
>     int strlen = st.length();
>     int utflen = 0;
>     for (int i = 0; i < strlen; i++)
>     {
>         int c = st.charAt(i);
>         if ((c >= 0x0001) && (c <= 0x007F))
>             utflen++;
>         else if (c > 0x07FF)
>             utflen += 3;
>         else
>             utflen += 2;
>     }
>     return utflen;
> }
> {noformat}
> The use of st.length() within this function causes problems - it treats the string as UTF-16, so the 4-byte UTF-8 value becomes a 2-character UTF-16 surrogate pair, both characters of which are high values considered to be 3 bytes in length each, making a total length of 6 bytes.
>
> Using some test code:
> {noformat}
> import java.nio.charset.StandardCharsets;
>
> byte[] utf8Bytes = { (byte) 244, (byte) 143, (byte) 191, (byte) 191 };
> var st = new String(utf8Bytes, StandardCharsets.UTF_8);
> System.out.println(st);
>
> int strlen = st.length();
> System.out.println(strlen);
>
> int utflen = 0;
> for (int i = 0; i < strlen; i++)
> {
>     int c = st.charAt(i);
>     if ((c >= 0x0001) && (c <= 0x007F))
>         utflen++;
>     else if (c > 0x07FF)
>         utflen += 3;
>     else
>         utflen += 2;
> }
> System.out.println(utflen);
>
> byte[] roundTrip = st.getBytes(StandardCharsets.UTF_8);
> for (byte b : roundTrip)
> {
>     System.out.print(b & 0xFF);
>     System.out.print(" ");
> }
> {noformat}
> The 4-byte UTF-8 sequence is seen by st.length() as 2 characters, whose UTF-16 code units are 56319 and 57343 respectively; since both are above 2047 (0x07FF), 3 is added to the length each time.
> At the byte level the response message does correctly contain the UTF-8 character as 244 143 191 191, but the incorrect length results in a buffer overread, which offsets the following reads. This produces a few different possible errors, all relating to misalignment of the buffer read versus the expected value at that point in the buffer.
>
> The issue was specifically found in 4.1, but appears to have existed for a while; it is specifically due to operating outside of the UTF-16 BMP range, in the higher planes.
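As a rough illustration of the direction a fix could take (this is not the committed Cassandra patch), a code-point-aware length calculation counts supplementary-plane characters as 4 bytes instead of 6; the class name below is made up for the example:
{code:java}
public final class Utf8LengthExample
{
    /**
     * Counts standard UTF-8 bytes (NUL as one byte, unlike the modified UTF-8
     * convention) by iterating over code points rather than UTF-16 chars.
     */
    public static int encodedUTF8Length(String st)
    {
        int utflen = 0;
        for (int i = 0; i < st.length(); )
        {
            int cp = st.codePointAt(i);      // a surrogate pair yields one code point
            if (cp <= 0x7F)
                utflen += 1;
            else if (cp <= 0x7FF)
                utflen += 2;
            else if (cp <= 0xFFFF)
                utflen += 3;                 // rest of the BMP
            else
                utflen += 4;                 // supplementary planes, e.g. U+10FFFF
            i += Character.charCount(cp);    // advance by 1 or 2 chars
        }
        return utflen;
    }

    public static void main(String[] args)
    {
        String s = new String(new byte[]{ (byte) 244, (byte) 143, (byte) 191, (byte) 191 },
                              java.nio.charset.StandardCharsets.UTF_8); // U+10FFFF
        // Prints 4, matching s.getBytes(UTF_8).length; the per-char loop quoted in the
        // ticket description returns 6 for the same string.
        System.out.println(encodedUTF8Length(s));
    }
}
{code}
The key difference from the per-char loop quoted in the description is the use of codePointAt and Character.charCount, which treat a surrogate pair as a single code point so its UTF-8 size is counted once, as 4 bytes.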