[jira] [Commented] (CASSANDRA-12525) When adding new nodes to a cluster which has authentication enabled, we end up losing cassandra user's current credentials and they get reverted back to default cassandra/cassandra credentials
[ https://issues.apache.org/jira/browse/CASSANDRA-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656461#comment-17656461 ] Stefan Miklosovic commented on CASSANDRA-12525: --- [~xgerman42] I have been thinking about this a little more lately and I will try to jump in to fill the gaps, if you do not mind. I have to admit that it might be a little off-putting to jump through all these hurdles suddenly at once. Also, we may realize that, for some reason, the steps I suggested are not entirely correct (that might happen, right!?) and we would just kill more time on this than necessary. > When adding new nodes to a cluster which has authentication enabled, we end > up losing the cassandra user's current credentials and they get reverted back to > the default cassandra/cassandra credentials > - > > Key: CASSANDRA-12525 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12525 > Project: Cassandra > Issue Type: Bug > Components: Cluster/Schema, Local/Config >Reporter: Atin Sood >Assignee: German Eichberger >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 4.x > > Time Spent: 2h 40m > Remaining Estimate: 0h > > Made the following observation: > When adding new nodes to an existing C* cluster with authentication enabled > we end up losing the password information for the `cassandra` user. > Initial Setup > - Create a 5 node cluster with system_auth having RF=5 and > NetworkTopologyStrategy > - Enable PasswordAuthenticator on this cluster and update the password for > the 'cassandra' user to, say, 'password' via an ALTER query > - Make sure you run nodetool repair on all the nodes > Test case > - Now go ahead and add 5 more nodes to this cluster. 
> - Run nodetool repair on all 10 nodes now > - Decommission the original 5 nodes such that only the 5 new nodes are in the > cluster now > - Run cqlsh and try to connect to this cluster using the old user name and > password, cassandra/password > I was unable to connect to the nodes with the original credentials and was > only able to connect using the default cassandra/cassandra credentials > From the conversation over IRC: > `beobal: sood: that definitely shouldn't happen. The new nodes should only > create the default superuser role if there are 0 roles currently defined > (including that default one)` -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
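The invariant beobal states in the IRC quote (create the default superuser only when zero roles exist) can be sketched as below. This is an illustrative guard, not the actual CassandraRoleManager code; the class and method names are hypothetical:

```java
import java.util.Collections;
import java.util.Set;

public class DefaultRoleGuard
{
    // Hypothetical sketch of the rule from the IRC quote: a joining node may
    // create the default cassandra/cassandra role only if no roles exist at
    // all, so an altered password is never silently reverted.
    public static boolean shouldCreateDefaultSuperuser(Set<String> existingRoles)
    {
        return existingRoles.isEmpty();
    }

    public static void main(String[] args)
    {
        // Fresh cluster, no roles yet: the default role may be created.
        System.out.println(shouldCreateDefaultSuperuser(Collections.<String>emptySet()));
        // The 'cassandra' role already exists (possibly with a changed
        // password): a new node must not recreate it.
        System.out.println(shouldCreateDefaultSuperuser(Collections.singleton("cassandra")));
    }
}
```

The bug report above suggests this guard was not holding across node additions and decommissions, since replacement nodes recreated the default credentials.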
[jira] [Comment Edited] (CASSANDRA-18061) Add compaction type output result for nodetool compactionhistory
[ https://issues.apache.org/jira/browse/CASSANDRA-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656433#comment-17656433 ] maxwellguo edited comment on CASSANDRA-18061 at 1/10/23 5:57 AM: - Thanks [~smiklosovic], the new test against 3.x is updated now, and the java8 precommit jvm-upgrade-dtest is green: https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/362/workflows/e9b6a349-3d7d-4b50-9018-e407e584be8f commit change: https://github.com/apache/cassandra/pull/2047/files#diff-ad515f26e664fe1525daf7c6388fe1a66a853bd389c350cd39c76aed8e121d43R47 I also checked the git history of SystemKeyspace.java: some system tables have added new regular columns (not primary key columns) without a SystemKeyspaceMigrator being added. If a system column's type changed I think a SystemKeyspaceMigrator would still be needed, but in this case only a new column is added; if the column is selected and null is returned, nodetool compactionhistory will display UNKNOWN when an older version does not have this column, so I think backward compatibility has been handled. The test test/distributed/org/apache/cassandra/distributed/upgrade/CompactionHistorySystemTableUpgradeTest.java already covers the backward-compatibility case. [~brandon.williams] [~mck] [~jlewandowski] I saw SystemKeyspaceMigrator41 was added by you, and it seems [~aweisberg] has added many columns to system keyspace tables, so can you help with this too? I just added a new regular column to the compaction_history system table. Should it do some migration of system keyspace data, like some of the tables in SystemKeyspaceMigrator41? I think there is no need, but backward compatibility should be handled. was (Author: maxwellguo): [~smiklosovic] The new test against 3.x is updated now, and the java8 precommit jvm-upgrade-dtest is green: https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/362/workflows/e9b6a349-3d7d-4b50-9018-e407e584be8f commit change: https://github.com/apache/cassandra/pull/2047/files#diff-ad515f26e664fe1525daf7c6388fe1a66a853bd389c350cd39c76aed8e121d43R47 I also checked the git history of SystemKeyspace.java: some system tables have added new regular columns (not primary key columns) without a SystemKeyspaceMigrator being added. If a system column's type changed I think a SystemKeyspaceMigrator would still be needed, but in this case only a new column is added; if the column is selected and null is returned, nodetool compactionhistory will display UNKNOWN when an older version does not have this column, so I think backward compatibility has been handled. [~brandon.williams] [~mck] [~jlewandowski] I saw SystemKeyspaceMigrator41 was added by you, and it seems [~aweisberg] has added many columns to system keyspace tables, so can you help with this too? I just added a new regular column to the compaction_history system table. Should it do some migration of system keyspace data, like some of the tables in SystemKeyspaceMigrator41? I think there is no need, but backward compatibility should be handled. > Add compaction type output result for nodetool compactionhistory > > > Key: CASSANDRA-18061 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18061 > Project: Cassandra > Issue Type: Improvement > Components: Local/Compaction, Tool/nodetool >Reporter: maxwellguo >Assignee: maxwellguo >Priority: Low > Fix For: 4.x > > Time Spent: 2h 50m > Remaining Estimate: 0h > > If we want to see whether we have made a compaction and what kind of > compaction we have done on this node, we may look at the > compaction_history system table for details or use the nodetool > compactionhistory command. But I found that the table does not specify the > compaction type, and neither does the compactionhistory command: index build, > compaction, cleanup or scrub for this node. So I think it may be necessary > to add a compaction type column to system.compaction_history, so we can get > the type of compaction through the system.compaction_history table or > nodetool compactionhistory and see whether we have run a major compaction on > this node. :)
[jira] [Commented] (CASSANDRA-18061) Add compaction type output result for nodetool compactionhistory
[ https://issues.apache.org/jira/browse/CASSANDRA-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656433#comment-17656433 ] maxwellguo commented on CASSANDRA-18061: [~smiklosovic] The new test against 3.x is updated now, and the java8 precommit jvm-upgrade-dtest is green: https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/362/workflows/e9b6a349-3d7d-4b50-9018-e407e584be8f commit change: https://github.com/apache/cassandra/pull/2047/files#diff-ad515f26e664fe1525daf7c6388fe1a66a853bd389c350cd39c76aed8e121d43R47 I also checked the git history of SystemKeyspace.java: some system tables have added new regular columns (not primary key columns) without a SystemKeyspaceMigrator being added. If a system column's type changed I think a SystemKeyspaceMigrator would still be needed, but in this case only a new column is added; if the column is selected and null is returned, nodetool compactionhistory will display UNKNOWN when an older version does not have this column, so I think backward compatibility has been handled. [~brandon.williams] [~mck] [~jlewandowski] I saw SystemKeyspaceMigrator41 was added by you, and it seems [~aweisberg] has added many columns to system keyspace tables, so can you help with this too? I just added a new regular column to the compaction_history system table. Should it do some migration of system keyspace data, like some of the tables in SystemKeyspaceMigrator41? I think there is no need, but backward compatibility should be handled. > Add compaction type output result for nodetool compactionhistory > > > Key: CASSANDRA-18061 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18061 > Project: Cassandra > Issue Type: Improvement > Components: Local/Compaction, Tool/nodetool >Reporter: maxwellguo >Assignee: maxwellguo >Priority: Low > Fix For: 4.x > > Time Spent: 2h 50m > Remaining Estimate: 0h > > If we want to see whether we have made a compaction and what kind of > compaction we have done on this node, we may look at the > compaction_history system table for details or use the nodetool > compactionhistory command. But I found that the table does not specify the > compaction type, and neither does the compactionhistory command: index build, > compaction, cleanup or scrub for this node. So I think it may be necessary > to add a compaction type column to system.compaction_history, so we can get > the type of compaction through the system.compaction_history table or > nodetool compactionhistory and see whether we have run a major compaction on > this node. :)
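The backward-compatibility behaviour discussed above (rows written before the upgrade have no compaction_type value, and nodetool compactionhistory renders them as UNKNOWN) boils down to a null-safe fallback. A minimal sketch, using a plain Map in place of the real result-set row; the names are illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

public class CompactionTypeFallback
{
    // Illustrative fallback: a pre-upgrade row has no compaction_type value,
    // so render UNKNOWN instead of failing or printing null.
    public static String compactionType(Map<String, String> row)
    {
        String type = row.get("compaction_type");
        return type != null ? type : "UNKNOWN";
    }

    public static void main(String[] args)
    {
        Map<String, String> preUpgradeRow = new HashMap<>();   // column absent
        Map<String, String> postUpgradeRow = new HashMap<>();
        postUpgradeRow.put("compaction_type", "COMPACTION");
        System.out.println(compactionType(preUpgradeRow));   // UNKNOWN
        System.out.println(compactionType(postUpgradeRow));  // COMPACTION
    }
}
```

With a read-side fallback like this, no data migration of existing compaction_history rows is required, which is the argument made in the comment above.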
[jira] [Commented] (CASSANDRA-14013) Data loss in snapshots keyspace after service restart
[ https://issues.apache.org/jira/browse/CASSANDRA-14013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656359#comment-17656359 ] Paulo Motta commented on CASSANDRA-14013: - Good catch! I've added two test cases on {{DescriptorTest}} for non .db files [on this commit|https://github.com/pauloricardomg/cassandra/commit/d5232cbc225b7d7d7b1adf67bd819dfea0d00b79]. I've incorporated [your commit|https://github.com/pauloricardomg/cassandra/pull/2/commits/d5eb3b69bb4d7262fd19368082dbd466b77e7b90] + the test change [above|https://github.com/pauloricardomg/cassandra/commit/d5232cbc225b7d7d7b1adf67bd819dfea0d00b79] into the trunk branch, rebased and resubmitted CI for all branches: |branch||CI|| |[CASSANDRA-14013-4.0|https://github.com/pauloricardomg/cassandra/tree/CASSANDRA-14013-4.0]|[#2171|https://ci-cassandra.apache.org/view/patches/job/Cassandra-devbranch/2171/] (running)| |[CASSANDRA-14013-4.1|https://github.com/pauloricardomg/cassandra/tree/CASSANDRA-14013-4.1]|[#2170|https://ci-cassandra.apache.org/view/patches/job/Cassandra-devbranch/2170/] (running)| |[CASSANDRA-14013-trunk|https://github.com/pauloricardomg/cassandra/tree/CASSANDRA-14013-trunk]|[#2169|https://ci-cassandra.apache.org/view/patches/job/Cassandra-devbranch/2169/] (running)| After CI looks good for all branches this should be good to go from my side. 
> Data loss in snapshots keyspace after service restart > - > > Key: CASSANDRA-14013 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14013 > Project: Cassandra > Issue Type: Bug > Components: Legacy/Core, Local/Snapshots >Reporter: Gregor Uhlenheuer >Assignee: Stefan Miklosovic >Priority: Normal > Fix For: 4.0.x, 4.1.x, 4.x > > Time Spent: 10m > Remaining Estimate: 0h > > I am posting this bug in the hope of discovering the stupid mistake I am making, > because I can't imagine a reasonable answer for the behavior I see right now > :-) > In short, I observe data loss in a keyspace called *snapshots* after > restarting the Cassandra service. Say I have 1000 records in a table > called *snapshots.test_idx*; then after a restart the table has fewer entries or > is even empty. > My kind of "mysterious" observation is that it happens only in a keyspace > called *snapshots*... > h3. Steps to reproduce > These steps to reproduce show the described behavior in "most" attempts (not > every single time though). > {code} > # create keyspace > CREATE KEYSPACE snapshots WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': 1}; > # create table > CREATE TABLE snapshots.test_idx (key text, seqno bigint, primary key(key)); > # insert some test data > INSERT INTO snapshots.test_idx (key,seqno) values ('key1', 1); > ... > INSERT INTO snapshots.test_idx (key,seqno) values ('key1000', 1000); > # count entries > SELECT count(*) FROM snapshots.test_idx; > 1000 > # restart service > kill > cassandra -f > # count entries > SELECT count(*) FROM snapshots.test_idx; > 0 > {code} > I hope someone can point me to the obvious mistake I am making :-) > This happened to me using both Cassandra 3.9 and 3.11.0
[jira] [Updated] (CASSANDRA-17869) Add JDK17 option to cassandra-builds (build-scripts and jenkins dsl) and on jenkins agents
[ https://issues.apache.org/jira/browse/CASSANDRA-17869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-17869: - Reviewers: Brandon Williams > Add JDK17 option to cassandra-builds (build-scripts and jenkins dsl) and on > jenkins agents > -- > > Key: CASSANDRA-17869 > URL: https://issues.apache.org/jira/browse/CASSANDRA-17869 > Project: Cassandra > Issue Type: Task > Components: Build >Reporter: Michael Semb Wever >Assignee: Michael Semb Wever >Priority: Normal > > Add a JDK17 option to the cassandra-builds build-scripts; they currently only > support options {{8}} and {{11}}. > Add JDK17 to the matrix axes in the jenkins dsl. > Ensure JDK17 is installed on all the jenkins agents.
[jira] [Commented] (CASSANDRA-17869) Add JDK17 option to cassandra-builds (build-scripts and jenkins dsl) and on jenkins agents
[ https://issues.apache.org/jira/browse/CASSANDRA-17869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656329#comment-17656329 ] Michael Semb Wever commented on CASSANDRA-17869: Patch is ready for review: https://github.com/apache/cassandra-builds/compare/trunk...thelastpickle:cassandra-builds:mck/17869 (I still need to finish {{prepare_release.sh}}) > Add JDK17 option to cassandra-builds (build-scripts and jenkins dsl) and on > jenkins agents > -- > > Key: CASSANDRA-17869 > URL: https://issues.apache.org/jira/browse/CASSANDRA-17869 > Project: Cassandra > Issue Type: Task > Components: Build >Reporter: Michael Semb Wever >Assignee: Michael Semb Wever >Priority: Normal > > Add a JDK17 option to the cassandra-builds build-scripts; they currently only > support options {{8}} and {{11}}. > Add JDK17 to the matrix axes in the jenkins dsl. > Ensure JDK17 is installed on all the jenkins agents.
[jira] [Commented] (CASSANDRA-17869) Add JDK17 option to cassandra-builds (build-scripts and jenkins dsl) and on jenkins agents
[ https://issues.apache.org/jira/browse/CASSANDRA-17869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656328#comment-17656328 ] Michael Semb Wever commented on CASSANDRA-17869: I need this to go in before CASSANDRA-18133, because the required compatibility has to work in both directions between the two git repos. It makes sense to start here and isolate the cassandra-builds patch, making it forward-compatible with the in-tree patch in 18133. > Add JDK17 option to cassandra-builds (build-scripts and jenkins dsl) and on > jenkins agents > -- > > Key: CASSANDRA-17869 > URL: https://issues.apache.org/jira/browse/CASSANDRA-17869 > Project: Cassandra > Issue Type: Task > Components: Build >Reporter: Michael Semb Wever >Assignee: Michael Semb Wever >Priority: Normal > > Add a JDK17 option to the cassandra-builds build-scripts; they currently only > support options {{8}} and {{11}}. > Add JDK17 to the matrix axes in the jenkins dsl. > Ensure JDK17 is installed on all the jenkins agents.
[jira] [Commented] (CASSANDRA-18061) Add compaction type output result for nodetool compactionhistory
[ https://issues.apache.org/jira/browse/CASSANDRA-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656322#comment-17656322 ] Michael Semb Wever commented on CASSANDRA-18061: bq. Am I correct we need to support 4.0 -> 5.0 upgrade so we need to migrate compaction_history to a new table? Yes. All `(N-1).x.y` to `N.xx.yy` upgrade paths must be supported. That also means that if the patch doesn't support upgrades from 3.x it cannot be committed while build.xml is at 4.2 (you would then change the fixVersion to 5.x and wait until build.xml is bumped, or enhance the patch to support 3.x upgrades) > Add compaction type output result for nodetool compactionhistory > > > Key: CASSANDRA-18061 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18061 > Project: Cassandra > Issue Type: Improvement > Components: Local/Compaction, Tool/nodetool >Reporter: maxwellguo >Assignee: maxwellguo >Priority: Low > Fix For: 4.x > > Time Spent: 2h 50m > Remaining Estimate: 0h > > If we want to see whether we have made a compaction and what kind of > compaction we have done on this node, we may look at the > compaction_history system table for details or use the nodetool > compactionhistory command. But I found that the table does not specify the > compaction type, and neither does the compactionhistory command: index build, > compaction, cleanup or scrub for this node. So I think it may be necessary > to add a compaction type column to system.compaction_history, so we can get > the type of compaction through the system.compaction_history table or > nodetool compactionhistory and see whether we have run a major compaction on > this node. :)
[jira] [Commented] (CASSANDRA-18139) Revert changes to units output in FileUtils#stringifyFileSize post CASSANDRA-15234
[ https://issues.apache.org/jira/browse/CASSANDRA-18139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656316#comment-17656316 ] Ekaterina Dimitrova commented on CASSANDRA-18139: - I agree with your points. On the other hand, after CASSANDRA-18139 I made some effort to go back and revert other units changes in nodetool output with the support of other community members; FileUtils#stringifyFileSize was an honest miss, so if we revert, in theory we should be consistent with the current nodetool state, which should still have the old format of units in its output. The reason I suggest a flag is for people who might already have been using the new format in 4.1.0, so they can at least switch to it. Flag or not, though, we will break those people in a patch release. Not sure how many of them are on new clusters already using that output... considering we just released before Christmas. I am wondering if it makes sense to hit the dev and/or user list to gather some more feedback from people before moving forward one way or another. > Revert changes to units output in FileUtils#stringifyFileSize post > CASSANDRA-15234 > -- > > Key: CASSANDRA-18139 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18139 > Project: Cassandra > Issue Type: Bug >Reporter: Ekaterina Dimitrova >Priority: Normal > > As discussed in CASSANDRA-15234, FileUtils#stringifyFileSize is used in > nodetool output, which can break people parsing the nodetool output
[cassandra] branch cep-15-accord updated: Ninja: Add AccordTestUtils.parse which was missing in the latest commit
This is an automated email from the ASF dual-hosted git repository. dcapwell pushed a commit to branch cep-15-accord in repository https://gitbox.apache.org/repos/asf/cassandra.git The following commit(s) were added to refs/heads/cep-15-accord by this push: new 62f895adcf Ninja: Add AccordTestUtils.parse which was missing in the latest commit 62f895adcf is described below commit 62f895adcf2472e0bc3cef433d58c18106663dea Author: David Capwell AuthorDate: Mon Jan 9 13:20:58 2023 -0800 Ninja: Add AccordTestUtils.parse which was missing in the latest commit --- .../unit/org/apache/cassandra/service/accord/AccordTestUtils.java | 8 1 file changed, 8 insertions(+) diff --git a/test/unit/org/apache/cassandra/service/accord/AccordTestUtils.java b/test/unit/org/apache/cassandra/service/accord/AccordTestUtils.java index 4adad32d8a..20142c439b 100644 --- a/test/unit/org/apache/cassandra/service/accord/AccordTestUtils.java +++ b/test/unit/org/apache/cassandra/service/accord/AccordTestUtils.java @@ -174,6 +174,14 @@ public class AccordTestUtils return statement.createTxn(ClientState.forInternalCalls(), options); } +public static TransactionStatement parse(String query) +{ +TransactionStatement.Parsed parsed = (TransactionStatement.Parsed) QueryProcessor.parseStatement(query); +Assert.assertNotNull(parsed); +TransactionStatement statement = (TransactionStatement) parsed.prepare(ClientState.forInternalCalls()); +return statement; +} + public static Txn createTxn(int readKey, int... writeKeys) { StringBuilder sb = new StringBuilder("BEGIN TRANSACTION\n"); - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[cassandra-accord] branch trunk updated: fix java8 build (#25)
This is an automated email from the ASF dual-hosted git repository. benedict pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/cassandra-accord.git The following commit(s) were added to refs/heads/trunk by this push: new 63c37e2 fix java8 build (#25) 63c37e2 is described below commit 63c37e20cfe66a421c1b07ba1f430a9e6aabe4c5 Author: Benedict Elliott Smith AuthorDate: Mon Jan 9 21:31:40 2023 + fix java8 build (#25) --- .../main/java/accord/impl/InMemoryCommandStore.java | 4 ++-- .../src/main/java/accord/impl/SimpleProgressLog.java | 19 +-- .../src/test/java/accord/local/CommandTest.java | 3 ++- 3 files changed, 13 insertions(+), 13 deletions(-) diff --git a/accord-core/src/main/java/accord/impl/InMemoryCommandStore.java b/accord-core/src/main/java/accord/impl/InMemoryCommandStore.java index ce4562d..ad08d14 100644 --- a/accord-core/src/main/java/accord/impl/InMemoryCommandStore.java +++ b/accord-core/src/main/java/accord/impl/InMemoryCommandStore.java @@ -18,14 +18,14 @@ package accord.impl; +import accord.local.CommandStore; // java8 fails compilation if this is in correct position +import accord.local.SyncCommandStores.SyncCommandStore; // java8 fails compilation if this is in correct position import accord.api.Agent; import accord.api.DataStore; import accord.api.Key; import accord.api.ProgressLog; import accord.impl.InMemoryCommandStore.SingleThread.AsyncState; import accord.impl.InMemoryCommandStore.Synchronized.SynchronizedState; -import accord.local.CommandStore; // java8 fails compilation if this is in correct position -import accord.local.SyncCommandStores.SyncCommandStore; // java8 fails compilation if this is in correct position import accord.local.Command; import accord.local.CommandStore.RangesForEpoch; import accord.local.CommandsForKey; diff --git a/accord-core/src/main/java/accord/impl/SimpleProgressLog.java b/accord-core/src/main/java/accord/impl/SimpleProgressLog.java index f295235..afeb85e 100644 --- 
a/accord-core/src/main/java/accord/impl/SimpleProgressLog.java +++ b/accord-core/src/main/java/accord/impl/SimpleProgressLog.java @@ -29,10 +29,11 @@ import java.util.function.BiConsumer; import javax.annotation.Nullable; +import accord.utils.IntrusiveLinkedList; +import accord.utils.IntrusiveLinkedListNode; import accord.api.ProgressLog; import accord.api.RoutingKey; import accord.coordinate.*; -import accord.impl.SimpleProgressLog.Instance.State.Monitoring; import accord.local.*; import accord.local.Node.Id; import accord.local.Status.Known; @@ -41,8 +42,6 @@ import accord.messages.InformDurable; import accord.messages.SimpleReply; import accord.primitives.*; import accord.topology.Topologies; -import accord.utils.IntrusiveLinkedList; -import accord.utils.IntrusiveLinkedListNode; import accord.utils.Invariants; import org.apache.cassandra.utils.concurrent.Future; @@ -94,7 +93,7 @@ public class SimpleProgressLog implements ProgressLog.Factory this.node = node; } -class Instance extends IntrusiveLinkedList implements ProgressLog, Runnable +class Instance extends IntrusiveLinkedList implements ProgressLog, Runnable { class State { @@ -272,7 +271,7 @@ public class SimpleProgressLog implements ProgressLog.Factory } // exists only on home shard -class DisseminateState extends Monitoring +class DisseminateState extends State.Monitoring { class CoordinateAwareness implements Callback { @@ -436,7 +435,7 @@ public class SimpleProgressLog implements ProgressLog.Factory } } -class BlockingState extends Monitoring +class BlockingState extends State.Monitoring { Known blockedUntil = Nothing; @@ -524,7 +523,7 @@ public class SimpleProgressLog implements ProgressLog.Factory } } -class NonHomeState extends Monitoring +class NonHomeState extends State.Monitoring { NonHomeState() { @@ -778,14 +777,14 @@ public class SimpleProgressLog implements ProgressLog.Factory } @Override -public void addFirst(Monitoring add) +public void addFirst(State.Monitoring add) { super.addFirst(add); 
ensureScheduled(); } @Override -public void addLast(Monitoring add) +public void addLast(State.Monitoring add) { throw new UnsupportedOperationException(); } @@ -805,7 +804,7 @@ public class SimpleProgressLog implements ProgressLog.Factory isScheduled = false; try { -for (Monitoring run : this) +for (State.Monitoring run : this) { if (r
[jira] [Commented] (CASSANDRA-18121) Dtests need python 3.11 support
[ https://issues.apache.org/jira/browse/CASSANDRA-18121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656296#comment-17656296 ] Brandon Williams commented on CASSANDRA-18121: -- If I [reduce|https://github.com/driftx/cassandra-dtest/commit/cde46bb7369098232da15ef9c7377f1aade66d61] the number of rows the test is doing by 10x, it [still fails|https://app.circleci.com/pipelines/github/driftx/cassandra/744/workflows/ed8880f6-27ea-4c51-84e5-9bbd97fb0715/jobs/9032] on medium where [other python versions pass|https://app.circleci.com/pipelines/github/driftx/cassandra/744/workflows/ed8880f6-27ea-4c51-84e5-9bbd97fb0715/jobs/9036]. > Dtests need python 3.11 support > --- > > Key: CASSANDRA-18121 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18121 > Project: Cassandra > Issue Type: Improvement > Components: Test/dtest/python >Reporter: Brandon Williams >Assignee: Brandon Williams >Priority: Normal > > In order to have cqlsh support 3.11 the dtests also need to support 3.11 so > the cqlsh dtests can be run.
[jira] [Updated] (CASSANDRA-18140) getsstables --show-levels JMX serialization error
[ https://issues.apache.org/jira/browse/CASSANDRA-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan West updated CASSANDRA-18140: Test and Documentation Plan: manual verification Status: Patch Available (was: Open) [https://github.com/jrwest/cassandra/tree/jwest/18140] Tests: [j11|https://app.circleci.com/pipelines/github/jrwest/cassandra/136/workflows/43420c29-1030-4629-adca-784492e481b1] [j8|https://app.circleci.com/pipelines/github/jrwest/cassandra/136/workflows/3e7bc7a1-8761-4cea-a5ea-c4afa372872c] > getsstables --show-levels JMX serialization error > - > > Key: CASSANDRA-18140 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18140 > Project: Cassandra > Issue Type: Bug > Components: Tool/nodetool >Reporter: Jordan West >Assignee: Jordan West >Priority: Normal > > While the interface is compliant and tested by JMXStandardsTest the > implementation is not actually serializable: > {{java.io.NotSerializableException: > com.google.common.collect.AbstractMapBasedMultimap$AsMap}}
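The {{NotSerializableException}} above comes from returning Guava's {{Multimap.asMap()}} view directly over JMX; that view class does not implement {{Serializable}}. A common fix is to copy the view into plain JDK collections (HashMap/ArrayList) before returning it. The sketch below is an assumption about the shape of such a fix, using only JDK types so a plain Map stands in for the Guava view:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SerializableCopy
{
    // Copy an arbitrary (possibly non-serializable) map-of-collections view
    // into HashMap/ArrayList, both of which are Serializable and therefore
    // safe to return from a JMX operation.
    public static Map<String, List<String>> toSerializable(Map<String, ? extends Collection<String>> view)
    {
        Map<String, List<String>> copy = new HashMap<>();
        view.forEach((k, v) -> copy.put(k, new ArrayList<>(v)));
        return copy;
    }

    public static void main(String[] args) throws IOException
    {
        // "ks.table" and the level strings are made-up sample data.
        Map<String, List<String>> view = new HashMap<>();
        view.put("ks.table", Arrays.asList("L0: 4", "L1: 10"));
        Map<String, List<String>> copy = toSerializable(view);
        // Round-trip through Java serialization to show the copy is safe.
        new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(copy);
        System.out.println(copy);
    }
}
```

Whether the actual patch copies the multimap this way or changes the return type is not stated here; the point is only that JMX requires serializable concrete collections rather than Guava's internal views.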
[jira] [Commented] (CASSANDRA-18119) Handle sstable metadata stats file getting a new mtime after compaction has finished
[ https://issues.apache.org/jira/browse/CASSANDRA-18119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656282#comment-17656282 ] Josh McKenzie commented on CASSANDRA-18119: --- +1 on the code change. Looks like the trunk and 4.1 CI runs have failures in repeat runs on LogTransactionTest.java; the one on 4.1 [looks like it might be related|https://app.circleci.com/pipelines/github/krummas/cassandra/846/workflows/f9ff701a-22f6-463b-b356-c37a61d24a75/jobs/7054/tests] to this change here; it now chains down to {code:java} static boolean removeUnfinishedLeftovers(Map.Entry> entry) { try(LogFile txn = LogFile.make(entry.getKey(), entry.getValue())) { logger.info("Verifying logfile transaction {}", txn); // We don't check / include the stats file timestamp on LogRecord creation / verification as that might // be modified by a race in compaction notification and then needlessly fail subsequent node starts. if (txn.verify(true)) // REVIEWER NOTE: this param leading to skipping TS calc on stats file is what we changed here{code} From the logs on the test failure, it looks like it was expecting true (files found and deleted correctly) but received false: {code:java} 34409 [junit-timeout] ERROR [main] 2022-12-13 15:45:01,465 LogFile.java:170 - Failed to read records for transaction log [nb_txn_compaction_1ae0fb60-7afd-11ed-bd8f-0b2a035c989d.log in build/test/cassandra/data/TransactionLogsTest/mockcf39-1add51e07afd11edbd8f0b2a035c989d/1, build/test/cassandra/data/TransactionLogs 34410 [junit-timeout] ERROR [main] 2022-12-13 15:45:01,465 LogTransaction.java:561 - Unexpected disk state: failed to read transaction log [nb_txn_compaction_1ae0fb60-7afd-11ed-bd8f-0b2a035c989d.log in build/test/cassandra/data/TransactionLogsTest/mockcf39-1add51e07afd11edbd8f0b2a035c989d/1, build/test/cassandra/da 34411 [junit-timeout] Files and contents follow:^M 34412 [junit-timeout] 
build/test/cassandra/data/TransactionLogsTest/mockcf39-1add51e07afd11edbd8f0b2a035c989d/1/nb_txn_compaction_1ae0fb60-7afd-11ed-bd8f-0b2a035c989d.log^M 34413 [junit-timeout] REMOVE:[/tmp/cassandra/build/test/cassandra/data/TransactionLogsTest/mockcf39-1add51e07afd11edbd8f0b2a035c989d/1/nb-0-big-,1670946301437,5][2873235910]^M 34414 [junit-timeout] REMOVE:[/tmp/cassandra/build/test/cassandra/data/TransactionLogsTest/mockcf39-1add51e07afd11edbd8f0b2a035c989d/2/nb-2-big-,1670946301441,5][1283732776]^M 34415 [junit-timeout] ADD:[/tmp/cassandra/build/test/cassandra/data/TransactionLogsTest/mockcf39-1add51e07afd11edbd8f0b2a035c989d/1/nb-1-big-,0,5][1197593494]^M 34416 [junit-timeout] ADD:[/tmp/cassandra/build/test/cassandra/data/TransactionLogsTest/mockcf39-1add51e07afd11edbd8f0b2a035c989d/2/nb-3-big-,0,5][1374830355]^M 34417 [junit-timeout] COMMIT:[,0,0][2613697770]^M 34418 [junit-timeout] ***This record should have been the last one in all replicas^M 34419 [junit-timeout] build/test/cassandra/data/TransactionLogsTest/mockcf39-1add51e07afd11edbd8f0b2a035c989d/2/nb_txn_compaction_1ae0fb60-7afd-11ed-bd8f-0b2a035c989d.log^M 34420 [junit-timeout] REMOVE:[/tmp/cassandra/build/test/cassandra/data/TransactionLogsTest/mockcf39-1add51e07afd11edbd8f0b2a035c989d/1/nb-0-big-,1670946301437,5][2873235910]^M 34421 [junit-timeout] REMOVE:[/tmp/cassandra/build/test/cassandra/data/TransactionLogsTest/mockcf39-1add51e07afd11edbd8f0b2a035c989d/2/nb-2-big-,1670946301441,5][1283732776]^M 34422 [junit-timeout] ADD:[/tmp/cassandra/build/test/cassandra/data/TransactionLogsTest/mockcf39-1add51e07afd11edbd8f0b2a035c989d/1/nb-1-big-,0,5][1197593494]^M 34423 [junit-timeout] ADD:[/tmp/cassandra/build/test/cassandra/data/TransactionLogsTest/mockcf39-1add51e07afd11edbd8f0b2a035c989d/2/nb-3-big-,0,5][1374830355]^M 34424 [junit-timeout] COMMIT:[,0,0][2613697770]^M 34425 [junit-timeout] ***This record should have been the last one in all replicas^M 34426 [junit-timeout] COMMIT:[,0,0][2613697770]^M 34427 
[junit-timeout] ***This record should have been the last one in all replicas^M 34428 [junit-timeout] ^M {code} Not clear to me how the change in this patch could have contributed to that; might be worth multiplexing this unit test on trunk w/out the change to see if there's a pre-existing race / issue in there. The repeated leaks of the TransactionTidier from LogTransaction throughout the repeated runs also look a smidge suspicious: {code:java} 34435 [junit-timeout] ERROR [Reference-Reaper] 2022-12-13 15:45:01,466 Ref.java:237 - LEAK DETECTED: a reference (class org.apache.cassandra.db.lifecycle.LogTransaction$TransactionTidier@1776643200:[nb_txn_compaction_198e1590-7afd-11ed-bd8f-0b2a035c989d.log in /tmp/cassandra/build/test/cassandra/data/TransactionLog {code} > Handle sstable metadata stats file getting a new mtime after compaction has > finished
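The skipped-timestamp check under discussion can be illustrated with a small, self-contained sketch (all names here are hypothetical and simplified; this is not the actual LogFile/LogTransaction code): a verifier that compares recorded component mtimes against current ones and, when asked, ignores the stats component whose mtime a racing compaction notification may bump.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, simplified verifier; not the real LogFile/LogTransaction code.
class TxnVerifier
{
    // component-name -> mtime recorded when the transaction log was written
    private final Map<String, Long> recorded;

    TxnVerifier(Map<String, Long> recorded) { this.recorded = recorded; }

    boolean verify(Map<String, Long> current, boolean skipStatsTimestamp)
    {
        for (Map.Entry<String, Long> e : recorded.entrySet())
        {
            // the stats file may legitimately get a newer mtime from a racing
            // compaction notification, so optionally exclude it from the check
            if (skipStatsTimestamp && e.getKey().endsWith("Statistics.db"))
                continue;
            if (!e.getValue().equals(current.get(e.getKey())))
                return false;
        }
        return true;
    }

    public static void main(String[] args)
    {
        Map<String, Long> recorded = new HashMap<>();
        recorded.put("nb-1-big-Data.db", 1000L);
        recorded.put("nb-1-big-Statistics.db", 1000L);

        Map<String, Long> current = new HashMap<>(recorded);
        current.put("nb-1-big-Statistics.db", 2000L); // stats mtime bumped later

        TxnVerifier v = new TxnVerifier(recorded);
        System.out.println(v.verify(current, false)); // false: strict mtime check fails
        System.out.println(v.verify(current, true));  // true: stats mtime ignored
    }
}
```

This is only meant to show why a strict mtime comparison fails after an innocuous stats-file touch, while the relaxed check passes.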
[jira] [Updated] (CASSANDRA-18140) getsstables --show-levels JMX serialization error
[ https://issues.apache.org/jira/browse/CASSANDRA-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan West updated CASSANDRA-18140: Bug Category: Parent values: Code(13163)Level 1 values: Bug - Unclear Impact(13164) Complexity: Normal Component/s: Tool/nodetool Discovered By: Adhoc Test Severity: Normal Status: Open (was: Triage Needed) > getsstables --show-levels JMX serialization error > - > > Key: CASSANDRA-18140 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18140 > Project: Cassandra > Issue Type: Bug > Components: Tool/nodetool >Reporter: Jordan West >Assignee: Jordan West >Priority: Normal > > While the interface is compliant and tested by JMXStandardsTest the > implementation is not actually serializable: > {{java.io.NotSerializableException: > com.google.common.collect.AbstractMapBasedMultimap$AsMap}} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Created] (CASSANDRA-18140) getsstables --show-levels JMX serialization error
Jordan West created CASSANDRA-18140: --- Summary: getsstables --show-levels JMX serialization error Key: CASSANDRA-18140 URL: https://issues.apache.org/jira/browse/CASSANDRA-18140 Project: Cassandra Issue Type: Bug Reporter: Jordan West Assignee: Jordan West While the interface is compliant and tested by JMXStandardsTest the implementation is not actually serializable: {{java.io.NotSerializableException: com.google.common.collect.AbstractMapBasedMultimap$AsMap}}
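The exception comes from returning a Guava map view over JMX: {{Multimap.asMap()}} yields an {{AbstractMapBasedMultimap$AsMap}}, which does not implement {{Serializable}} even when its contents do. Below is a minimal, stdlib-only sketch of the failure mode and the usual fix (defensively copying the view into a plain {{HashMap}} before returning it); the view class here is a stand-in for Guava's, not the real one.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class JmxSerializationDemo
{
    // A map view that is NOT Serializable, standing in for Guava's
    // AbstractMapBasedMultimap$AsMap returned by Multimap.asMap().
    static class NonSerializableView extends AbstractMap<String, List<String>>
    {
        private final Map<String, List<String>> backing;
        NonSerializableView(Map<String, List<String>> backing) { this.backing = backing; }
        @Override public Set<Map.Entry<String, List<String>>> entrySet() { return backing.entrySet(); }
    }

    static byte[] serialize(Object o) throws IOException
    {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) { oos.writeObject(o); }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException
    {
        Map<String, List<String>> backing = new HashMap<>();
        backing.put("sstable-1", List.of("L0"));

        Map<String, List<String>> view = new NonSerializableView(backing);
        try
        {
            serialize(view);
            System.out.println("view serialized (unexpected)");
        }
        catch (NotSerializableException e)
        {
            System.out.println("view failed to serialize: " + e.getMessage());
        }

        // The usual fix: copy the view into plain serializable collections
        // before handing it to the JMX layer.
        Map<String, List<String>> copy = new HashMap<>();
        backing.forEach((k, v) -> copy.put(k, new ArrayList<>(v)));
        serialize(copy); // succeeds: HashMap and ArrayList are Serializable
        System.out.println("copy serialized OK");
    }
}
```

The interface can be fully compliant (declared return type {{Map}}) while the concrete runtime class still breaks JMX, which is why a compile-time check like JMXStandardsTest cannot catch this.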
[jira] [Commented] (CASSANDRA-14361) Allow SimpleSeedProvider to resolve multiple IPs per DNS name
[ https://issues.apache.org/jira/browse/CASSANDRA-14361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656239#comment-17656239 ] Stefan Miklosovic commented on CASSANDRA-14361: --- I added the test too. I added mockito-inline dependency to be able to mock static methods. build: https://ci-cassandra.apache.org/view/patches/job/Cassandra-devbranch/2168/ > Allow SimpleSeedProvider to resolve multiple IPs per DNS name > - > > Key: CASSANDRA-14361 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14361 > Project: Cassandra > Issue Type: Improvement > Components: Local/Config >Reporter: Ben Bromhead >Assignee: Stefan Miklosovic >Priority: Low > Fix For: 4.x > > Time Spent: 50m > Remaining Estimate: 0h > > Currently SimpleSeedProvider can accept a comma separated string of IPs or > hostnames as the set of Cassandra seeds. hostnames are resolved via > InetAddress.getByName, which will only return the first IP associated with an > A, or CNAME record. > By changing to InetAddress.getAllByName, existing behavior is preserved, but > now Cassandra can discover multiple IP address per record, allowing seed > discovery by DNS to be a little easier. > Some examples of improved workflows with this change include: > * specify the DNS name of a headless service in Kubernetes which will > resolve to all IP addresses of pods within that service. > * seed discovery for multi-region clusters via AWS route53, AzureDNS etc > * Other common DNS service discovery mechanisms. > The only behavior this is likely to impact would be where users are relying > on the fact that getByName only returns a single IP address. > I can't imagine any scenario where that is a sane choice. Even when that > choice has been made, it only impacts the first startup of Cassandra and > would not be on any critical path. 
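The behavioural difference described in the ticket is easy to demonstrate with the JDK alone (using {{localhost}} here as a stand-in for a real seed DNS name):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class SeedResolutionDemo
{
    public static void main(String[] args) throws UnknownHostException
    {
        // getByName returns only the first address for a record...
        InetAddress first = InetAddress.getByName("localhost");
        System.out.println("getByName    -> " + first.getHostAddress());

        // ...while getAllByName returns every address, so a single DNS name
        // (e.g. a Kubernetes headless service) can expand to many seeds.
        InetAddress[] all = InetAddress.getAllByName("localhost");
        for (InetAddress a : all)
            System.out.println("getAllByName -> " + a.getHostAddress());
    }
}
```

Since {{getByName}} is documented to return the first element of {{getAllByName}}, any name that resolves to exactly one address behaves identically under both calls, which is why existing behavior is preserved.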
[jira] [Commented] (CASSANDRA-18136) Upgrade maven-shade-plugin to fix shaded dtest JAR build
[ https://issues.apache.org/jira/browse/CASSANDRA-18136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656216#comment-17656216 ] Caleb Rackliffe commented on CASSANDRA-18136: - +1 > Upgrade maven-shade-plugin to fix shaded dtest JAR build > > > Key: CASSANDRA-18136 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18136 > Project: Cassandra > Issue Type: Bug > Components: Build, Packaging >Reporter: Abe Ratnofsky >Priority: Normal > > Could not build shaded dtest JAR with ./build-shaded-dtest-jar.sh due to: > {code:java} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-shade-plugin:3.2.1:shade (default) on project > cassandra-dtest-shaded: Error creating shaded jar: Problem shading JAR > ~/Repos/apache/cassandra/target/cassandra-dtest-shaded-4.0.1-SNAPSHOT.jar > entry net/openhft/chronicle/wire/YamlWire$TextValueIn.class: > org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class > net/openhft/chronicle/wire/YamlWire$TextValueIn.class: 65536 -> [Help 1] > {code} > > Tried on both Java 8 and Java 11, included ant clean / realclean / unlinking > the entire ~/.m2/repository. > > Fixed by upgrading maven-shade-plugin in relocate-dependencies.pom: > {code:java} > org.apache.maven.plugins > maven-shade-plugin > - 3.2.1 > + 3.4.1{code}
[jira] [Updated] (CASSANDRA-18136) Upgrade maven-shade-plugin to fix shaded dtest JAR build
[ https://issues.apache.org/jira/browse/CASSANDRA-18136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Caleb Rackliffe updated CASSANDRA-18136: Reviewers: Caleb Rackliffe (was: Caleb Rackliffe) Status: Review In Progress (was: Patch Available) > Upgrade maven-shade-plugin to fix shaded dtest JAR build > > > Key: CASSANDRA-18136 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18136 > Project: Cassandra > Issue Type: Bug > Components: Build, Packaging >Reporter: Abe Ratnofsky >Priority: Normal > > Could not build shaded dtest JAR with ./build-shaded-dtest-jar.sh due to: > {code:java} > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-shade-plugin:3.2.1:shade (default) on project > cassandra-dtest-shaded: Error creating shaded jar: Problem shading JAR > ~/Repos/apache/cassandra/target/cassandra-dtest-shaded-4.0.1-SNAPSHOT.jar > entry net/openhft/chronicle/wire/YamlWire$TextValueIn.class: > org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class > net/openhft/chronicle/wire/YamlWire$TextValueIn.class: 65536 -> [Help 1] > {code} > > Tried on both Java 8 and Java 11, included ant clean / realclean / unlinking > the entire ~/.m2/repository. > > Fixed by upgrading maven-shade-plugin in relocate-dependencies.pom: > {code:java} > org.apache.maven.plugins > maven-shade-plugin > - 3.2.1 > + 3.4.1{code}
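For reference, the upgraded plugin declaration in relocate-dependencies.pom would look roughly like the following; this is reconstructed from the diff quoted above (the XML tags were stripped by the mail rendering), with surrounding elements omitted:

```xml
<!-- relocate-dependencies.pom: bump the shade plugin past the ASM limit -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.4.1</version> <!-- was 3.2.1 -->
</plugin>
```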
[jira] [Commented] (CASSANDRA-18139) Revert changes to units output in FileUtils#stringifyFileSize post CASSANDRA-15234
[ https://issues.apache.org/jira/browse/CASSANDRA-18139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656202#comment-17656202 ] Alex Petrov commented on CASSANDRA-18139: - Not sure if it's a good argument, but it could be that there's much more tooling that predates 4.1.0. I do agree that hiding it behind a flag is potentially a way to go. I'm mostly concerned that people will be hitting this as they upgrade, which may slow down adoption, but on the other hand if we hide it behind a flag there's a chance no-one will actually use the new feature. > Revert changes to units output in FileUtils#stringifyFileSize post > CASSANDRA-15234 > -- > > Key: CASSANDRA-18139 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18139 > Project: Cassandra > Issue Type: Bug >Reporter: Ekaterina Dimitrova >Priority: Normal > > As discussed in CASSANDRA-15234, FileUtils#stringifyFileSize is used in > nodetool output which can break people parsing the nodetool output
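The flag-gated formatting idea being discussed can be sketched as follows. Note that the exact legacy and new suffix strings here are assumptions for illustration, not the real {{FileUtils#stringifyFileSize}} output in any Cassandra version:

```java
import java.util.Locale;

public class FileSizeDemo
{
    // Suffix family is an illustrative assumption, not the actual
    // FileUtils#stringifyFileSize strings.
    private static final String[] IEC = { "B", "KiB", "MiB", "GiB", "TiB" };

    static String stringify(long bytes, boolean iecUnits)
    {
        double v = bytes;
        int i = 0;
        while (v >= 1024 && i < IEC.length - 1) { v /= 1024; i++; }
        // a hypothetical flag choosing between "KB"-style and "KiB"-style suffixes,
        // so existing nodetool-output parsers keep working unless they opt in
        String unit = iecUnits ? IEC[i] : IEC[i].replace("iB", "B");
        return String.format(Locale.ROOT, "%.2f %s", v, unit);
    }

    public static void main(String[] args)
    {
        System.out.println(stringify(1536, false)); // 1.50 KB
        System.out.println(stringify(1536, true));  // 1.50 KiB
    }
}
```

The point of the sketch is the compatibility trade-off raised in the thread: the default stays parseable by old tooling, and the new format only appears behind an explicit option.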
[jira] [Commented] (CASSANDRA-18121) Dtests need python 3.11 support
[ https://issues.apache.org/jira/browse/CASSANDRA-18121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656197#comment-17656197 ] Brandon Williams commented on CASSANDRA-18121: -- [Here|https://app.circleci.com/pipelines/github/driftx/cassandra/743/workflows/0ba72e6e-b895-4c53-8204-d3a5a5bdf4f5] is a run w/high resources that passes. This at least somewhat explains why the earlier availability errors on medium resources looked like environmental problems: there simply weren't enough resources, with no obvious sign besides the errors themselves. I am still not sure why this is the case, since these tests aren't as resource hungry as many other dtests, and it doesn't seem like python would be much of a resource strain. The failing rebuild test is also strange; clearly that is a server failure, but I don't see it in butler. > Dtests need python 3.11 support > --- > > Key: CASSANDRA-18121 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18121 > Project: Cassandra > Issue Type: Improvement > Components: Test/dtest/python >Reporter: Brandon Williams >Assignee: Brandon Williams >Priority: Normal > > In order to have cqlsh support 3.11 the dtests also need to support 3.11 so > the cqlsh dtests can be run.
[jira] [Updated] (CASSANDRA-18137) Repeatable ci-cassandra.a.o
[ https://issues.apache.org/jira/browse/CASSANDRA-18137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Semb Wever updated CASSANDRA-18137: --- Description: Goals - Reproducible reference ASF CI environment so contributors can clone it. - An accepted “test result output” format that will certify a commit regardless of CI env. - Turnaround times as fast as circleci (cloned environment scales to capacity). - Intuitive CI implementation accessible to new contributors. Existing Problems - Many unknown flakies due to infrequent failure rates and limited test history, - time-consuming to identify flakies as infra-related, test-related, code-related, - ci-cassandra.a.o is hard to debug (donated heterogenous servers around the world, ASF controlled with limited physical access, two executors per agent [noisy neighbour]), - slow turnaround times compared to circleci, also very variable times as the fixed resource pool running both post- and pre-commit CI becomes easily saturated, - difficult to pre-commit test jenkins and cassandra-build changes, - CI development effort is split between ci-cassandra and circleci, despite ci-cassandra being our canonical and non-commercial CI, - lacking parity of what is tested between ci-cassandra and circleci - circleci is restricted to those with access to premium commercial circleci, creating classes of engineers in the community and an exclusive OSS culture, - cassandra-builds as a separate repo (without release branches matching in-tree) adds complexity to changing matrix values (jdks, pythons, dist) - mixture of jenkins dsl groovy, declarative and scripting pipeline. - different pre-commit and post-commit jenkins pipelines are used. Additional Goals - Identify all remaining test flakies. - Thin CI implementation that builds on a common set of CI-agnostic build and test scripts. Contributors are free to add/maintain other CI solutions while remaining aligned on what and how we test. 
- Extendable by downstream codebases (designed for re-use and extension). Proposal - Provide a jenkins k8s operator based script that with one command-line spawns a ci-cassandra.a.o clone on the k8s cluster in context, runs the Jenkinsfile pipeline, saves the test result, and tears down the ci-cassandra.a.o clone. [turnkey solution] - Parameters make spawn and tear-down optional, so ci-cassandra.a.o clones can be re-usable. - Bring build and test scripts (including their docker images) from cassandra-builds to in-tree - Provide a declarative jenkins pipeline that maps stages to CI-agnostic build and test scripts. - CI-agnostic build and test scripts can be run with docker, and without any CI, on any machine. - Branch specific testing context is defined outside of the CI code. Unknowns - with the known pipeline steps, the matrixes we desire (jdk, python, dist, arch), and the parallelisation possible, what is the fastest turnaround time we can expect, - what parameterisation to the script is required for typical developer testing pre-commit, - what is the cost of a single pipeline run, what is the expected cost of a year's post-commit CI, - what is the size of the test result and how can it be saved and shared, - what is the ideal stable agent resource specifications, can we use heterogenous environments if different test types have different minimum requirements, - is multiplexing testing in jenkins a requirement to this epic (see CASSANDRA-17932) - how does the community provide CI to contributors that cannot afford the k8s cluster costs Non Goals - deciding what to do with ci-cassandra.a.o if a ci-cassandra.a.o clone is donated and provides a more stable post-commit environment than the original ci-cassandra.a.o - any work/improvements on circleci (e.g. 
CASSANDRA-18001) - discussing or changing our branching and merging strategies - introduction of a develop or staging branch for final pass pre-commit CI optimisation - jira integration (automatic bot comments of test results) (see CASSANDRA-17277) - introduction/support of additional tests: code coverage, jmh benchmarking, or larger performance testing; or matrix axis (jdks, pythons, dists, etc) or other checks. (e.g. CASSANDRA-18077, CASSANDRA-18072) Timing and Prioritisation - Both 4.0 and 4.1 major releases have been delayed (and significant amounts of engineering time spent) on addressing an unknown number of flakies (further exacerbated by unknown CI infra problems). - Test failures are still being caught months after commit and merge. - 5.0 promises a significant increase in large contributions, increasing the risk and cost of failing to do stable trunk development. - Downstream users are requesting closer alignment to upstream, and convergence to upstream's CI approach (including extending it for additional QA). History and background context can be read in the previous 'Cassandra CI Status' dev@ threads:
[jira] [Comment Edited] (CASSANDRA-17507) IllegalArgumentException in query code path during 3.11.12 => 4.0.3 rolling upgrade
[ https://issues.apache.org/jira/browse/CASSANDRA-17507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656119#comment-17656119 ] Andres de la Peña edited comment on CASSANDRA-17507 at 1/9/23 3:49 PM: --- [This JVM dtest|https://github.com/apache/cassandra/compare/cassandra-4.0...adelapena:cassandra:17507-4.0] reproduces the bug. It tests a 3.x -> 4.0 rolling upgrade scenario with a table with {{COMPACT STORAGE}} and a query over it that uses paging. The bug only seems to manifest itself when the driver uses native protocol v3, instead of the default (v5 for 4.0 and v4 for 3.11). The test results can be found [here|https://app.circleci.com/pipelines/github/adelapena/cassandra/2536/workflows/5791569d-8ea1-42b5-bacd-bd8716afaee8/jobs/25163]. The artifacts stored for each test contain an identical stack trace, for example [this one|https://output.circle-artifacts.com/output/job/f4cbecbc-92dd-49c8-a75d-a5a7b53bcd21/artifacts/0/stdout/fails/1/org.apache.cassandra.distributed.upgrade.CompactStoragePagingTest%23testPagingWithCompactStorageAndProtocolVersion.txt] If this is actually caused by the combination of {{COMPACT STORAGE}}, paging and an old protocol version, probably the easiest workaround until we get a fix is setting the driver to use a more recent version of the native transport protocol. was (Author: adelapena): [This JVM dtest|https://github.com/apache/cassandra/compare/cassandra-4.0...adelapena:cassandra:17507-4.0] reproduces the bug. It testes a 3.x -> 4.0 rolling upgrade scenario with a table with {{COMPACT STORAGE}} and a query over that uses paging. The bug only seems to manifest itself when the driver uses native protocol v3, instead on the default (v5 for 4.0 and v4 for 3.11). The tests results can be found [here|https://app.circleci.com/pipelines/github/adelapena/cassandra/2536/workflows/5791569d-8ea1-42b5-bacd-bd8716afaee8/jobs/25163]. 
The artifacts stored for each test contain an identical stacktrace, for example [this one|https://output.circle-artifacts.com/output/job/f4cbecbc-92dd-49c8-a75d-a5a7b53bcd21/artifacts/0/stdout/fails/1/org.apache.cassandra.distributed.upgrade.CompactStoragePagingTest%23testPagingWithCompactStorageAndProtocolVersion.txt] If this is actually caused by the combination of {{{}COMPACT STORAGE{}}}, paging and and old protocol version, probably the easiest workaround until we get a fix is setting the driver to use a more recent version of the native transport protocol. > IllegalArgumentException in query code path during 3.11.12 => 4.0.3 rolling > upgrade > --- > > Key: CASSANDRA-17507 > URL: https://issues.apache.org/jira/browse/CASSANDRA-17507 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Coordination >Reporter: Thomas Steinmaurer >Priority: Normal > Fix For: 4.0.x > > > In a 6 node 3.11.12 test cluster - freshly set up, thus no legacy SSTables > etc. - with ~ 1TB SSTables on disk per node, I have been running a rolling > upgrade to 4.0.3. On upgraded 4.0.3 nodes I then have seen the following > exception regularly, which disappeared once all 6 nodes have been on 4.0.3. > Is this known? Can this be ignored? As said, just a test drive, but not sure > if we want to have that in production, especially with a larger number of > nodes, where it could take some time, until all are upgraded. Thanks! 
> {code} > ERROR [Native-Transport-Requests-8] 2022-03-30 11:30:24,057 > ErrorMessage.java:457 - Unexpected exception during request > java.lang.IllegalArgumentException: newLimit > capacity: (290 > 15) > at java.base/java.nio.Buffer.createLimitException(Buffer.java:372) > at java.base/java.nio.Buffer.limit(Buffer.java:346) > at java.base/java.nio.ByteBuffer.limit(ByteBuffer.java:1107) > at java.base/java.nio.ByteBuffer.limit(ByteBuffer.java:262) > at > org.apache.cassandra.db.marshal.ByteBufferAccessor.slice(ByteBufferAccessor.java:107) > at > org.apache.cassandra.db.marshal.ByteBufferAccessor.slice(ByteBufferAccessor.java:39) > at > org.apache.cassandra.db.marshal.ValueAccessor.sliceWithShortLength(ValueAccessor.java:225) > at > org.apache.cassandra.db.marshal.CompositeType.splitName(CompositeType.java:222) > at > org.apache.cassandra.service.pager.PagingState$RowMark.decodeClustering(PagingState.java:434) > at > org.apache.cassandra.service.pager.PagingState$RowMark.clustering(PagingState.java:388) > at > org.apache.cassandra.service.pager.SinglePartitionPager.nextPageReadQuery(SinglePartitionPager.java:88) > at > org.apache.cassandra.service.pager.SinglePartitionPager.nextPageReadQuery(SinglePartitionPager.java:32) >
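The {{newLimit > capacity: (290 > 15)}} failure at the top of the trace is the generic {{ByteBuffer.limit}} guard, reproducible in isolation; here the 15-byte buffer simply stands in for the short paging-state buffer the 4.0 node received:

```java
import java.nio.ByteBuffer;

public class LimitDemo
{
    public static void main(String[] args)
    {
        ByteBuffer buf = ByteBuffer.allocate(15); // capacity 15, as in the trace
        try
        {
            // mirrors "newLimit > capacity: (290 > 15)" from the stack trace:
            // CompositeType.splitName asked for a slice past the buffer's end
            buf.limit(290);
        }
        catch (IllegalArgumentException e)
        {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

As the comment above suggests, pinning the client to a newer native protocol version is the likely workaround; the exact option name depends on the driver or tool in use and is not confirmed here.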
[jira] [Commented] (CASSANDRA-18139) Revert changes to units output in FileUtils#stringifyFileSize post CASSANDRA-15234
[ https://issues.apache.org/jira/browse/CASSANDRA-18139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656162#comment-17656162 ] Ekaterina Dimitrova commented on CASSANDRA-18139: - Maybe a flag and an option to print it in the new format as well? It would still be a change, but people would at least have some choice. > Revert changes to units output in FileUtils#stringifyFileSize post > CASSANDRA-15234 > -- > > Key: CASSANDRA-18139 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18139 > Project: Cassandra > Issue Type: Bug >Reporter: Ekaterina Dimitrova >Priority: Normal > > As discussed in CASSANDRA-15234, FileUtils#stringifyFileSize is used in > nodetool output which can break people parsing the nodetool output
[jira] [Commented] (CASSANDRA-18139) Revert changes to units output in FileUtils#stringifyFileSize post CASSANDRA-15234
[ https://issues.apache.org/jira/browse/CASSANDRA-18139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656160#comment-17656160 ] Ekaterina Dimitrova commented on CASSANDRA-18139: - CC [~ifesdjeen] and [~dcapwell] This seems like a regression, but it made me think... if someone started using it in 4.1.0, reverting will break them, so I am not sure what is the correct way to handle it... It seems to me that reverting or not reverting will be a regression for someone either way, but at the same time 4.1.0 was only recently released, so I expect fewer people may be affected. I have to think about it. > Revert changes to units output in FileUtils#stringifyFileSize post > CASSANDRA-15234 > -- > > Key: CASSANDRA-18139 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18139 > Project: Cassandra > Issue Type: Bug >Reporter: Ekaterina Dimitrova >Priority: Normal > > As discussed in CASSANDRA-15234, FileUtils#stringifyFileSize is used in > nodetool output which can break people parsing the nodetool output
[jira] [Commented] (CASSANDRA-15234) Standardise config and JVM parameters
[ https://issues.apache.org/jira/browse/CASSANDRA-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656157#comment-17656157 ] Ekaterina Dimitrova commented on CASSANDRA-15234: - I just opened CASSANDRA-18139. We can move any discussions there. > Standardise config and JVM parameters > - > > Key: CASSANDRA-15234 > URL: https://issues.apache.org/jira/browse/CASSANDRA-15234 > Project: Cassandra > Issue Type: Bug > Components: Local/Config >Reporter: Benedict Elliott Smith >Assignee: Ekaterina Dimitrova >Priority: Normal > Fix For: 4.1-alpha1, 4.1 > > Attachments: CASSANDRA-15234-3-DTests-JAVA8.txt > > > We have a bunch of inconsistent names and config patterns in the codebase, > both from the yams and JVM properties. It would be nice to standardise the > naming (such as otc_ vs internode_) as well as the provision of values with > units - while maintaining perpetual backwards compatibility with the old > parameter names, of course. > For temporal units, I would propose parsing strings with suffixes of: > {{code}} > u|micros(econds?)? > ms|millis(econds?)? > s(econds?)? > m(inutes?)? > h(ours?)? > d(ays?)? > mo(nths?)? > {{code}} > For rate units, I would propose parsing any of the standard {{B/s, KiB/s, > MiB/s, GiB/s, TiB/s}}. > Perhaps for avoiding ambiguity we could not accept bauds {{bs, Mbps}} or > powers of 1000 such as {{KB/s}}, given these are regularly used for either > their old or new definition e.g. {{KiB/s}}, or we could support them and > simply log the value in bytes/s.
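The temporal-suffix grammar proposed in the ticket could be parsed along these lines. This is a simplified illustration (short suffixes only, converting to milliseconds), not Cassandra's eventual DurationSpec implementation:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DurationParseDemo
{
    // Sketch of the proposed suffix grammar, reduced to the short forms;
    // the ticket also allows long forms like "seconds" and "minutes".
    private static final Pattern DURATION = Pattern.compile("(\\d+)\\s*(us|ms|s|m|h|d)");

    static long toMillis(String spec)
    {
        Matcher m = DURATION.matcher(spec.trim());
        if (!m.matches())
            throw new IllegalArgumentException("bad duration: " + spec);
        long v = Long.parseLong(m.group(1));
        switch (m.group(2))
        {
            case "us": return v / 1000;       // microseconds, truncated
            case "ms": return v;
            case "s":  return v * 1000;
            case "m":  return v * 60_000;
            case "h":  return v * 3_600_000;
            case "d":  return v * 86_400_000;
            default:   throw new AssertionError();
        }
    }

    public static void main(String[] args)
    {
        System.out.println(toMillis("10s"));   // 10000
        System.out.println(toMillis("2m"));    // 120000
        System.out.println(toMillis("500ms")); // 500
    }
}
```

Ordering the alternation so that two-letter suffixes ({{us}}, {{ms}}) come before {{s}} and {{m}} keeps the regex unambiguous, which matters once both short and long suffix forms are accepted.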
[jira] [Created] (CASSANDRA-18139) Revert changes to units output in FileUtils#stringifyFileSize post CASSANDRA-15234
Ekaterina Dimitrova created CASSANDRA-18139: --- Summary: Revert changes to units output in FileUtils#stringifyFileSize post CASSANDRA-15234 Key: CASSANDRA-18139 URL: https://issues.apache.org/jira/browse/CASSANDRA-18139 Project: Cassandra Issue Type: Bug Reporter: Ekaterina Dimitrova As discussed in CASSANDRA-15234, FileUtils#stringifyFileSize is used in nodetool output which can break people parsing the nodetool output
[jira] [Comment Edited] (CASSANDRA-15234) Standardise config and JVM parameters
[ https://issues.apache.org/jira/browse/CASSANDRA-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656154#comment-17656154 ] Ekaterina Dimitrova edited comment on CASSANDRA-15234 at 1/9/23 3:31 PM: - Hi [~ifesdjeen], it seems `FileUtils#stringifyFileSize` is used for nodetool output which I missed when we were revising to revert changes after the discussions raised around CASSANDRA-17863. So yes, I agree with you. I will open a ticket to revert the change later today. Thank you for raising the issue. was (Author: e.dimitrova): Hi [~ifesdjeen], it seems `FileUtils#stringifyFileSize` is used by netstats which I missed when we were revising to revert changes after the discussions raised around CASSANDRA-17863. So yes, I agree with you. I will open a ticket to revert the change later today. Thank you for raising the issue. > Standardise config and JVM parameters > - > > Key: CASSANDRA-15234 > URL: https://issues.apache.org/jira/browse/CASSANDRA-15234 > Project: Cassandra > Issue Type: Bug > Components: Local/Config >Reporter: Benedict Elliott Smith >Assignee: Ekaterina Dimitrova >Priority: Normal > Fix For: 4.1-alpha1, 4.1 > > Attachments: CASSANDRA-15234-3-DTests-JAVA8.txt > > > We have a bunch of inconsistent names and config patterns in the codebase, > both from the yams and JVM properties. It would be nice to standardise the > naming (such as otc_ vs internode_) as well as the provision of values with > units - while maintaining perpetual backwards compatibility with the old > parameter names, of course. > For temporal units, I would propose parsing strings with suffixes of: > {{code}} > u|micros(econds?)? > ms|millis(econds?)? > s(econds?)? > m(inutes?)? > h(ours?)? > d(ays?)? > mo(nths?)? > {{code}} > For rate units, I would propose parsing any of the standard {{B/s, KiB/s, > MiB/s, GiB/s, TiB/s}}. 
> Perhaps for avoiding ambiguity we could not accept bauds {{bs, Mbps}} or > powers of 1000 such as {{KB/s}}, given these are regularly used for either > their old or new definition e.g. {{KiB/s}}, or we could support them and > simply log the value in bytes/s.
[jira] [Comment Edited] (CASSANDRA-15234) Standardise config and JVM parameters
[ https://issues.apache.org/jira/browse/CASSANDRA-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656154#comment-17656154 ] Ekaterina Dimitrova edited comment on CASSANDRA-15234 at 1/9/23 3:31 PM: - Hi [~ifesdjeen], it seems `FileUtils#stringifyFileSize` is used for nodetool output which I missed when we were revising to revert changes after the discussions raised around CASSANDRA-17683. So yes, I agree with you. I will open a ticket to revert the change later today. Thank you for raising the issue. was (Author: e.dimitrova): Hi [~ifesdjeen], it seems `FileUtils#stringifyFileSize` is used for nodetool output which I missed when we were revising to revert changes after the discussions raised around CASSANDRA-17863. So yes, I agree with you. I will open a ticket to revert the change later today. Thank you for raising the issue. > Standardise config and JVM parameters > - > > Key: CASSANDRA-15234 > URL: https://issues.apache.org/jira/browse/CASSANDRA-15234 > Project: Cassandra > Issue Type: Bug > Components: Local/Config >Reporter: Benedict Elliott Smith >Assignee: Ekaterina Dimitrova >Priority: Normal > Fix For: 4.1-alpha1, 4.1 > > Attachments: CASSANDRA-15234-3-DTests-JAVA8.txt > > > We have a bunch of inconsistent names and config patterns in the codebase, > both from the yams and JVM properties. It would be nice to standardise the > naming (such as otc_ vs internode_) as well as the provision of values with > units - while maintaining perpetual backwards compatibility with the old > parameter names, of course. > For temporal units, I would propose parsing strings with suffixes of: > {{code}} > u|micros(econds?)? > ms|millis(econds?)? > s(econds?)? > m(inutes?)? > h(ours?)? > d(ays?)? > mo(nths?)? > {{code}} > For rate units, I would propose parsing any of the standard {{B/s, KiB/s, > MiB/s, GiB/s, TiB/s}}. 
> Perhaps for avoiding ambiguity we could not accept bauds {{bs, Mbps}} or > powers of 1000 such as {{KB/s}}, given these are regularly used for either > their old or new definition e.g. {{KiB/s}}, or we could support them and > simply log the value in bytes/s.
[jira] [Commented] (CASSANDRA-15234) Standardise config and JVM parameters
[ https://issues.apache.org/jira/browse/CASSANDRA-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656154#comment-17656154 ] Ekaterina Dimitrova commented on CASSANDRA-15234: - Hi [~ifesdjeen], it seems `FileUtils#stringifyFileSize` is used by netstats which I missed when we were revising to revert changes after the discussions raised around CASSANDRA-17863. So yes, I agree with you. I will open a ticket to revert the change later today. Thank you for raising the issue. > Standardise config and JVM parameters > - > > Key: CASSANDRA-15234 > URL: https://issues.apache.org/jira/browse/CASSANDRA-15234 > Project: Cassandra > Issue Type: Bug > Components: Local/Config >Reporter: Benedict Elliott Smith >Assignee: Ekaterina Dimitrova >Priority: Normal > Fix For: 4.1-alpha1, 4.1 > > Attachments: CASSANDRA-15234-3-DTests-JAVA8.txt > > > We have a bunch of inconsistent names and config patterns in the codebase, > both from the yams and JVM properties. It would be nice to standardise the > naming (such as otc_ vs internode_) as well as the provision of values with > units - while maintaining perpetual backwards compatibility with the old > parameter names, of course. > For temporal units, I would propose parsing strings with suffixes of: > {{code}} > u|micros(econds?)? > ms|millis(econds?)? > s(econds?)? > m(inutes?)? > h(ours?)? > d(ays?)? > mo(nths?)? > {{code}} > For rate units, I would propose parsing any of the standard {{B/s, KiB/s, > MiB/s, GiB/s, TiB/s}}. > Perhaps for avoiding ambiguity we could not accept bauds {{bs, Mbps}} or > powers of 1000 such as {{KB/s}}, given these are regularly used for either > their old or new definition e.g. {{KiB/s}}, or we could support them and > simply log the value in bytes/s.
[jira] [Comment Edited] (CASSANDRA-17507) IllegalArgumentException in query code path during 3.11.12 => 4.0.3 rolling upgrade
[ https://issues.apache.org/jira/browse/CASSANDRA-17507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656119#comment-17656119 ] Andres de la Peña edited comment on CASSANDRA-17507 at 1/9/23 3:17 PM: --- [This JVM dtest|https://github.com/apache/cassandra/compare/cassandra-4.0...adelapena:cassandra:17507-4.0] reproduces the bug. It tests a 3.x -> 4.0 rolling upgrade scenario with a table with {{COMPACT STORAGE}} and a query over it that uses paging. The bug only seems to manifest itself when the driver uses native protocol v3 instead of the default (v5 for 4.0 and v4 for 3.11). The test results can be found [here|https://app.circleci.com/pipelines/github/adelapena/cassandra/2536/workflows/5791569d-8ea1-42b5-bacd-bd8716afaee8/jobs/25163]. The artifacts stored for each test contain an identical stacktrace, for example [this one|https://output.circle-artifacts.com/output/job/f4cbecbc-92dd-49c8-a75d-a5a7b53bcd21/artifacts/0/stdout/fails/1/org.apache.cassandra.distributed.upgrade.CompactStoragePagingTest%23testPagingWithCompactStorageAndProtocolVersion.txt] If this is actually caused by the combination of {{{}COMPACT STORAGE{}}}, paging and an old protocol version, probably the easiest workaround until we get a fix is setting the driver to use a more recent version of the native transport protocol. was (Author: adelapena): [This JVM dtest|https://github.com/apache/cassandra/compare/trunk...adelapena:cassandra:17507-4.0?expand=1] reproduces the bug. It tests a 3.x -> 4.0 rolling upgrade scenario with a table with {{COMPACT STORAGE}} and a query over it that uses paging. The bug only seems to manifest itself when the driver uses native protocol v3 instead of the default (v5 for 4.0 and v4 for 3.11). The test results can be found [here|https://app.circleci.com/pipelines/github/adelapena/cassandra/2536/workflows/5791569d-8ea1-42b5-bacd-bd8716afaee8/jobs/25163]. 
The artifacts stored for each test contain an identical stacktrace, for example [this one|https://output.circle-artifacts.com/output/job/f4cbecbc-92dd-49c8-a75d-a5a7b53bcd21/artifacts/0/stdout/fails/1/org.apache.cassandra.distributed.upgrade.CompactStoragePagingTest%23testPagingWithCompactStorageAndProtocolVersion.txt] If this is actually caused by the combination of {{{}COMPACT STORAGE{}}}, paging and an old protocol version, probably the easiest workaround until we get a fix is setting the driver to use a more recent version of the native transport protocol. > IllegalArgumentException in query code path during 3.11.12 => 4.0.3 rolling > upgrade > --- > > Key: CASSANDRA-17507 > URL: https://issues.apache.org/jira/browse/CASSANDRA-17507 > Project: Cassandra > Issue Type: Bug > Components: Consistency/Coordination >Reporter: Thomas Steinmaurer >Priority: Normal > Fix For: 4.0.x > > > In a 6 node 3.11.12 test cluster - freshly set up, thus no legacy SSTables > etc. - with ~ 1TB SSTables on disk per node, I have been running a rolling > upgrade to 4.0.3. On upgraded 4.0.3 nodes I then have seen the following > exception regularly, which disappeared once all 6 nodes have been on 4.0.3. > Is this known? Can this be ignored? As said, just a test drive, but not sure > if we want to have that in production, especially with a larger number of > nodes, where it could take some time, until all are upgraded. Thanks!
> {code} > ERROR [Native-Transport-Requests-8] 2022-03-30 11:30:24,057 > ErrorMessage.java:457 - Unexpected exception during request > java.lang.IllegalArgumentException: newLimit > capacity: (290 > 15) > at java.base/java.nio.Buffer.createLimitException(Buffer.java:372) > at java.base/java.nio.Buffer.limit(Buffer.java:346) > at java.base/java.nio.ByteBuffer.limit(ByteBuffer.java:1107) > at java.base/java.nio.ByteBuffer.limit(ByteBuffer.java:262) > at > org.apache.cassandra.db.marshal.ByteBufferAccessor.slice(ByteBufferAccessor.java:107) > at > org.apache.cassandra.db.marshal.ByteBufferAccessor.slice(ByteBufferAccessor.java:39) > at > org.apache.cassandra.db.marshal.ValueAccessor.sliceWithShortLength(ValueAccessor.java:225) > at > org.apache.cassandra.db.marshal.CompositeType.splitName(CompositeType.java:222) > at > org.apache.cassandra.service.pager.PagingState$RowMark.decodeClustering(PagingState.java:434) > at > org.apache.cassandra.service.pager.PagingState$RowMark.clustering(PagingState.java:388) > at > org.apache.cassandra.service.pager.SinglePartitionPager.nextPageReadQuery(SinglePartitionPager.java:88) > at > org.apache.cassandra.service.pager.SinglePartitionPager.nextPageReadQuery(SinglePartitionPager.java:32) >
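The workaround mentioned above — pinning the driver to a newer native protocol version rather than letting it negotiate down to v3 — could look like the following for an application using the DataStax Java driver 4.x. This is a sketch, and the option path assumes that driver's `reference.conf` layout:

```
datastax-java-driver {
  # Pin the native protocol instead of negotiating it, e.g. to avoid the
  # v3 paging-state issue during a mixed-version 3.11 -> 4.0 rolling upgrade.
  advanced.protocol.version = V4
}
```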
[jira] [Comment Edited] (CASSANDRA-14361) Allow SimpleSeedProvider to resolve multiple IPs per DNS name
[ https://issues.apache.org/jira/browse/CASSANDRA-14361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656109#comment-17656109 ] Stefan Miklosovic edited comment on CASSANDRA-14361 at 1/9/23 2:03 PM: --- I incorporated the feedback of [~adelapena] in my branch here (rebased + squashed what was there and added the changes on top in one commit). I have added that parameter in the seed provider and removed it from the top-level cassandra.yaml. [~adelapena] would you mind reviewing again? PR: https://github.com/apache/cassandra/pull/2067/commits the build is running here: https://ci-cassandra.apache.org/view/patches/job/Cassandra-devbranch/2167/ EDIT: btw, I am not completely sure how to test this other than empirically. The problem I see, if I remember correctly, is that Mockito is not able to mock static methods, and we are using such a method in that logic. Ideally I would like to specify which IPs a given hostname resolves to and, based on the configuration property in the seed provider, assert their number; however, I see that mocking of static methods was not possible prior to Mockito 3.4.0 and we are on 4.7.0, so it should be possible. I'll try to write a test as well, but I think it is ready for review already. was (Author: smiklosovic): I incorporated the feedback of [~adelapena] in my branch here (rebased + squashed what was there and added the changes on top in one commit). I have added that parameter in the seed provider and removed it from the top-level cassandra.yaml. [~adelapena] would you mind reviewing again?
PR: https://github.com/apache/cassandra/pull/2067/commits the build is running here: https://ci-cassandra.apache.org/view/patches/job/Cassandra-devbranch/2167/ > Allow SimpleSeedProvider to resolve multiple IPs per DNS name > - > > Key: CASSANDRA-14361 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14361 > Project: Cassandra > Issue Type: Improvement > Components: Local/Config >Reporter: Ben Bromhead >Assignee: Stefan Miklosovic >Priority: Low > Fix For: 4.x > > Time Spent: 50m > Remaining Estimate: 0h > > Currently SimpleSeedProvider can accept a comma separated string of IPs or > hostnames as the set of Cassandra seeds. hostnames are resolved via > InetAddress.getByName, which will only return the first IP associated with an > A, or CNAME record. > By changing to InetAddress.getAllByName, existing behavior is preserved, but > now Cassandra can discover multiple IP address per record, allowing seed > discovery by DNS to be a little easier. > Some examples of improved workflows with this change include: > * specify the DNS name of a headless service in Kubernetes which will > resolve to all IP addresses of pods within that service. > * seed discovery for multi-region clusters via AWS route53, AzureDNS etc > * Other common DNS service discovery mechanisms. > The only behavior this is likely to impact would be where users are relying > on the fact that getByName only returns a single IP address. > I can't imagine any scenario where that is a sane choice. Even when that > choice has been made, it only impacts the first startup of Cassandra and > would not be on any critical path. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
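The `getByName` vs `getAllByName` difference the ticket above hinges on can be seen with a few lines of plain JDK code; the hostname here is only an example stand-in for a seed DNS name with multiple A records:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class SeedResolution {
    public static void main(String[] args) throws UnknownHostException {
        // Example hostname; a real seed would be a DNS name backed by several A records,
        // such as a Kubernetes headless service.
        String seed = "localhost";
        // Old behaviour: only the first address behind the record is visible.
        InetAddress first = InetAddress.getByName(seed);
        // Proposed behaviour: every address behind the record, so one DNS name
        // can expose all seed IPs at once.
        InetAddress[] all = InetAddress.getAllByName(seed);
        System.out.println("getByName    -> " + first.getHostAddress());
        System.out.println("getAllByName -> " + all.length + " address(es)");
    }
}
```

Per the JDK contract, `getByName(host)` is equivalent to `getAllByName(host)[0]`, which is why the change preserves existing behaviour for single-record names.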
[jira] [Updated] (CASSANDRA-14361) Allow SimpleSeedProvider to resolve multiple IPs per DNS name
[ https://issues.apache.org/jira/browse/CASSANDRA-14361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Miklosovic updated CASSANDRA-14361: -- Fix Version/s: 4.x (was: 4.0.x) > Allow SimpleSeedProvider to resolve multiple IPs per DNS name > - > > Key: CASSANDRA-14361 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14361 > Project: Cassandra > Issue Type: Improvement > Components: Local/Config >Reporter: Ben Bromhead >Assignee: Stefan Miklosovic >Priority: Low > Fix For: 4.x > > Time Spent: 50m > Remaining Estimate: 0h > > Currently SimpleSeedProvider can accept a comma separated string of IPs or > hostnames as the set of Cassandra seeds. hostnames are resolved via > InetAddress.getByName, which will only return the first IP associated with an > A, or CNAME record. > By changing to InetAddress.getAllByName, existing behavior is preserved, but > now Cassandra can discover multiple IP address per record, allowing seed > discovery by DNS to be a little easier. > Some examples of improved workflows with this change include: > * specify the DNS name of a headless service in Kubernetes which will > resolve to all IP addresses of pods within that service. > * seed discovery for multi-region clusters via AWS route53, AzureDNS etc > * Other common DNS service discovery mechanisms. > The only behavior this is likely to impact would be where users are relying > on the fact that getByName only returns a single IP address. > I can't imagine any scenario where that is a sane choice. Even when that > choice has been made, it only impacts the first startup of Cassandra and > would not be on any critical path. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14361) Allow SimpleSeedProvider to resolve multiple IPs per DNS name
[ https://issues.apache.org/jira/browse/CASSANDRA-14361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Miklosovic updated CASSANDRA-14361: -- Test and Documentation Plan: ci Status: Patch Available (was: In Progress) > Allow SimpleSeedProvider to resolve multiple IPs per DNS name > - > > Key: CASSANDRA-14361 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14361 > Project: Cassandra > Issue Type: Improvement > Components: Local/Config >Reporter: Ben Bromhead >Assignee: Stefan Miklosovic >Priority: Low > Fix For: 4.0.x > > Time Spent: 50m > Remaining Estimate: 0h > > Currently SimpleSeedProvider can accept a comma separated string of IPs or > hostnames as the set of Cassandra seeds. hostnames are resolved via > InetAddress.getByName, which will only return the first IP associated with an > A, or CNAME record. > By changing to InetAddress.getAllByName, existing behavior is preserved, but > now Cassandra can discover multiple IP address per record, allowing seed > discovery by DNS to be a little easier. > Some examples of improved workflows with this change include: > * specify the DNS name of a headless service in Kubernetes which will > resolve to all IP addresses of pods within that service. > * seed discovery for multi-region clusters via AWS route53, AzureDNS etc > * Other common DNS service discovery mechanisms. > The only behavior this is likely to impact would be where users are relying > on the fact that getByName only returns a single IP address. > I can't imagine any scenario where that is a sane choice. Even when that > choice has been made, it only impacts the first startup of Cassandra and > would not be on any critical path. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14361) Allow SimpleSeedProvider to resolve multiple IPs per DNS name
[ https://issues.apache.org/jira/browse/CASSANDRA-14361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656109#comment-17656109 ] Stefan Miklosovic commented on CASSANDRA-14361: --- I incorporated the feedback of [~adelapena] in my branch here (rebased + squashed what was there and added the changes on top in one commit). I have added that parameter in seed provider and removed it from top-level cassandra.yaml. [~adelapena] would you mind to review again? PR: https://github.com/apache/cassandra/pull/2067/commits the build is running here: https://ci-cassandra.apache.org/view/patches/job/Cassandra-devbranch/2167/ > Allow SimpleSeedProvider to resolve multiple IPs per DNS name > - > > Key: CASSANDRA-14361 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14361 > Project: Cassandra > Issue Type: Improvement > Components: Local/Config >Reporter: Ben Bromhead >Assignee: Stefan Miklosovic >Priority: Low > Fix For: 4.0.x > > Time Spent: 50m > Remaining Estimate: 0h > > Currently SimpleSeedProvider can accept a comma separated string of IPs or > hostnames as the set of Cassandra seeds. hostnames are resolved via > InetAddress.getByName, which will only return the first IP associated with an > A, or CNAME record. > By changing to InetAddress.getAllByName, existing behavior is preserved, but > now Cassandra can discover multiple IP address per record, allowing seed > discovery by DNS to be a little easier. > Some examples of improved workflows with this change include: > * specify the DNS name of a headless service in Kubernetes which will > resolve to all IP addresses of pods within that service. > * seed discovery for multi-region clusters via AWS route53, AzureDNS etc > * Other common DNS service discovery mechanisms. > The only behavior this is likely to impact would be where users are relying > on the fact that getByName only returns a single IP address. > I can't imagine any scenario where that is a sane choice. 
Even when that > choice has been made, it only impacts the first startup of Cassandra and > would not be on any critical path.
[jira] [Updated] (CASSANDRA-18126) Add to the IntelliJ Git Window issue navigation links to Cassandra's Jira
[ https://issues.apache.org/jira/browse/CASSANDRA-18126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Miklosovic updated CASSANDRA-18126: -- Fix Version/s: 3.0.x 3.11.x 4.0.x 4.1.x > Add to the IntelliJ Git Window issue navigation links to Cassandra's Jira > - > > Key: CASSANDRA-18126 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18126 > Project: Cassandra > Issue Type: Task > Components: Build >Reporter: Maxim Muzafarov >Assignee: Maxim Muzafarov >Priority: Normal > Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 4.x > > Attachments: Cassandra Apache Jira Link.png > > Time Spent: 10m > Remaining Estimate: 0h > > It is possible to navigate from the IntelliJ IDEA Git window to a > corresponding Cassandra issue in the Apache Jira, if it is mentioned in the git > message. The example in the attachments shows how _CASSANDRA-*_ letters in the > commit message are turned into an appropriate Cassandra Jira link. > We should update the IntelliJ IDEA configuration and make this behaviour a > default for the {{ant generate-idea-files}} process. > To achieve this manually you can update your {{.idea/vcs.xml}} file in the > Cassandra project with the following: > {code:java}
> <component name="IssueNavigationConfiguration">
>   <option name="links">
>     <list>
>       <IssueNavigationLink>
>         <option name="issueRegexp" value="CASSANDRA-(\d+)" />
>         <option name="linkRegexp" value="https://issues.apache.org/jira/browse/CASSANDRA-$1" />
>       </IssueNavigationLink>
>     </list>
>   </option>
> </component>
> {code}
[jira] [Commented] (CASSANDRA-18126) Add to the IntelliJ Git Window issue navigation links to Cassandra's Jira
[ https://issues.apache.org/jira/browse/CASSANDRA-18126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656107#comment-17656107 ] Stefan Miklosovic commented on CASSANDRA-18126: --- Thanks. Since this is in the "ide" dir, it does not have any impact on the Cassandra code at runtime, and we do not ship it either, so I do not find building 5 branches in CI reasonable and we will go without. If somebody insists, the build for 3.0 is here https://ci-cassandra.apache.org/view/patches/job/Cassandra-devbranch/2165 > Add to the IntelliJ Git Window issue navigation links to Cassandra's Jira > - > > Key: CASSANDRA-18126 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18126 > Project: Cassandra > Issue Type: Task > Components: Build >Reporter: Maxim Muzafarov >Assignee: Maxim Muzafarov >Priority: Normal > Fix For: 4.x > > Attachments: Cassandra Apache Jira Link.png > > Time Spent: 10m > Remaining Estimate: 0h > > It is possible to navigate from the IntelliJ IDEA Git window to a > corresponding Cassandra issue in the Apache Jira, if it is mentioned in the git > message. The example in the attachments shows how _CASSANDRA-*_ letters in the > commit message are turned into an appropriate Cassandra Jira link. > We should update the IntelliJ IDEA configuration and make this behaviour a > default for the {{ant generate-idea-files}} process. > To achieve this manually you can update your {{.idea/vcs.xml}} file in the > Cassandra project with the following: > {code:java}
> <component name="IssueNavigationConfiguration">
>   <option name="links">
>     <list>
>       <IssueNavigationLink>
>         <option name="issueRegexp" value="CASSANDRA-(\d+)" />
>         <option name="linkRegexp" value="https://issues.apache.org/jira/browse/CASSANDRA-$1" />
>       </IssueNavigationLink>
>     </list>
>   </option>
> </component>
> {code}
[jira] [Comment Edited] (CASSANDRA-18061) Add compaction type output result for nodetool compactionhistory
[ https://issues.apache.org/jira/browse/CASSANDRA-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656102#comment-17656102 ] maxwellguo edited comment on CASSANDRA-18061 at 1/9/23 1:22 PM: v4.0 and v4.1 are both OK; I have added both of them now. Upgrades from both 4.0 and 4.1 to 4.2 will be tested, and the jvm-dtest seems green now. PR: https://github.com/apache/cassandra/pull/2047/commits java8 precommit: https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/361/workflows/ff85f81b-6100-43fa-b897-75a6ee17fd5e For me, I think we do not need a compaction_history_v2, because compaction_type is a newly added column: when people upgrade from a lower version to a version whose system table has compaction_type, the column data is null and an UN_KNOW flag is returned if the added column is eventually read. I have checked the tables described in https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/SystemKeyspaceMigrator41.java It seems that all of those schemas (primary key or column name) have changed, so when upgrading from a lower version the data must be migrated: for LEGACY_PEERS/PEER_EVENTS_V2/TABLE_ESTIMATES/SSTABLE_ACTIVITY_V2 the primary key changed, and for AVAILABLE_RANGES_V2 a column name changed. But compaction_type is only a newly added column, and neither the primary key nor the clustering columns changed. The original data can still be read, and the new compaction_type column, which has no data, will return UN_KNOW (I have just tested this in my own cluster). 
Besides, for the system schema PAXOS_REPAIR_HISTORY, I saw it was added in Cassandra 4.1 and no SystemKeyspaceMigrator41 was needed; and for [CASSANDRA-10857|https://issues.apache.org/jira/browse/CASSANDRA-10857] a new column "value" was also added to the "IndexInfo" system table [code line 142 SystemKeyspace.java |https://github.com/apache/cassandra/commit/07fbd8ee6042797aaade90357d625ba9d79c31e0#diff-f57518f964c71328146aeca95be5e697ca81a77261719eeef4dd4b1ed8daf63bR142] and no SystemKeyspaceMigrator41 was needed either, patched by [~ifesdjeen]. So it seems no SystemKeyspaceMigrator41 is needed for a newly added column. Also, I think this is a little patch, and it doesn't even matter, so I am OK if we just set the status to "won't fix" :) was (Author: maxwellguo): v4.0 and v4.1 are both OK; I have added both of them now. java8 precommit: https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/361/workflows/ff85f81b-6100-43fa-b897-75a6ee17fd5e For me, I think we do not need a compaction_history_v2, because compaction_type is a newly added column: when people upgrade from a lower version to a version whose system table has compaction_type, the column data is null and an UN_KNOW flag is returned if the added column is eventually read. I have checked the tables described in https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/SystemKeyspaceMigrator41.java It seems that all of those schemas (primary key or column name) have changed, so when upgrading from a lower version the data must be migrated: for LEGACY_PEERS/PEER_EVENTS_V2/TABLE_ESTIMATES/SSTABLE_ACTIVITY_V2 the primary key changed, and for AVAILABLE_RANGES_V2 a column name changed. But compaction_type is only a newly added column, and neither the primary key nor the clustering columns changed. The original data can still be read, and the new compaction_type column, which has no data, will return UN_KNOW (I have just tested this in my own cluster). 
Besides, for the system schema PAXOS_REPAIR_HISTORY, I saw it was added in Cassandra 4.1 and no SystemKeyspaceMigrator41 was needed; and for [CASSANDRA-10857|https://issues.apache.org/jira/browse/CASSANDRA-10857] a new column "value" was also added to the "IndexInfo" system table [code line 142 SystemKeyspace.java |https://github.com/apache/cassandra/commit/07fbd8ee6042797aaade90357d625ba9d79c31e0#diff-f57518f964c71328146aeca95be5e697ca81a77261719eeef4dd4b1ed8daf63bR142] and no SystemKeyspaceMigrator41 was needed either, patched by [~ifesdjeen]. So it seems no SystemKeyspaceMigrator41 is needed for a newly added column. Also, I think this is a little patch, and it doesn't even matter, so I am OK if we just set the status to "won't fix" :) > Add compaction type output result for nodetool compactionhistory > > > Key: CASSANDRA-18061 > URL: https://issues.apache.org/jira/browse/CASSANDRA-18061 > Project: Cassandra > Issue Type: Improvement > Components: Local/Compaction, Tool/nodetool >Reporter: maxwellguo >Assignee: maxwellguo >Priority: Low > Fix For: 4.x > > Time Spent: 2h 50m > Remaining Estimate: 0h > > If we want to see whether we have made a compaction and what kind of > compaction we have done for this node, we may go to s
[jira] [Commented] (CASSANDRA-18061) Add compaction type output result for nodetool compactionhistory
[ https://issues.apache.org/jira/browse/CASSANDRA-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656102#comment-17656102 ] maxwellguo commented on CASSANDRA-18061: v40 and v41 are all OK, I added both of them now. java8 precommit: https://app.circleci.com/pipelines/github/Maxwell-Guo/cassandra/361/workflows/ff85f81b-6100-43fa-b897-75a6ee17fd5e For me, I think we do not need a compaction_history_v2 table, because compaction_type is a newly added column; when people upgrade from a lower version to a version whose system table has compaction_type, the column data is null and an UN_KNOW flag will be returned if the column is eventually used. I have checked the tables described in https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/SystemKeyspaceMigrator41.java It seems that for all of them the schema (primary key or column names) has changed, so on upgrade from a lower version the data must be migrated: for LEGACY_PEERS/PEER_EVENTS_V2/TABLE_ESTIMATES/SSTABLE_ACTIVITY_V2 the primary key changed; for AVAILABLE_RANGES_V2 a column name changed. But compaction_type is only a newly added column, and neither the primary key nor the clustering columns changed. The original data can still be read, and the new compaction_type column that has no data will return UN_KNOW (I have just tested this in my own cluster). Besides, for the system schema PAXOS_REPAIR_HISTORY, I saw it was added in Cassandra 4.1 and no SystemKeyspaceMigrator41 entry was needed, and for [CASSANDRA-10857|https://issues.apache.org/jira/browse/CASSANDRA-10857] a new column "value" was also added to the "IndexInfo" system table [code line 142 SystemKeyspace.java|https://github.com/apache/cassandra/commit/07fbd8ee6042797aaade90357d625ba9d79c31e0#diff-f57518f964c71328146aeca95be5e697ca81a77261719eeef4dd4b1ed8daf63bR142] with no SystemKeyspaceMigrator41 entry either, patched by [~ifesdjeen]. So it seems no SystemKeyspaceMigrator41 entry is needed for a newly added column.
Also, I think this is a little patch, and it doesn't even matter, so I am OK if we just set the status to "won't fix" :)

> Add compaction type output result for nodetool compactionhistory
>
>                 Key: CASSANDRA-18061
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-18061
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Local/Compaction, Tool/nodetool
>            Reporter: maxwellguo
>            Assignee: maxwellguo
>            Priority: Low
>             Fix For: 4.x
>
>          Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> If we want to see whether we have made a compaction and what kind of
> compaction we have done for this node, we may go to see the
> compaction_history system table for some details or use the nodetool
> compactionhistory command. But I found that neither the table nor the
> compactionhistory command specifies the compaction type for this node,
> such as index build, compaction, cleanup or scrub. So I think it may be
> necessary to add a column specifying the compaction type to
> system.compaction_history, so that we can get the type of compaction
> through the system.compaction_history table or nodetool compactionhistory
> and see whether we have made a major compaction on this node. :)

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
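The argument above — that a newly added column needs no _v2 table migration because pre-upgrade rows simply read as null and a sentinel is substituted — can be sketched in plain Python (illustrative only, not Cassandra code; the row shapes and the UN_KNOW sentinel name are taken from the comment, everything else is invented for the example):

```python
# Rows written by a 4.0 node have no compaction_type column; rows written
# after the upgrade do. The reader substitutes a sentinel for absent values,
# so no data migration is required.
OLD_ROWS = [  # pre-upgrade: compaction_type never written
    {"id": "a1", "keyspace_name": "ks", "columnfamily_name": "t1"},
]
NEW_ROWS = [  # post-upgrade: column populated
    {"id": "b2", "keyspace_name": "ks", "columnfamily_name": "t1",
     "compaction_type": "COMPACTION"},
]

def compaction_type(row):
    """Return the stored type, or the sentinel for pre-upgrade rows."""
    return row.get("compaction_type") or "UN_KNOW"

for row in OLD_ROWS + NEW_ROWS:
    print(row["id"], compaction_type(row))
```

The key point is that the primary key and clustering columns are untouched, so old rows remain readable; only the projection of the new column needs a default.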
[jira] [Commented] (CASSANDRA-15234) Standardise config and JVM parameters
[ https://issues.apache.org/jira/browse/CASSANDRA-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656068#comment-17656068 ] Alex Petrov commented on CASSANDRA-15234: --- [~e.dimitrova] I realise that this was committed a while ago, but I have a question: there is a small change in the output of nodetool commands that use `FileUtils#stringifyFileSize` which might break any parser that relies on the format being in KB rather than KiB etc. This constitutes a regression, doesn't it, as we've set a precedent in -CASSANDRA-17683?- cc [~dcapwell], as we've briefly talked about this.

> Standardise config and JVM parameters
>
>                 Key: CASSANDRA-15234
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-15234
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Local/Config
>            Reporter: Benedict Elliott Smith
>            Assignee: Ekaterina Dimitrova
>            Priority: Normal
>             Fix For: 4.1-alpha1, 4.1
>
>        Attachments: CASSANDRA-15234-3-DTests-JAVA8.txt
>
> We have a bunch of inconsistent names and config patterns in the codebase,
> both in the yamls and the JVM properties. It would be nice to standardise the
> naming (such as otc_ vs internode_) as well as the provision of values with
> units - while maintaining perpetual backwards compatibility with the old
> parameter names, of course.
> For temporal units, I would propose parsing strings with suffixes of:
> {code}
> u|micros(econds?)?
> ms|millis(econds?)?
> s(econds?)?
> m(inutes?)?
> h(ours?)?
> d(ays?)?
> mo(nths?)?
> {code}
> For rate units, I would propose parsing any of the standard {{B/s, KiB/s,
> MiB/s, GiB/s, TiB/s}}.
> Perhaps for avoiding ambiguity we could not accept bauds {{bs, Mbps}} or
> powers of 1000 such as {{KB/s}}, given these are regularly used for either
> their old or new definition e.g. {{KiB/s}}, or we could support them and
> simply log the value in bytes/s.
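The temporal-suffix grammar proposed in the comment above can be sketched as a small parser. This is illustrative only — it assumes 30-day months and millisecond-based conversion, and it is not the parser Cassandra actually ships (that lives in the config code):

```python
import re

# Ordered (pattern, factor-to-milliseconds) pairs mirroring the proposed
# suffix grammar. The 30-day month is an assumption made for this sketch.
SUFFIXES = [
    (r"u|micros(?:econds?)?", 0.001),
    (r"ms|millis(?:econds?)?", 1),
    (r"s(?:econds?)?", 1000),
    (r"m(?:inutes?)?", 60 * 1000),
    (r"h(?:ours?)?", 3600 * 1000),
    (r"d(?:ays?)?", 24 * 3600 * 1000),
    (r"mo(?:nths?)?", 30 * 24 * 3600 * 1000),
]

def parse_duration_ms(text: str) -> float:
    """Parse strings like '500ms', '10s', '2 minutes' into milliseconds."""
    for pattern, factor in SUFFIXES:
        m = re.fullmatch(r"(\d+)\s*(?:%s)" % pattern, text.strip())
        if m:
            return int(m.group(1)) * factor
    raise ValueError("unrecognised duration: %r" % text)
```

Because `re.fullmatch` must consume the entire suffix, `10m`, `10ms` and `10mo` unambiguously hit the minutes, milliseconds and months branches respectively, which is the ambiguity the proposal's grammar is designed to avoid.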
[jira] [Commented] (CASSANDRA-18061) Add compaction type output result for nodetool compactionhistory
[ https://issues.apache.org/jira/browse/CASSANDRA-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656053#comment-17656053 ] Brandon Williams commented on CASSANDRA-18061: -- bq. Am I correct we need to support 4.0 -> 5.0 upgrade I think that is something we would like to have, yes.
[jira] [Comment Edited] (CASSANDRA-18061) Add compaction type output result for nodetool compactionhistory
[ https://issues.apache.org/jira/browse/CASSANDRA-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656006#comment-17656006 ] Stefan Miklosovic edited comment on CASSANDRA-18061 at 1/9/23 12:05 PM: [~maxwellguo], the problem I see with that failing test I mentioned in my last reply is that we are trying to upgrade from 4.0 to 4.2 and it fails. What you did is that you changed it in such a way that you are upgrading from 4.1. I do not think this is the fix. I believe we have to be able to upgrade straight from 4.0 to 4.2 (or to 5.0, if 4.2 is eventually called that). The reason that test fails is that after the upgrade, the 4.x node tries to read / write the column called "compaction_type" in the system.compaction_history table, but schemas from 4.0 do not have that column yet. There is this class (1) which _migrates_ system schemas from an older to a newer schema (whatever that means) by creating _v2 tables and copying all data there (and optionally modifying it). I think that we should do something similar here, so we would have a table called "system.compaction_history_v2" with this new column and all interaction would be done with it. However, I am not completely sure we want to introduce a new compaction_history_v2 table into the system keyspace just to be able to see compaction types. I am summoning the heavyweights to hear their opinion on this: [~brandon.williams] [~mck]. Am I correct that we need to support a 4.0 -> 5.0 upgrade, so we need to migrate compaction_history to a new table? I am not completely sure what "to upgrade" means in this context. EDIT: I see that system.compaction_history is present in 3.0 already, so if somebody upgrades from 3.0 to 4.2, they will hit that issue. On the other hand, it does not migrate _all_ tables, so it seems to me like it counts on the system keyspace being completely wiped out prior to the upgrade, except for the tables it specifically migrates?
So in that case we should be safe to have a test which upgrades just from 4.1 up? Thanks [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/SystemKeyspaceMigrator41.java]
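The "create a _v2 table and copy the data across, optionally transforming it" pattern described in the comment above can be sketched abstractly in plain Python. The table name, row shape, and transform here are invented for illustration; the real SystemKeyspaceMigrator41 issues CQL internally and handles far more detail:

```python
def migrate(old_rows, transform):
    """Copy every row of a legacy table into the _v2 layout."""
    return [transform(row) for row in old_rows]

# Hypothetical example: a legacy table keyed by an integer "generation"
# is migrated to a _v2 table keyed by a string "id" (loosely modelled on
# the sstable_activity -> sstable_activity_v2 style of change).
legacy = [{"keyspace_name": "ks", "columnfamily_name": "t",
           "generation": 7, "rate_15m": 0.5}]

def to_v2(row):
    new = dict(row)
    new["id"] = str(new.pop("generation"))  # key column renamed and retyped
    return new

v2_rows = migrate(legacy, to_v2)
print(v2_rows)
```

This is why the migrator only lists tables whose primary key or column names changed: a purely additive column (like compaction_type) needs no such copy, since the old layout is still readable.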
[jira] [Commented] (CASSANDRA-18126) Add to the IntelliJ Git Window issue navigation links to Cassandra's Jira
[ https://issues.apache.org/jira/browse/CASSANDRA-18126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656048#comment-17656048 ] Aleksey Yeschenko commented on CASSANDRA-18126: --- No reason to not commit to 3.0+. Please go ahead, thanks Stefan.

> Add to the IntelliJ Git Window issue navigation links to Cassandra's Jira
>
>                 Key: CASSANDRA-18126
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-18126
>             Project: Cassandra
>          Issue Type: Task
>          Components: Build
>            Reporter: Maxim Muzafarov
>            Assignee: Maxim Muzafarov
>            Priority: Normal
>             Fix For: 4.x
>
>        Attachments: Cassandra Apache Jira Link.png
>
>        Time Spent: 10m
>  Remaining Estimate: 0h
>
> It is possible to navigate from the IntelliJ IDEA Git window to a
> corresponding Cassandra issue in the Apache Jira if it is mentioned in the
> git message. The example in the attachments shows how _CASSANDRA-*_ letters
> in the commit message are turned into an appropriate Cassandra Jira link.
> We should update the IntelliJ IDEA configuration and make this behaviour a
> default for the {{ant generate-idea-files}} process.
> To achieve this manually you can update your {{.idea/vcs.xml}} file in the
> Cassandra project with the following (the XML tags were mangled in the
> original mail; reconstructed from the surviving linkRegexp line and the
> standard IssueNavigationConfiguration format):
> {code:java}
> <project version="4">
>   <component name="IssueNavigationConfiguration">
>     <option name="links">
>       <list>
>         <IssueNavigationLink>
>           <option name="issueRegexp" value="CASSANDRA-(\d+)"/>
>           <option name="linkRegexp" value="https://issues.apache.org/jira/browse/CASSANDRA-$1"/>
>         </IssueNavigationLink>
>       </list>
>     </option>
>   </component>
> </project>
> {code}
[jira] [Commented] (CASSANDRA-17964) Some tests are never executed due to naming violation - fix it and add checkstyle where applicable
[ https://issues.apache.org/jira/browse/CASSANDRA-17964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656036#comment-17656036 ] Stefan Miklosovic commented on CASSANDRA-17964: --- I am transitioning this back to "patch available" as it has been quite a while and I need to double-check all is fine after the recent 4.1 release etc ... I guess additional reviews would not hurt either.

> Some tests are never executed due to naming violation - fix it and add
> checkstyle where applicable
>
>                 Key: CASSANDRA-17964
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-17964
>             Project: Cassandra
>          Issue Type: Task
>          Components: Test/unit
>            Reporter: Ruslan Fomkin
>            Assignee: Stefan Miklosovic
>            Priority: Normal
>             Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 4.x
>
>        Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> [BatchTests|https://github.com/apache/cassandra/blob/trunk/test/unit/org/apache/cassandra/cql3/BatchTests.java]
> doesn't follow the naming convention to be run as unit tests and, thus, is
> never run. The rule in the build expects names like `*Test`.
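The naming-violation failure mode described in this ticket can be shown with a tiny filter. This is a simplification — the real selection happens in Cassandra's Ant build filesets, not in Python — but the effect of a `*Test` pattern is the same:

```python
import fnmatch

# A build that selects test classes by the pattern "*Test.java" silently
# skips a file named BatchTests.java: "Tests" does not end in "Test".
def selected(files, pattern="*Test.java"):
    return [f for f in files if fnmatch.fnmatch(f, pattern)]

candidates = ["BatchTests.java", "BatchTest.java", "CompactionsTest.java"]
print(selected(candidates))
```

A checkstyle rule (as the ticket proposes) catches the mismatch at build time rather than letting the test silently never run.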
[jira] [Comment Edited] (CASSANDRA-17964) Some tests are never executed due to naming violation - fix it and add checkstyle where applicable
[ https://issues.apache.org/jira/browse/CASSANDRA-17964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656036#comment-17656036 ] Stefan Miklosovic edited comment on CASSANDRA-17964 at 1/9/23 11:18 AM: I am transitioning this back to "patch available" as it was quite a while and I need to double check all is fine after recent 4.1 release etc ... I guess additional reviews would not hurt either.
[jira] [Updated] (CASSANDRA-17964) Some tests are never executed due to naming violation - fix it and add checkstyle where applicable
[ https://issues.apache.org/jira/browse/CASSANDRA-17964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Miklosovic updated CASSANDRA-17964: -- Status: Patch Available (was: Ready to Commit)
[jira] [Commented] (CASSANDRA-18032) When generate.sh fails its rc=0
[ https://issues.apache.org/jira/browse/CASSANDRA-18032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656034#comment-17656034 ] Andres de la Peña commented on CASSANDRA-18032: --- Looks good to me, +1

> When generate.sh fails its rc=0
>
>                 Key: CASSANDRA-18032
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-18032
>             Project: Cassandra
>          Issue Type: Bug
>          Components: CI
>            Reporter: David Capwell
>            Assignee: Berenguer Blasi
>            Priority: Normal
>             Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x
>
> {code}
> $ ./generate.sh -a
> Generating new config.yml file with low resources and LOWRES/MIDRES/HIGHRES templates from config-2_1.yml
> ./generate.sh: line 171: circleci: command not found
> patching file ./config-2_1.yml
> Hunk #4 succeeded at 1511 (offset 9 lines).
> Hunk #5 succeeded at 1525 (offset 9 lines).
> Hunk #6 succeeded at 1540 (offset 9 lines).
> Hunk #7 succeeded at 1554 (offset 9 lines).
> Hunk #8 succeeded at 1569 (offset 9 lines).
> Hunk #9 succeeded at 1583 (offset 9 lines).
> Hunk #10 succeeded at 1598 (offset 9 lines).
> Hunk #11 succeeded at 1616 (offset 9 lines).
> Hunk #12 succeeded at 1631 (offset 9 lines).
> Hunk #13 succeeded at 1649 (offset 9 lines).
> Hunk #14 succeeded at 1664 (offset 9 lines).
> Hunk #15 succeeded at 1682 (offset 9 lines).
> Hunk #16 succeeded at 1697 (offset 9 lines).
> ./generate.sh: line 177: circleci: command not found
> patching file ./config-2_1.yml
> ./generate.sh: line 183: circleci: command not found
> {code}
[jira] [Updated] (CASSANDRA-18032) When generate.sh fails its rc=0
[ https://issues.apache.org/jira/browse/CASSANDRA-18032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andres de la Peña updated CASSANDRA-18032: -- Reviewers: Andres de la Peña
[jira] [Commented] (CASSANDRA-12525) When adding new nodes to a cluster which has authentication enabled, we end up losing cassandra user's current crendentials and they get reverted back to default c
[ https://issues.apache.org/jira/browse/CASSANDRA-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656032#comment-17656032 ] Stefan Miklosovic commented on CASSANDRA-12525: --- Hi [~xgerman42], thanks for being so persistent! I have checked your latest changes and the test is not doing what I mentioned in my last comment. My suggestion was:
* start the first node (done)
* change the password (not done)
* partition the network (done)
* start the second node (done)
* check that it created the default role (not done)
* "unpartition" the network (not done)
* repair the second node (not done)
* you should be able to connect to the second node with the changed password (not done)

Doing CQL against a node can be done through the Cassandra Java driver (logging in, changing the password ...). Repairing the node can be done via nodetool. There is a "nodetool" method on the IInstance you get from calling cluster.get, like "cluster.get(2).nodetool("repair")". You get the idea. Do you plan to finish this, or do you have any other idea of how to test this differently? I humbly think the approach I outlined is the most comprehensive in order to mimic the real-world usage here. Thanks

> When adding new nodes to a cluster which has authentication enabled, we end
> up losing cassandra user's current credentials and they get reverted back to
> default cassandra/cassandra credentials
>
>                 Key: CASSANDRA-12525
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12525
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Cluster/Schema, Local/Config
>            Reporter: Atin Sood
>            Assignee: German Eichberger
>            Priority: Normal
>             Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 4.x
>
>        Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Made the following observation:
> When adding new nodes to an existing C* cluster with authentication enabled
> we end up losing password information about the `cassandra` user.
> Initial Setup
> - Create a 5 node cluster with system_auth having RF=5 and NetworkTopologyStrategy
> - Enable PasswordAuthenticator on this cluster and update the password for the 'cassandra' user to, say, 'password' via an alter query
> - Make sure you run nodetool repair on all the nodes
> Test case
> - Now go ahead and add 5 more nodes to this cluster.
> - Run nodetool repair on all 10 nodes now
> - Decommission the original 5 nodes such that only the new 5 nodes are in the cluster now
> - Run cqlsh and try to connect to this cluster using the old user name and password, cassandra/password
> I was unable to connect to the nodes with the original credentials and was
> only able to connect using the default cassandra/cassandra credentials
> From the conversation over IRC:
> `beobal: sood: that definitely shouldn't happen. The new nodes should only
> create the default superuser role if there are 0 roles currently defined
> (including that default one)`
[jira] [Comment Edited] (CASSANDRA-14013) Data loss in snapshots keyspace after service restart
[ https://issues.apache.org/jira/browse/CASSANDRA-14013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656028#comment-17656028 ] Stefan Miklosovic edited comment on CASSANDRA-14013 at 1/9/23 10:47 AM: The problem is that when we are trying to get a descriptor for a legacy sstable, the test is going to find the first file in the dir and it might happen that it will return a ".txt" file. But Descriptor.LEGACY_SSTABLE_DIR_PATTERN ends in ".db". We should just do this [https://github.com/pauloricardomg/cassandra/pull/2] I am running the build for trunk with that PR included here [https://ci-cassandra.apache.org/view/patches/job/Cassandra-devbranch/2166/]

> Data loss in snapshots keyspace after service restart
>
>                 Key: CASSANDRA-14013
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14013
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Legacy/Core, Local/Snapshots
>            Reporter: Gregor Uhlenheuer
>            Assignee: Stefan Miklosovic
>            Priority: Normal
>             Fix For: 4.0.x, 4.1.x, 4.x
>
>        Time Spent: 10m
>  Remaining Estimate: 0h
>
> I am posting this bug in hope to discover the stupid mistake I am doing
> because I can't imagine a reasonable answer for the behavior I see right now :-)
> In short words, I do observe data loss in a keyspace called *snapshots* after
> restarting the Cassandra service. Say I have 1000 records in a table called
> *snapshots.test_idx*, then after a restart the table has fewer entries or is
> even empty.
> My kind of "mysterious" observation is that it happens only in a keyspace
> called *snapshots*...
> h3. Steps to reproduce
> These steps reproduce the described behavior in "most" attempts (not every
> single time though).
> {code}
> # create keyspace
> CREATE KEYSPACE snapshots WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
> # create table
> CREATE TABLE snapshots.test_idx (key text, seqno bigint, primary key(key));
> # insert some test data
> INSERT INTO snapshots.test_idx (key,seqno) values ('key1', 1);
> ...
> INSERT INTO snapshots.test_idx (key,seqno) values ('key1000', 1000);
> # count entries
> SELECT count(*) FROM snapshots.test_idx;
> 1000
> # restart service
> kill
> cassandra -f
> # count entries
> SELECT count(*) FROM snapshots.test_idx;
> 0
> {code}
> I hope someone can point me to the obvious mistake I am doing :-)
> This happened to me using both Cassandra 3.9 and 3.11.0
[jira] [Commented] (CASSANDRA-14013) Data loss in snapshots keyspace after service restart
[ https://issues.apache.org/jira/browse/CASSANDRA-14013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656028#comment-17656028 ] Stefan Miklosovic commented on CASSANDRA-14013: --- The problem is that when we are trying to get a descriptor for a legacy sstable, the test is going to find the first file in the dir and it might happen that it will return a ".txt" file. But Descriptor.LEGACY_SSTABLE_DIR_PATTERN ends in ".db". We should just do this [https://github.com/pauloricardomg/cassandra/pull/2] I am running the build for trunk with that PR included here [https://ci-cassandra.apache.org/view/patches/job/Cassandra-devbranch/2166/]
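The fix described in the comment above — only hand files to the descriptor parser if they can actually match the legacy sstable pattern, instead of blindly taking the first directory entry — can be sketched like this. The regex here is a simplified stand-in for Descriptor.LEGACY_SSTABLE_DIR_PATTERN, and the file names are invented for the example:

```python
import re

# Stand-in for the legacy sstable filename pattern: sstable component files
# end in ".db", while a directory may also hold e.g. a manifest ".txt" file
# that must not be parsed as a descriptor.
LEGACY_SSTABLE_PATTERN = re.compile(r".+\.db$")

def first_sstable_file(files):
    """Return the first file a descriptor could be parsed from,
    skipping non-sstable files."""
    for name in sorted(files):
        if LEGACY_SSTABLE_PATTERN.match(name):
            return name
    return None

listing = ["manifest.txt", "ks-tbl-ka-1-Data.db"]
print(first_sstable_file(listing))
```

The original bug was precisely the unfiltered version of this loop: the first directory entry happened to be the ".txt" file and the pattern match then failed.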
[jira] [Commented] (CASSANDRA-18032) When generate.sh fails its rc=0
[ https://issues.apache.org/jira/browse/CASSANDRA-18032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656025#comment-17656025 ] Berenguer Blasi commented on CASSANDRA-18032: - I changed the PR to stick to a simple `set -e` to address [~adelapena]'s comments. If it looks OK and somebody can +1, then I'll push the other PRs.
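The `set -e` fix discussed here makes a script abort with a non-zero exit code at the first failing command, instead of carrying on past a `circleci: command not found` and exiting 0 — which is exactly the generate.sh symptom this ticket reports. A small Python check of that shell behaviour (assumes a POSIX `sh` is on PATH; the failing command name is made up):

```python
import subprocess

def run(script):
    """Run a shell snippet and return its exit code."""
    return subprocess.run(["sh", "-c", script]).returncode

# Without set -e the failure is ignored and the script's exit code is that
# of the final (successful) command: 0.
without = run("no_such_command_xyz 2>/dev/null; echo done >/dev/null")

# With set -e the script stops at the failing command and exits non-zero.
with_e = run("set -e; no_such_command_xyz 2>/dev/null; echo done >/dev/null")

print(without, with_e)
```

This is why a bare `set -e` at the top of generate.sh is enough to make CI notice the missing `circleci` binary.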
[jira] [Comment Edited] (CASSANDRA-18061) Add compaction type output result for nodetool compactionhistory
[ https://issues.apache.org/jira/browse/CASSANDRA-18061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656006#comment-17656006 ]

Stefan Miklosovic edited comment on CASSANDRA-18061 at 1/9/23 9:44 AM:
-----------------------------------------------------------------------

[~maxwellguo], the problem I see with the failing test I mentioned in my last reply is that we are trying to upgrade from 4.0 to 4.2 and it fails. What you did is change the test so that it upgrades from 4.1 instead. I do not think that is the fix: I believe we have to be able to upgrade straight from 4.0 to 4.2 (or to 5.0, if 4.2 ends up being called that).

The test fails because, after the upgrade, the 4.x node tries to read from / write to the "compaction_type" column of system.compaction_history, but schemas from 4.0 do not have that column yet. There is a class (1) which _migrates_ system schemas from older to newer versions by creating _v2 tables and copying all data over (optionally modifying it). I think we should do something similar here, so we would have a table called "system.compaction_history_v2" with the new column, and all interaction would go through it.

However, I am not completely sure we want to introduce a new compaction_history_v2 table into the system keyspace just to be able to see compaction types. I am summoning the heavyweights for their opinion: [~brandon.williams] [~mck]. Am I correct that we need to support a 4.0 -> 5.0 upgrade, and therefore need to migrate compaction_history to a new table? I am not completely sure what "to upgrade" means in this context.

Thanks

(1) [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/SystemKeyspaceMigrator41.java]

was (Author: smiklosovic):
[~maxwellguo], the problem I see with the failing test I mentioned in my last reply is that we are trying to upgrade from 4.0 to 4.2 and it fails. What you did is change the test so that it upgrades from 4.1 instead. I do not think that is the fix: I believe we have to be able to upgrade straight from 4.0 to 4.2 (or to 5.0, if 4.2 ends up being called that).

The test fails because, after the upgrade, the 4.x node tries to read from / write to the "compaction_type" column of system.compaction_history, but schemas from 4.0 do not have that column yet. There is a class (1) which _migrates_ system schemas from older to newer versions by creating _v2 tables and copying all data over (optionally modifying it). I think we should do something similar here, so we would have a table called "system.compaction_history_v2" with the new column, and all interaction would go through it.

However, I am not completely sure we want to introduce a new compaction_history_v2 table into the system keyspace just to be able to see compaction types. I am summoning the heavyweights for their opinion: [~brandon.williams] [~mck]. Am I correct that we need to support a 4.0 -> 5.0 upgrade, and therefore need to migrate compaction_history to a new table?

Thanks

(1) [https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/SystemKeyspaceMigrator41.java]

> Add compaction type output result for nodetool compactionhistory
> ----------------------------------------------------------------
>
>                Key: CASSANDRA-18061
>                URL: https://issues.apache.org/jira/browse/CASSANDRA-18061
>            Project: Cassandra
>         Issue Type: Improvement
>         Components: Local/Compaction, Tool/nodetool
>           Reporter: maxwellguo
>           Assignee: maxwellguo
>           Priority: Low
>            Fix For: 4.x
>
>         Time Spent: 2h 50m
> Remaining Estimate: 0h
>
> If we want to see whether a compaction has run on this node, and what kind of compaction it was, we can look at the compaction_history system table for details, or use the nodetool compactionhistory command. But I found that the table does not record the compaction type (such as index build, compaction, cleanup or scrub), and neither does the compactionhistory command. So I think it may be necessary to add a compaction type column to system.compaction_history, so that we can tell through that table or through nodetool compactionhistory whether, for example, a major compaction was run on this node. :)
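The _v2 migration pattern referenced in the comment boils down to creating a replacement table that adds the new column, then copying existing rows across at startup. As a rough sketch only: the schema below is assumed from the general shape of `system.compaction_history` in 4.x, and `compaction_history_v2` / `compaction_type` are names floated in the discussion, not a committed design:

```sql
-- Hypothetical replacement table: the assumed columns of
-- system.compaction_history, plus the new compaction_type column
-- that motivates the _v2 rename.
CREATE TABLE system.compaction_history_v2 (
    id uuid PRIMARY KEY,
    keyspace_name text,
    columnfamily_name text,
    compacted_at timestamp,
    bytes_in bigint,
    bytes_out bigint,
    rows_merged map<int, bigint>,
    compaction_type text
);
```

A startup migrator in the spirit of SystemKeyspaceMigrator41 would then copy every old row into the new table. Rows written before the upgrade carry no recorded type, so any such migrator would have to tolerate a null `compaction_type` (or backfill a sentinel value such as 'UNKNOWN').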