[jira] [Commented] (CASSANDRA-15252) Don't consider current keyspace in prepared statement id when the query is qualified

2021-12-08 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17456205#comment-17456205
 ] 

Berenguer Blasi commented on CASSANDRA-15252:
-

[~e.dimitrova] and I did more bisecting 
[here|https://issues.apache.org/jira/browse/CASSANDRA-17140?focusedCommentId=17456203&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17456203]
 and imo it still looks like CASSANDRA-15252 is the best candidate so far.

> Don't consider current keyspace in prepared statement id when the query is 
> qualified
> 
>
> Key: CASSANDRA-15252
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15252
> Project: Cassandra
>  Issue Type: Bug
>  Components: Messaging/Client
>Reporter: Olivier Michallat
>Assignee: Alex Petrov
>Priority: Normal
> Fix For: 3.0.26, 3.11.12, 4.0.2, 4.1
>
>
> {{QueryProcessor.computeId}} takes into account the session's current 
> keyspace in the MD5 digest.
> {code}
> String toHash = keyspace == null ? queryString : keyspace + queryString;
> {code}
> This is desirable for unqualified queries, because switching to a different 
> keyspace produces a different statement. However, for a qualified query, the 
> current keyspace makes no difference, the prepared id should always be the 
> same.
> This can lead to an infinite reprepare loop on the client. Consider this 
> example (Java driver 3.x):
> {code}
> Cluster cluster = null;
> try {
>   cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
>   Session session = cluster.connect();
>   session.execute(
>       "CREATE KEYSPACE IF NOT EXISTS test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");
>   session.execute("CREATE TABLE IF NOT EXISTS test.foo(k int PRIMARY KEY)");
>   PreparedStatement pst = session.prepare("SELECT * FROM test.foo WHERE k=?");
>   // Drop and recreate the table to invalidate the prepared statement server-side
>   session.execute("DROP TABLE test.foo");
>   session.execute("CREATE TABLE test.foo(k int PRIMARY KEY)");
>   session.execute("USE test");
>   // This will try to reprepare on the fly
>   session.execute(pst.bind(0));
> } finally {
>   if (cluster != null) cluster.close();
> }
> {code}
> When the driver goes to execute the bound statement (last line before the 
> finally block), it will get an UNPREPARED response because the statement was 
> evicted from the server cache (as a result of dropping the table earlier).
> In those cases, the driver recovers transparently by sending another PREPARE 
> message and retrying the bound statement.
> However, that second PREPARE cached the statement under a different id, 
> because we switched to another keyspace. Yet the driver is still using the 
> original id (stored in {{pst}}) when it retries, so it will get UNPREPARED 
> again, etc.
> I would consider this low priority because issuing a {{USE}} statement after 
> having prepared statements is a bad idea to begin with. And even if we fix 
> the generated id for qualified query strings, the issue will remain for 
> unqualified ones.
> We'll add a check in the driver to fail fast and avoid the infinite loop if 
> the id returned by the second PREPARE doesn't match the original one. That 
> might be enough to cover this issue.
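The keyspace-sensitive hashing described in the report can be sketched in a few lines. This is a minimal Python model of the quoted Java logic, not Cassandra's actual code; the function name is illustrative:

```python
import hashlib

def compute_id(query_string, keyspace):
    # Mirrors: String toHash = keyspace == null ? queryString : keyspace + queryString;
    to_hash = query_string if keyspace is None else keyspace + query_string
    return hashlib.md5(to_hash.encode("utf-8")).hexdigest()

qualified = "SELECT * FROM test.foo WHERE k=?"
id_before_use = compute_id(qualified, None)    # prepared before "USE test"
id_after_use = compute_id(qualified, "test")   # re-prepared after "USE test"
print(id_before_use != id_after_use)  # True: same qualified query, two different ids
```

Because the two ids differ, the driver's retry with the original id keeps getting UNPREPARED, which is the loop the report describes.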



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17140) Broken test_rolling_upgrade - upgrade_tests.upgrade_through_versions_test.TestUpgrade_indev_3_0_x_To_indev_4_0_x

2021-12-08 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17456203#comment-17456203
 ] 

Berenguer Blasi commented on CASSANDRA-17140:
-

||Ticket||SHA||Result||command||
|CASSANDRA-17014|0c4653110d80f411d2c50445d48478967bcaa095|Pass|pytest -vv 
--log-cli-level=DEBUG --junit-xml=nosetests.xml --junit-prefix=dtest-upgrade -s 
--cassandra-dir=/tmp/cberengtrunk --execute-upgrade-tests-only 
--upgrade-target-version-only --upgrade-version-selection all 
upgrade_tests/upgrade_through_versions_test.py::TestUpgrade_current_2_2_x_To_indev_3_0_x::test_rolling_upgrade_with_internode_ssl|
|CASSANDRA-16959|4f8afe85bfb2633d98beed39e665463bf19b8789|Pass|same|
|CASSANDRA-14612|225a4c8faf7a2a67a1a8a360bc4efb70b36f6ae7|Pass|same|
|CASSANDRA-15252|13632e9a99e8256a565bd6919d2d11b3e476e973|Fail|same|

We're back to CASSANDRA-15252 imo

> Broken test_rolling_upgrade - 
> upgrade_tests.upgrade_through_versions_test.TestUpgrade_indev_3_0_x_To_indev_4_0_x
> 
>
> Key: CASSANDRA-17140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17140
> Project: Cassandra
>  Issue Type: Bug
>  Components: CI
>Reporter: Yifan Cai
>Priority: Normal
> Fix For: 4.0.x
>
>
> The "test_rolling_upgrade" tests fail with the error below. 
>  
> [https://app.circleci.com/pipelines/github/yifan-c/cassandra/279/workflows/6340cd42-0b27-42c2-8418-9f8b56c57bea/jobs/1990]
>  
> I am able to always reproduce it by running the test locally too. 
> {{$ pytest --execute-upgrade-tests-only --upgrade-target-version-only 
> --upgrade-version-selection all --cassandra-version=4.0 
> upgrade_tests/upgrade_through_versions_test.py::TestUpgrade_indev_3_11_x_To_indev_4_0_x::test_rolling_upgrade}}
>  
> {code:java}
> self = 
>   object at 0x7ffba4242fd0>
> subprocs = [, 
> ]
> def _check_on_subprocs(self, subprocs):
>     """
>     Check on given subprocesses.
> 
>     If any are not alive, we'll go ahead and terminate any remaining
>     alive subprocesses since this test is going to fail.
>     """
>     subproc_statuses = [s.is_alive() for s in subprocs]
>     if not all(subproc_statuses):
>         message = "A subprocess has terminated early. Subprocess statuses: "
>         for s in subprocs:
>             message += "{name} (is_alive: {aliveness}), ".format(name=s.name, aliveness=s.is_alive())
>         message += "attempting to terminate remaining subprocesses now."
>         self._terminate_subprocs()
> >       raise RuntimeError(message)
> E   RuntimeError: A subprocess has terminated early. Subprocess statuses: Process-1 (is_alive: True), Process-2 (is_alive: False), attempting to terminate remaining subprocesses now.{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17136) FQL: Enabling via nodetool can trigger disk_failure_mode

2021-12-08 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17456174#comment-17456174
 ] 

Berenguer Blasi commented on CASSANDRA-17136:
-

lgtm. I would add a comment in the test explaining why the file and 
permissions change; otherwise it's difficult to grasp without the context of 
the ticket. This is only the 4.0 CI run, so +1 conditioned on a successful 
trunk CI.

> FQL: Enabling via nodetool can trigger disk_failure_mode
> 
>
> Key: CASSANDRA-17136
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17136
> Project: Cassandra
>  Issue Type: Bug
>  Components: Tool/fql
>Reporter: Brendan Cicchi
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.0.x
>
>
> When enabling fullquerylog via nodetool, if there is a non-empty directory 
> present under the location specified via --path which would trigger a 
> java.nio.file.AccessDeniedException during cleaning, the node will trigger 
> the disk_failure_policy, which by default is stop. This is a fairly easy way 
> to take a cluster offline if someone executes this in parallel. I don't think 
> that behavior is desirable for enabling via nodetool.
>  
> Repro (1 node cluster already up):
> {code:bash}
> mkdir /some/path/dir
> touch /some/path/dir/file
> chown -R user: /some/path/dir # Non Cassandra process user
> chmod 700 /some/path/dir
> nodetool enablefullquerylog --path /some/path
> {code}
> Nodetool will give back this error:
> {code:java}
> error: /some/path/dir/file
> -- StackTrace --
> java.nio.file.AccessDeniedException: /some/path/dir/file
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)
>   at 
> sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
>   at java.nio.file.Files.delete(Files.java:1126)
>   at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:250)
>   at 
> org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:237)
>   at 
> org.apache.cassandra.utils.binlog.BinLog.deleteRecursively(BinLog.java:492)
>   at 
> org.apache.cassandra.utils.binlog.BinLog.cleanDirectory(BinLog.java:477)
>   at 
> org.apache.cassandra.utils.binlog.BinLog$Builder.build(BinLog.java:436)
>   at 
> org.apache.cassandra.fql.FullQueryLogger.enable(FullQueryLogger.java:106)
>   at 
> org.apache.cassandra.service.StorageService.enableFullQueryLogger(StorageService.java:5915)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:72)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:276)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
>   at 
> com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1401)
>   at 
> javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> 

[jira] [Commented] (CASSANDRA-17171) Flaky CompactionsBytemanTest

2021-12-08 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17456166#comment-17456166
 ] 

Berenguer Blasi commented on CASSANDRA-17171:
-

Thx for the work!

> Flaky CompactionsBytemanTest
> 
>
> Key: CASSANDRA-17171
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17171
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Berenguer Blasi
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.0.2, 4.1
>
>
> See 
> [here|https://ci-cassandra.apache.org/job/Cassandra-trunk/868/testReport/junit/org.apache.cassandra.db.compaction/CompactionsBytemanTest/testCompactingCFCounting/]
> {noformat}
> junit.framework.AssertionFailedError: expected:<0> but was:<1>
>   at 
> org.apache.cassandra.db.compaction.CompactionsBytemanTest.testCompactingCFCounting(CompactionsBytemanTest.java:130)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$10.evaluate(BMUnitRunner.java:393)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$6.evaluate(BMUnitRunner.java:263)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$1.evaluate(BMUnitRunner.java:97)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17171) Flaky CompactionsBytemanTest

2021-12-08 Thread Berenguer Blasi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Berenguer Blasi updated CASSANDRA-17171:

  Fix Version/s: 4.0.2
 4.1
 (was: 4.x)
 (was: 4.0.x)
  Since Version: 4.0.2
Source Control Link: 
https://github.com/apache/cassandra/commit/8cef32ae8376d23828a20b861161bd0d3845456f
 Resolution: Fixed
 Status: Resolved  (was: Ready to Commit)

> Flaky CompactionsBytemanTest
> 
>
> Key: CASSANDRA-17171
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17171
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Berenguer Blasi
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.0.2, 4.1
>
>
> See 
> [here|https://ci-cassandra.apache.org/job/Cassandra-trunk/868/testReport/junit/org.apache.cassandra.db.compaction/CompactionsBytemanTest/testCompactingCFCounting/]
> {noformat}
> junit.framework.AssertionFailedError: expected:<0> but was:<1>
>   at 
> org.apache.cassandra.db.compaction.CompactionsBytemanTest.testCompactingCFCounting(CompactionsBytemanTest.java:130)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$10.evaluate(BMUnitRunner.java:393)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$6.evaluate(BMUnitRunner.java:263)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$1.evaluate(BMUnitRunner.java:97)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] branch cassandra-4.0 updated: Flaky CompactionsBytemanTest

2021-12-08 Thread bereng
This is an automated email from the ASF dual-hosted git repository.

bereng pushed a commit to branch cassandra-4.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cassandra-4.0 by this push:
 new 8cef32a  Flaky CompactionsBytemanTest
8cef32a is described below

commit 8cef32ae8376d23828a20b861161bd0d3845456f
Author: Bereng 
AuthorDate: Fri Dec 3 14:25:10 2021 +0100

Flaky CompactionsBytemanTest

patch by Berenguer Blasi; reviewed by Andres de la Peña for CASSANDRA-17171
---
 .../org/apache/cassandra/db/compaction/CompactionsBytemanTest.java  | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git 
a/test/unit/org/apache/cassandra/db/compaction/CompactionsBytemanTest.java 
b/test/unit/org/apache/cassandra/db/compaction/CompactionsBytemanTest.java
index 95069f1..1c02699 100644
--- a/test/unit/org/apache/cassandra/db/compaction/CompactionsBytemanTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionsBytemanTest.java
@@ -27,6 +27,7 @@ import java.util.stream.Collectors;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 
+import org.apache.cassandra.Util;
 import org.apache.cassandra.cql3.CQLTester;
 import org.apache.cassandra.db.ColumnFamilyStore;
 import org.apache.cassandra.db.Keyspace;
@@ -119,7 +120,7 @@ public class CompactionsBytemanTest extends CQLTester
 targetMethod = "submitBackground",
 targetLocation = "AT INVOKE 
java.util.concurrent.Future.isCancelled",
 condition = "!$cfs.keyspace.getName().contains(\"system\")",
-action = "Thread.sleep(1000)")
+action = "Thread.sleep(5000)")
 public void testCompactingCFCounting() throws Throwable
 {
 createTable("CREATE TABLE %s (k INT, c INT, v INT, PRIMARY KEY (k, 
c))");
@@ -127,9 +128,10 @@ public class CompactionsBytemanTest extends CQLTester
 cfs.enableAutoCompaction();
 
 execute("INSERT INTO %s (k, c, v) VALUES (?, ?, ?)", 0, 1, 1);
-assertEquals(0, CompactionManager.instance.compactingCF.count(cfs));
+Util.spinAssertEquals(true, () -> 
CompactionManager.instance.compactingCF.count(cfs) == 0, 5);
 cfs.forceBlockingFlush();
 
+Util.spinAssertEquals(true, () -> 
CompactionManager.instance.compactingCF.count(cfs) == 0, 5);
 
FBUtilities.waitOnFutures(CompactionManager.instance.submitBackground(cfs));
 assertEquals(0, CompactionManager.instance.compactingCF.count(cfs));
 }
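The patch above replaces a one-shot assertEquals with a polling Util.spinAssertEquals. The idea behind such a spin-assert helper can be sketched as follows; this is an illustrative Python model with an assumed signature, not Cassandra's actual Util:

```python
import time

def spin_assert_equals(expected, supplier, timeout_seconds):
    # Poll the supplier until it returns the expected value or the timeout
    # elapses; assert one final time so a real failure is still reported.
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if supplier() == expected:
            return
        time.sleep(0.05)
    assert supplier() == expected

# A condition that only becomes true on the third poll, like a compaction
# counter that takes a moment to drop back to zero:
calls = {"n": 0}
def eventually_zero():
    calls["n"] += 1
    return 0 if calls["n"] >= 3 else 1

spin_assert_equals(0, eventually_zero, 5)  # passes once the value settles
```

Polling tolerates scheduling jitter that a single immediate assertion does not, which is why it de-flakes the test.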

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] 01/01: Merge branch 'cassandra-4.0' into trunk

2021-12-08 Thread bereng
This is an automated email from the ASF dual-hosted git repository.

bereng pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git

commit 507c6f76072fea09d62dd941c929945b2544bba9
Merge: d9460a0 8cef32a
Author: Bereng 
AuthorDate: Thu Dec 9 07:43:26 2021 +0100

Merge branch 'cassandra-4.0' into trunk

 .../apache/cassandra/db/compaction/CompactionsBytemanTest.java| 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --cc 
test/unit/org/apache/cassandra/db/compaction/CompactionsBytemanTest.java
index 95069f1,1c02699..baa8206
--- a/test/unit/org/apache/cassandra/db/compaction/CompactionsBytemanTest.java
+++ b/test/unit/org/apache/cassandra/db/compaction/CompactionsBytemanTest.java
@@@ -18,9 -18,9 +18,9 @@@
  
  package org.apache.cassandra.db.compaction;
  
--import java.util.concurrent.TimeUnit;
  import java.util.Collection;
  import java.util.Collections;
++import java.util.concurrent.TimeUnit;
  import java.util.function.Consumer;
  import java.util.stream.Collectors;
  

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[cassandra] branch trunk updated (d9460a0 -> 507c6f7)

2021-12-08 Thread bereng
This is an automated email from the ASF dual-hosted git repository.

bereng pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/cassandra.git.


from d9460a0  Add non-blocking mode for CDC writes
 new 8cef32a  Flaky CompactionsBytemanTest
 new 507c6f7  Merge branch 'cassandra-4.0' into trunk

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../apache/cassandra/db/compaction/CompactionsBytemanTest.java| 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-17171) Flaky CompactionsBytemanTest

2021-12-08 Thread Berenguer Blasi (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17453068#comment-17453068
 ] 

Berenguer Blasi edited comment on CASSANDRA-17171 at 12/9/21, 6:34 AM:
---

Ok now we have 5K repeats on both trunk and 4.0. I think we're good to merge. 
Wdyt?


was (Author: bereng):
Ok now we have 5K repeats on both trunk and 4.0. I thunk we're good to merge. 
Wdyt?

> Flaky CompactionsBytemanTest
> 
>
> Key: CASSANDRA-17171
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17171
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Berenguer Blasi
>Assignee: Berenguer Blasi
>Priority: Normal
> Fix For: 4.0.x, 4.x
>
>
> See 
> [here|https://ci-cassandra.apache.org/job/Cassandra-trunk/868/testReport/junit/org.apache.cassandra.db.compaction/CompactionsBytemanTest/testCompactingCFCounting/]
> {noformat}
> junit.framework.AssertionFailedError: expected:<0> but was:<1>
>   at 
> org.apache.cassandra.db.compaction.CompactionsBytemanTest.testCompactingCFCounting(CompactionsBytemanTest.java:130)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$10.evaluate(BMUnitRunner.java:393)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$6.evaluate(BMUnitRunner.java:263)
>   at 
> org.jboss.byteman.contrib.bmunit.BMUnitRunner$1.evaluate(BMUnitRunner.java:97)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-14898) Key cache loading is very slow when there are many SSTables

2021-12-08 Thread Joey Lynch (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17456145#comment-17456145
 ] 

Joey Lynch commented on CASSANDRA-14898:


Cherry picked the trunk branch back to 4.0 and applied a fixup or two to get 
[ae5954b1|https://github.com/jolynch/cassandra/commit/ae5954b1c513337484c43b575f09a0229464a33e].
I've kicked off circleci precommit runs for 3.0, 3.11, 4.0 and trunk. If 
those come back green I will commit.

> Key cache loading is very slow when there are many SSTables
> ---
>
> Key: CASSANDRA-14898
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14898
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
> Environment: AWS i3.2xlarge, 4 physical cores (8 threads), 60GB of 
> RAM, loading about 8MB of KeyCache with 10k keys in it.
>Reporter: Joey Lynch
>Assignee: Venkata Harikrishna Nukala
>Priority: Low
>  Labels: Performance, low-hanging-fruit
> Attachments: key_cache_load_slow.svg
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While dealing with a production issue today where some 3.0.17 nodes had close 
> to ~8k sstables on disk due to excessive write pressure, we had a few nodes 
> crash due to OOM and then they took close to 17 minutes to load the key cache 
> and recover. This excessive key cache load significantly increased the 
> duration of the outage (to mitigate we just removed the saved key cache 
> files). For example, here is one instance taking 17 minutes to load 10k keys, 
> or about 10 keys per second (which is ... very slow):
> {noformat}
> INFO  [pool-3-thread-1] 2018-11-15 21:50:21,885 AutoSavingCache.java:190 - 
> reading saved cache /mnt/data/cassandra/saved_caches/KeyCache-d.db
> INFO  [pool-3-thread-1] 2018-11-15 22:07:16,490 AutoSavingCache.java:166 - 
> Completed loading (1014606 ms; 10103 keys) KeyCache cache
> {noformat}
> I've witnessed similar behavior in the past with large LCS clusters, and 
> indeed it appears that any time the number of sstables is large, KeyCache 
> loading takes a _really_ long time. Today I got a flame graph and I believe 
> that I found the issue and I think it's reasonably easy to fix. From what I 
> can tell the {{KeyCacheSerializer::deserialize}} [method 
> |https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/service/CacheService.java#L445]
>  which is called for every key is linear in the number of sstables due to the 
> [call|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/service/CacheService.java#L459]
>  to {{ColumnFamilyStore::getSSTables}} which ends up calling {{View::select}} 
> [here|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/db/lifecycle/View.java#L139].
>  The {{View::select}} call is linear in the number of sstables and causes a 
> _lot_ of {{HashSet}} 
> [resizing|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/db/lifecycle/View.java#L139]
>  when the number of sstables is much greater than 16 (the default size of the 
> backing {{HashMap}}).
> As we see in the attached flamegraph we spend 50% of our CPU time in these 
> {{getSSTable}} calls, of which 36% is spent adding sstables to the HashSet in 
> {{View::select}} and 17% is spent just iterating the sstables in the first 
> place. A full 16% of CPU time is spent _just resizing the HashMap_. Then 
> another 4% is spent calling {{CacheService::findDesc}} which does [a linear 
> search|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/service/CacheService.java#L475]
>  for the sstable generation.
> I believe that this affects at least Cassandra 3.0.17 and trunk, and could be 
> pretty easily fixed by either caching the getSSTables call or at the very 
> least pre-sizing the {{HashSet}} in {{View::select}} to be the size of the 
> sstables map.
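The cost pattern described above, and the proposed caching fix, can be modelled in a few lines. This is a hypothetical simplification, not the actual CacheService code; only the shape of the work is modelled:

```python
def load_keys(keys, sstables, cache_sstables):
    # Models KeyCacheSerializer::deserialize: each key consults the live
    # sstable set. Without caching, View::select rebuilds (and repeatedly
    # resizes) that set for EVERY key: O(len(keys) * len(sstables)).
    builds = 0
    cached = None
    loaded = []
    for key in keys:
        if cached is None or not cache_sstables:
            cached = set(sstables)  # linear in the number of sstables
            builds += 1
        loaded.append(key)
    return loaded, builds

slow_loaded, slow_builds = load_keys(range(10_000), range(8_000), cache_sstables=False)
fast_loaded, fast_builds = load_keys(range(10_000), range(8_000), cache_sstables=True)
print(slow_builds, fast_builds)  # 10000 1 -- same result, far less work
```

With ~10k keys and ~8k sstables the uncached version does on the order of 80 million set insertions, which matches the flamegraph's 50% of CPU time in getSSTable calls.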



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17121) Allow column_index_size_in_kb to be configurable through nodetool

2021-12-08 Thread Yifan Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifan Cai updated CASSANDRA-17121:
--
Reviewers: Dinesh Joshi, Yifan Cai  (was: Dinesh Joshi)
   Status: Review In Progress  (was: Patch Available)

+1. Thanks for addressing the feedback. 

> Allow column_index_size_in_kb to be configurable through nodetool
> -
>
> Key: CASSANDRA-17121
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17121
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Francisco Guerrero
>Assignee: Francisco Guerrero
>Priority: Normal
>
> Configuring the column_index_size_in_kb setting requires a cassandra.yaml 
> change and bouncing the instance.
> Allowing column_index_size_in_kb to be configurable through nodetool can help 
> in the operational landscape.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17121) Allow column_index_size_in_kb to be configurable through nodetool

2021-12-08 Thread Yifan Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yifan Cai updated CASSANDRA-17121:
--
Status: Patch Available  (was: Open)

> Allow column_index_size_in_kb to be configurable through nodetool
> -
>
> Key: CASSANDRA-17121
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17121
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Tool/nodetool
>Reporter: Francisco Guerrero
>Assignee: Francisco Guerrero
>Priority: Normal
>
> Configuring the column_index_size_in_kb setting requires a cassandra.yaml 
> change and bouncing the instance.
> Allowing column_index_size_in_kb to be configurable through nodetool can help 
> in the operational landscape.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17195) Migrate thresholds for number of keyspaces and tables to guardrails

2021-12-08 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-17195:

Reviewers: Ekaterina Dimitrova
   Status: Review In Progress  (was: Patch Available)

> Migrate thresholds for number of keyspaces and tables to guardrails
> ---
>
> Key: CASSANDRA-17195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17195
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Andres de la Peña
>Assignee: Andres de la Peña
>Priority: Normal
>
> Migrate the existing thresholds for the number of keyspaces and tables:
> {code}
> # table_count_warn_threshold: 150
> # keyspace_count_warn_threshold: 40
> {code}
> to a new guardrail under the guardrails section, for example:
> {code}
> guardrails:
>     keyspaces:
>         warn_threshold: 40
>         abort_threshold: -1
>     tables:
>         warn_threshold: 150
>         abort_threshold: -1
> {code}
> Please note that CASSANDRA-17147 has already added a guardrail for the number 
> of tables, but the previous not-guardrail threshold for warning about the 
> number of tables still exists.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17199) Provide summary of failed SessionInfo's in StreamResultFuture

2021-12-08 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-17199:
-
Change Category: Operability
 Complexity: Low Hanging Fruit
  Fix Version/s: 4.0.x
 Status: Open  (was: Triage Needed)

> Provide summary of failed SessionInfo's in StreamResultFuture
> -
>
> Key: CASSANDRA-17199
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17199
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability/Logging
>Reporter: Brendan Cicchi
>Priority: Normal
> Fix For: 4.0.x
>
>
> Currently, we warn about the presence of one or more failed sessions existing 
> in the final state and then an operator/user traces back through the logs to 
> find any failed streams for troubleshooting.
> {code:java}
> private synchronized void maybeComplete()
> {
> if (finishedAllSessions())
> {
> StreamState finalState = getCurrentState();
> if (finalState.hasFailedSession())
> {
> logger.warn("[Stream #{}] Stream failed", planId);
> tryFailure(new StreamException(finalState, "Stream failed"));
> }
> else
> {
> logger.info("[Stream #{}] All sessions completed", planId);
> trySuccess(finalState);
> }
> }
> } {code}
> It would be helpful to log out a summary of the SessionInfo for each failed 
> session since that should be accessible via the StreamState.
>  
> This can be especially helpful for longer streaming operations like bootstrap 
> where the failure could have been a long time back and all recent streams 
> leading up to the warning actually are successful.
>  
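A summary along the lines proposed could look roughly like this. The session fields used here are illustrative assumptions, not the real SessionInfo/StreamState API:

```python
def summarize_failed_sessions(final_state_sessions):
    # Emit one line per failed session so an operator doesn't have to
    # trawl back through the whole log (session dict shape is assumed).
    lines = []
    for s in final_state_sessions:
        if s["state"] == "FAILED":
            lines.append("peer=%s files_received=%d bytes=%d" %
                         (s["peer"], s["files_received"], s["bytes"]))
    return lines

sessions = [
    {"peer": "10.0.0.1", "state": "COMPLETE", "files_received": 12, "bytes": 4096},
    {"peer": "10.0.0.2", "state": "FAILED", "files_received": 3, "bytes": 1024},
]
print(summarize_failed_sessions(sessions))
```

Logging such a summary next to the existing "Stream failed" warning would put the failed peers in one place, which matters most for long bootstraps where the failure happened far back in the log.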



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-10023) Emit a metric for number of local read and write calls

2021-12-08 Thread Stefan Miklosovic (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-10023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17455959#comment-17455959
 ] 

Stefan Miklosovic commented on CASSANDRA-10023:
---

DTEST 
[https://app.circleci.com/pipelines/github/instaclustr/cassandra/606/workflows/b6038041-00e4-4cbb-9486-5446cccfcc0c/jobs/3056/steps]

DTEST branch: https://github.com/apache/cassandra-dtest/pull/170/files

> Emit a metric for number of local read and write calls
> --
>
> Key: CASSANDRA-10023
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10023
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability/Metrics
>Reporter: Sankalp Kohli
>Assignee: Stefan Miklosovic
>Priority: Low
>  Labels: 4.0-feature-freeze-review-requested, lhf
> Fix For: 4.x
>
> Attachments: 10023-trunk-dtests.txt, 10023-trunk.txt, 
> CASSANDRA-10023.patch
>
>
> Many C* drivers have a feature to be replica aware and choose a coordinator 
> that is a replica. We should add a metric that tells us whether all calls 
> to the coordinator are replica aware.
> We have seen issues where clients think they are replica aware but have 
> forgotten to add the routing key at various places in the code. 
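One way to model such a metric is sketched below. The class and method names are hypothetical, not Cassandra's actual metrics API; it only illustrates counting replica-local vs. non-local coordinator requests:

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of the proposed metric: count coordinator requests that
// were served by a replica-local coordinator vs. not. Names are illustrative.
public class ReplicaAwareMetrics
{
    final LongAdder localRequests = new LongAdder();   // coordinator was a replica
    final LongAdder remoteRequests = new LongAdder();  // coordinator was not a replica

    void record(boolean coordinatorIsReplica)
    {
        (coordinatorIsReplica ? localRequests : remoteRequests).increment();
    }

    // Fraction of requests where the client's routing actually hit a replica;
    // a value well below 1.0 suggests the client is not truly replica aware.
    double replicaAwareRatio()
    {
        long local = localRequests.sum();
        long total = local + remoteRequests.sum();
        return total == 0 ? 1.0 : (double) local / total;
    }

    public static void main(String[] args)
    {
        ReplicaAwareMetrics m = new ReplicaAwareMetrics();
        m.record(true);
        m.record(true);
        m.record(false);
        System.out.println(m.replicaAwareRatio());
    }
}
```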






[jira] [Updated] (CASSANDRA-17199) Provide summary of failed SessionInfo's in StreamResultFuture

2021-12-08 Thread Brendan Cicchi (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brendan Cicchi updated CASSANDRA-17199:
---
Summary: Provide summary of failed SessionInfo's in StreamResultFuture  
(was: Provide summary of failed SessionInfo's in a failed state 
StreamResultFuture)

> Provide summary of failed SessionInfo's in StreamResultFuture
> -
>
> Key: CASSANDRA-17199
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17199
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Observability/Logging
>Reporter: Brendan Cicchi
>Priority: Normal
>






[jira] [Created] (CASSANDRA-17199) Provide summary of failed SessionInfo's in a failed state StreamResultFuture

2021-12-08 Thread Brendan Cicchi (Jira)
Brendan Cicchi created CASSANDRA-17199:
--

 Summary: Provide summary of failed SessionInfo's in a failed state 
StreamResultFuture
 Key: CASSANDRA-17199
 URL: https://issues.apache.org/jira/browse/CASSANDRA-17199
 Project: Cassandra
  Issue Type: Improvement
  Components: Observability/Logging
Reporter: Brendan Cicchi


Currently, we warn when one or more failed sessions exist in the final state, 
and an operator/user must then trace back through the logs to find the failed 
streams for troubleshooting.
{code:java}
private synchronized void maybeComplete()
{
    if (finishedAllSessions())
    {
        StreamState finalState = getCurrentState();
        if (finalState.hasFailedSession())
        {
            logger.warn("[Stream #{}] Stream failed", planId);
            tryFailure(new StreamException(finalState, "Stream failed"));
        }
        else
        {
            logger.info("[Stream #{}] All sessions completed", planId);
            trySuccess(finalState);
        }
    }
}
{code}
It would be helpful to log out a summary of the SessionInfo for each failed 
session since that should be accessible via the StreamState.

 

This can be especially helpful for longer streaming operations like bootstrap, 
where the failure may have occurred long before and all recent streams 
leading up to the warning are actually successful.

 






[jira] [Commented] (CASSANDRA-14898) Key cache loading is very slow when there are many SSTables

2021-12-08 Thread Marcus Eriksson (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455946#comment-17455946
 ] 

Marcus Eriksson commented on CASSANDRA-14898:
-

+1


> Key cache loading is very slow when there are many SSTables
> ---
>
> Key: CASSANDRA-14898
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14898
> Project: Cassandra
>  Issue Type: Bug
>  Components: Legacy/Local Write-Read Paths
> Environment: AWS i3.2xlarge, 4 physical cores (8 threads), 60GB of 
> RAM, loading about 8MB of KeyCache with 10k keys in it.
>Reporter: Joey Lynch
>Assignee: Venkata Harikrishna Nukala
>Priority: Low
>  Labels: Performance, low-hanging-fruit
> Attachments: key_cache_load_slow.svg
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> While dealing with a production issue today where some 3.0.17 nodes had close 
> to ~8k sstables on disk due to excessive write pressure, we had a few nodes 
> crash due to OOM and then they took close to 17 minutes to load the key cache 
> and recover. This excessive key cache load significantly increased the 
> duration of the outage (to mitigate we just removed the saved key cache 
> files). For example, here is one instance taking 17 minutes to load 10k keys, 
> or about 10 keys per second (which is... very slow):
> {noformat}
> INFO  [pool-3-thread-1] 2018-11-15 21:50:21,885 AutoSavingCache.java:190 - 
> reading saved cache /mnt/data/cassandra/saved_caches/KeyCache-d.db
> INFO  [pool-3-thread-1] 2018-11-15 22:07:16,490 AutoSavingCache.java:166 - 
> Completed loading (1014606 ms; 10103 keys) KeyCache cache
> {noformat}
> I've witnessed similar behavior in the past with large LCS clusters, and 
> indeed it appears that any time the number of sstables is large, KeyCache 
> loading takes a _really_ long time. Today I got a flame graph and I believe 
> that I found the issue and I think it's reasonably easy to fix. From what I 
> can tell the {{KeyCacheSerializer::deserialize}} [method 
> |https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/service/CacheService.java#L445]
>  which is called for every key is linear in the number of sstables due to the 
> [call|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/service/CacheService.java#L459]
>  to {{ColumnFamilyStore::getSSTables}} which ends up calling {{View::select}} 
> [here|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/db/lifecycle/View.java#L139].
>  The {{View::select}} call is linear in the number of sstables and causes a 
> _lot_ of {{HashSet}} 
> [resizing|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/db/lifecycle/View.java#L139]
>  when the number of sstables is much greater than 16 (the default size of the 
> backing {{HashMap}}).
> As we see in the attached flamegraph we spend 50% of our CPU time in these 
> {{getSSTable}} calls, of which 36% is spent adding sstables to the HashSet in 
> {{View::select}} and 17% is spent just iterating the sstables in the first 
> place. A full 16% of CPU time is spent _just resizing the HashMap_. Then 
> another 4% is spent calling {{CacheService::findDesc}} which does [a linear 
> search|https://github.com/apache/cassandra/blob/06209037ea56b5a2a49615a99f1542d6ea1b2947/src/java/org/apache/cassandra/service/CacheService.java#L475]
>  for the sstable generation.
> I believe that this affects at least Cassandra 3.0.17 and trunk, and could be 
> pretty easily fixed by either caching the getSSTables call or at the very 
> least pre-sizing the {{HashSet}} in {{View::select}} to be the size of the 
> sstables map.
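The pre-sizing fix suggested above can be sketched as follows, using generic placeholder types rather than Cassandra's actual View/SSTableReader classes; only the sizing idea is the point:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Predicate;

// Illustrative sketch of pre-sizing the HashSet in View::select to the number
// of sstables, so no rehashing occurs while filtering. Placeholder types only.
public class PresizedSelect
{
    static <T> Set<T> select(Collection<T> sstables, Predicate<T> filter)
    {
        // capacity = n / loadFactor + 1 guarantees a single backing allocation
        // for up to n elements at HashMap's default 0.75 load factor.
        Set<T> selected = new HashSet<>((int) (sstables.size() / 0.75f) + 1);
        for (T sstable : sstables)
            if (filter.test(sstable))
                selected.add(sstable);
        return selected;
    }

    public static void main(String[] args)
    {
        List<Integer> sstables = new ArrayList<>();
        for (int i = 0; i < 8000; i++)
            sstables.add(i);
        // With ~8k sstables the default-sized set would resize ~9 times;
        // the pre-sized set never does.
        Set<Integer> even = select(sstables, x -> x % 2 == 0);
        System.out.println(even.size());
    }
}
```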






[jira] [Commented] (CASSANDRA-14898) Key cache loading is very slow when there are many SSTables

2021-12-08 Thread Venkata Harikrishna Nukala (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455938#comment-17455938
 ] 

Venkata Harikrishna Nukala commented on CASSANDRA-14898:


[~jolynch]  [~marcuse]  Addressed review comments, squashed the commits for 
each branch. 







[jira] [Commented] (CASSANDRA-17189) Guardrail for page size

2021-12-08 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455915#comment-17455915
 ] 

Brandon Williams commented on CASSANDRA-17189:
--

Assigned to you, please do let us know if you need help, and good luck!

> Guardrail for page size
> ---
>
> Key: CASSANDRA-17189
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17189
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Andres de la Peña
>Assignee: Bartlomiej
>Priority: Normal
>  Labels: AdventCalendar2021, lhf
> Fix For: 4.1
>
>
> Add a guardrail limiting the query page size, for example:
> {code}
> # Guardrail to warn about or reject page sizes greater than the threshold.
> # Both thresholds default to -1, which disables them.
> page_size:
>     warn_threshold: -1
>     abort_threshold: -1
> {code}
> Initially this can be based on the specified number of rows used as page 
> size, although it would be ideal to also limit the actual size in bytes of 
> the returned pages.
> +Additional information for newcomers:+
> # Add the configuration for the new guardrail on page size in the guardrails 
> section of cassandra.yaml.
> # Add a getPageSize method in GuardrailsConfig returning a Threshold.Config 
> object
> # Implement that method in GuardrailsOptions, which is the default yaml-based 
> implementation of GuardrailsConfig
> # Add a Threshold guardrail named pageSize in Guardrails, using the 
> previously created config
> # Define JMX-friendly getters and setters for the previously created config 
> in GuardrailsMBean
> # Implement the JMX-friendly getters and setters in Guardrails
> # Now that we have the guardrail ready, it’s time to use it. We should search 
> for a place to invoke the Guardrails.pageSize#guard method with the page size 
> that each query is going to use. The DataLimits#forPaging methods look like 
> good candidates for this.
> # Finally, add some tests for the new guardrail. Given that the new guardrail 
> is a Threshold, our new test should probably extend ThresholdTester.
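The warn/abort threshold semantics described above can be modeled with a minimal sketch. The real Guardrails/Threshold classes differ; this only captures the configured behavior, with -1 disabling a threshold:

```java
// Minimal model of a page-size threshold guardrail as configured by
// warn_threshold / abort_threshold in the yaml above. Names are illustrative,
// not Cassandra's actual Guardrails API.
public class PageSizeGuardrail
{
    final long warnThreshold;
    final long abortThreshold;

    PageSizeGuardrail(long warnThreshold, long abortThreshold)
    {
        this.warnThreshold = warnThreshold;
        this.abortThreshold = abortThreshold;
    }

    /** Returns "ABORT", "WARN" or "OK" for a requested page size in rows. */
    String guard(long pageSize)
    {
        // A threshold of -1 (or any non-positive value) disables that check.
        if (abortThreshold > 0 && pageSize > abortThreshold)
            return "ABORT";
        if (warnThreshold > 0 && pageSize > warnThreshold)
            return "WARN";
        return "OK";
    }

    public static void main(String[] args)
    {
        PageSizeGuardrail guardrail = new PageSizeGuardrail(1000, 5000);
        System.out.println(guardrail.guard(500));   // below both thresholds
        System.out.println(guardrail.guard(2000));  // above warn only
        System.out.println(guardrail.guard(10000)); // above abort
    }
}
```

In the real implementation the ABORT case would raise an exception from Guardrails.pageSize#guard rather than return a string; the DataLimits#forPaging call sites mentioned above would supply the page size.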






[jira] [Assigned] (CASSANDRA-17189) Guardrail for page size

2021-12-08 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams reassigned CASSANDRA-17189:


Assignee: Bartlomiej

> Guardrail for page size
> ---
>
> Key: CASSANDRA-17189
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17189
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Andres de la Peña
>Assignee: Bartlomiej
>Priority: Normal
>  Labels: AdventCalendar2021, lhf
> Fix For: 4.1
>
>






[jira] [Commented] (CASSANDRA-17189) Guardrail for page size

2021-12-08 Thread Bartlomiej (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455913#comment-17455913
 ] 

Bartlomiej commented on CASSANDRA-17189:


Hi,

I would like to try to implement this (hope it will not overwhelm me :D).

Thanks!

> Guardrail for page size
> ---
>
> Key: CASSANDRA-17189
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17189
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Andres de la Peña
>Priority: Normal
>  Labels: AdventCalendar2021, lhf
> Fix For: 4.1
>
>






[jira] [Comment Edited] (CASSANDRA-14898) Key cache loading is very slow when there are many SSTables

2021-12-08 Thread Joey Lynch (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455898#comment-17455898
 ] 

Joey Lynch edited comment on CASSANDRA-14898 at 12/8/21, 5:38 PM:
--

+1. Left a few minor suggestions on the 3.0 PR (they apply to the other 
branches as well).

When you're ready please rebase to a single commit for each branch with a 
commit that looks something like
{noformat}
$ git log -1
commit ...
Author: Venkata Harikrishna Nukala 
Date: ...

Fix slow keycache load which blocks startup for tables with many sstables. 

Patch by Venkata Harikrishna Nukala; reviewed by Marcus Eriksson and Joseph 
Lynch for CASSANDRA-14898 {noformat}
The important part is that your name is attributed as author (if you want it to 
be), and that the patch author, reviewers, and ticket are noted in the last line.


was (Author: jolynch):
+1 . Left a few minor suggestions on the 3.0 PR (apply to other branches as 
well), +1.

When you're ready please rebase to a single commit for each branch with a 
commit that looks something like
{noformat}
$ git log -1
commit ...
Author: Venkata Harikrishna Nukala 
Date: ...

Fix slow keycache load which blocks startup for tables with many sstables. 

Patch by Venkata Harikrishna Nukala; reviewed by Marcus Eriksson and Joseph 
Lynch for CASSANDRA-14898 {noformat}
Important part is your name is attributed as author (if you want it to be), and 
we note the patch author, reviewers and ticket in the last line.


[jira] [Commented] (CASSANDRA-14898) Key cache loading is very slow when there are many SSTables

2021-12-08 Thread Joey Lynch (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455898#comment-17455898
 ] 

Joey Lynch commented on CASSANDRA-14898:


+1. Left a few minor suggestions on the 3.0 PR (they apply to the other 
branches as well).

When you're ready please rebase to a single commit for each branch with a 
commit that looks something like
{noformat}
$ git log -1
commit ...
Author: Venkata Harikrishna Nukala 
Date: ...

Fix slow keycache load which blocks startup for tables with many sstables. 

Patch by Venkata Harikrishna Nukala; reviewed by Marcus Eriksson and Joseph 
Lynch for CASSANDRA-14898 {noformat}
The important part is that your name is attributed as author (if you want it to 
be), and that the patch author, reviewers, and ticket are noted in the last line.







[jira] [Updated] (CASSANDRA-11418) Nodetool status should reflect hibernate/replacing states

2021-12-08 Thread Brandon Williams (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Williams updated CASSANDRA-11418:
-
Status: Open  (was: Patch Available)

> Nodetool status should reflect hibernate/replacing states
> -
>
> Key: CASSANDRA-11418
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11418
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Legacy/Observability, Tool/nodetool
>Reporter: Joel Knighton
>Assignee: Brandon Williams
>Priority: Low
> Fix For: 4.x
>
> Attachments: cassandra-11418-trunk
>
>
> Currently, the four options for state in nodetool status are 
> joining/leaving/moving/normal.
> Joining nodes are determined based on bootstrap tokens, leaving nodes are 
> based on leaving endpoints in TokenMetadata, moving nodes are based on moving 
> endpoints in TokenMetadata.
> This means that a node will appear in normal state when going through a 
> bootstrap with flag replace_address, which can be confusing to operators.
> We should add another state for hibernation/replacing to make this visible. 
> This will require a way to get a list of all hibernating endpoints.






[jira] [Comment Edited] (CASSANDRA-17084) startup fails if directories do not exist

2021-12-08 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454888#comment-17454888
 ] 

Brandon Williams edited comment on CASSANDRA-17084 at 12/8/21, 4:51 PM:


PathUtils.tryOnFileStore was already attempting to copy the previous behavior, 
but was using relative paths so they never existed. 
[Branch|https://github.com/driftx/cassandra/tree/CASSANDRA-17084], 
[circle|https://app.circleci.com/pipelines/github/driftx/cassandra?branch=CASSANDRA-17084],
 
[!https://ci-cassandra.apache.org/job/Cassandra-devbranch/1314/badge/icon!|https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-devbranch/detail/Cassandra-devbranch/1314/pipeline]
 with this [dtest 
branch|https://github.com/driftx/cassandra-dtest/tree/CASSANDRA-17084].


was (Author: brandon.williams):
PathUtils.tryOnFileStore was already attempting to copy the previous behavior, 
but was using relative paths so they never existed. 
[Branch|https://github.com/driftx/cassandra/tree/CASSANDRA-17084], 
[circle|https://app.circleci.com/pipelines/github/driftx/cassandra?branch=CASSANDRA-17084],
 
[!https://ci-cassandra.apache.org/job/Cassandra-devbranch/1313/badge/icon!|https://ci-cassandra.apache.org/blue/organizations/jenkins/Cassandra-devbranch/detail/Cassandra-devbranch/1313/pipeline]
 with this [dtest 
branch|https://github.com/driftx/cassandra-dtest/tree/CASSANDRA-17084].

> startup fails if directories do not exist
> -
>
> Key: CASSANDRA-17084
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17084
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Startup and Shutdown
>Reporter: Brandon Williams
>Assignee: Brandon Williams
>Priority: Normal
> Fix For: 4.1
>
>
> Prior to CASSANDRA-16926, commitlog and data directories that were defined 
> but did not exist would be created on startup, but now we throw:
> {noformat}
> Exception (org.apache.cassandra.exceptions.ConfigurationException) 
> encountered during startup: Unable check disk space in 
> 'bin/../data/commitlog'. Perhaps the Cassandra user does not have the 
> necessary permissions
> org.apache.cassandra.exceptions.ConfigurationException: Unable check disk 
> space in 'bin/../data/commitlog'. Perhaps the Cassandra user does not have 
> the necessary permissions
> at 
> org.apache.cassandra.config.DatabaseDescriptor.lambda$tryGetSpace$3(DatabaseDescriptor.java:1188)
> at 
> org.apache.cassandra.io.util.PathUtils.tryOnFileStore(PathUtils.java:639)
> at 
> org.apache.cassandra.io.util.PathUtils.tryGetSpace(PathUtils.java:665)
> at 
> org.apache.cassandra.config.DatabaseDescriptor.tryGetSpace(DatabaseDescriptor.java:1188)
> at 
> org.apache.cassandra.config.DatabaseDescriptor.applySimpleConfig(DatabaseDescriptor.java:553)
> at 
> org.apache.cassandra.config.DatabaseDescriptor.applyAll(DatabaseDescriptor.java:350)
> at 
> org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:178)
> at 
> org.apache.cassandra.config.DatabaseDescriptor.daemonInitialization(DatabaseDescriptor.java:162)
> at 
> org.apache.cassandra.service.CassandraDaemon.applyConfig(CassandraDaemon.java:800)
> at 
> org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:736)
> at 
> org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:871)
> {noformat}
> This was at least convenient for development, but also may be relied upon by 
> some tooling/automation.
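The pre-CASSANDRA-16926 behavior described above (resolve the path, create missing directories, then check free space) can be sketched roughly as follows. This is an illustrative Python sketch with hypothetical names, not the actual PathUtils/DatabaseDescriptor Java code; note how resolving to an absolute path first avoids the relative-path problem ('bin/../data/commitlog' may not exist relative to the process's working directory):

```python
import os
import shutil
import tempfile


def try_get_space(path: str, create_if_missing: bool = True) -> int:
    """Return free bytes for `path`, optionally creating missing directories.

    Resolving to an absolute path first mirrors the fix described above;
    creating missing directories mirrors the pre-16926 startup behavior.
    """
    path = os.path.abspath(path)
    if not os.path.isdir(path):
        if not create_if_missing:
            # Mirrors the ConfigurationException thrown at startup.
            raise RuntimeError(f"Unable to check disk space in '{path}'")
        os.makedirs(path, exist_ok=True)
    return shutil.disk_usage(path).free


with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "data", "commitlog")
    assert try_get_space(target) > 0   # directory was created, space checked
    assert os.path.isdir(target)
```

With `create_if_missing=False` the same call raises instead, which is the post-16926 behavior the reporter hit.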



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17195) Migrate thresholds for number of keyspaces and tables to guardrails

2021-12-08 Thread Jira


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres de la Peña updated CASSANDRA-17195:
--
Test and Documentation Plan: New tests are included
 Status: Patch Available  (was: In Progress)

> Migrate thresholds for number of keyspaces and tables to guardrails
> ---
>
> Key: CASSANDRA-17195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17195
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Andres de la Peña
>Assignee: Andres de la Peña
>Priority: Normal
>
> Migrate the existing thresholds for the number of keyspaces and tables:
> {code}
> # table_count_warn_threshold: 150
> # keyspace_count_warn_threshold: 40
> {code}
> to a new guardrail under the guardrails section, for example:
> {code}
> guardrails:
>   keyspaces:
>     warn_threshold: 40
>     abort_threshold: -1
>   tables:
>     warn_threshold: 150
>     abort_threshold: -1
> {code}
> Please note that CASSANDRA-17147 has already added a guardrail for the number 
> of tables, but the previous non-guardrail threshold for warning about the 
> number of tables still exists.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17195) Migrate thresholds for number of keyspaces and tables to guardrails

2021-12-08 Thread Jira


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455371#comment-17455371
 ] 

Andres de la Peña commented on CASSANDRA-17195:
---

||PR||CI||
|[trunk|https://github.com/apache/cassandra/pull/1356]|[j8|https://app.circleci.com/pipelines/github/adelapena/cassandra/1207/workflows/9f8f6e04-426e-4454-b65f-d6cb84c8cbdf]
 
[j11|https://app.circleci.com/pipelines/github/adelapena/cassandra/1207/workflows/86bfc98a-fa66-40ff-a996-45323b2b8a3f]|

The proposed PR adds a new {{keyspaces}} guardrail. The guardrail for the 
number of tables was already added by CASSANDRA-17147.

As for migrating from [the equivalent 
thresholds|https://github.com/apache/cassandra/blob/cassandra-4.0.1/conf/cassandra.yaml#L1416-L1420]
 that were added by CASSANDRA-16309 in 4.0-beta4, I have left them marked as 
deprecated so we can drop them in the next major.

The PR includes some small changes in {{GuardrailsOptions}} to ensure that the 
error messages thrown by config validation consistently use the flat version of 
the names of those properties in {{cassandra.yaml}}, e.g. 
{{guardrails.tables.warn_threshold}}. These names will be used in errors 
thrown during startup, when setting the properties through JMX, and probably in 
the future when we add configuration through virtual tables.

I have also done some minor refactoring, moving the interfaces 
{{Threshold.Config}} and {{Values.Config}} into the {{GuardrailsConfig}} 
interface, so now we have {{GuardrailsConfig.IntThreshold}} and 
{{GuardrailsConfig.TableProperties}} instead. That decouples the general 
guardrail classes ({{Threshold}} and {{Values}}) from the configuration of 
their particular instances, and we use the {{Guardrails}} entry point to link 
them. This will be useful when we add new guardrails for different data types, 
such as the guardrail for column size, which will probably need methods with a 
different signature than the current ones in {{Threshold.Config}}, such as 
{{getWarnThresholdInKb}}/{{getAbortThresholdInKb}}. With this change the 
{{Threshold}} and {{Values}} classes will be isolated from those new configs.

Finally, I have observed that the current warn thresholds for the number of 
keyspaces and tables log [{{INFO}} 
messages|https://github.com/apache/cassandra/blob/31bea0b0d41e4e81095f0d088094f03db14af490/src/java/org/apache/cassandra/service/StorageService.java#L6180]
 when the threshold is updated. {{TrackWarnings}} [does the 
same|https://github.com/apache/cassandra/blob/31bea0b0d41e4e81095f0d088094f03db14af490/src/java/org/apache/cassandra/service/StorageService.java#L6210],
 although with a slightly different format. This is a nice feature that 
guardrails currently lack, so I'm adding it. I have centralized this type of 
logging in [a single 
method|https://github.com/apache/cassandra/blob/6b8bb45b4121e75396ee31d3cea9006ff5bd47a6/src/java/org/apache/cassandra/config/GuardrailsOptions.java#L144-L152]
 so we can be sure that the format of the logged messages is always the same.
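The warn/abort threshold idea, with updates logged through one centralized helper, can be sketched as follows. This is an illustrative Python sketch with invented names; Cassandra's actual {{Threshold}}/{{GuardrailsOptions}} classes differ, and the convention that -1 disables a bound is taken from the example config above:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("guardrails")


class Threshold:
    """Warn/abort threshold guardrail; -1 disables a bound."""

    def __init__(self, name: str, warn: int, abort: int):
        self.name = name
        self.warn = warn
        self.abort = abort

    def _log_update(self, prop: str, old: int, new: int) -> None:
        # Centralized logging: a single method defines the message
        # format, so updates via config, JMX, etc. all look the same.
        if old != new:
            logger.info("Updated %s.%s from %s to %s", self.name, prop, old, new)

    def set_warn_threshold(self, new: int) -> None:
        self._log_update("warn_threshold", self.warn, new)
        self.warn = new

    def guard(self, value: int) -> str:
        """Classify a value against the configured bounds."""
        if self.abort != -1 and value > self.abort:
            return "abort"
        if self.warn != -1 and value > self.warn:
            return "warn"
        return "ok"


keyspaces = Threshold("guardrails.keyspaces", warn=40, abort=-1)
assert keyspaces.guard(39) == "ok"
assert keyspaces.guard(41) == "warn"   # abort is disabled (-1)
keyspaces.set_warn_threshold(50)       # logged through the single helper
assert keyspaces.guard(41) == "ok"
```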

> Migrate thresholds for number of keyspaces and tables to guardrails
> ---
>
> Key: CASSANDRA-17195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17195
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Andres de la Peña
>Assignee: Andres de la Peña
>Priority: Normal
>
> Migrate the existing thresholds for the number of keyspaces and tables:
> {code}
> # table_count_warn_threshold: 150
> # keyspace_count_warn_threshold: 40
> {code}
> to a new guardrail under the guardrails section, for example:
> {code}
> guardrails:
>   keyspaces:
>     warn_threshold: 40
>     abort_threshold: -1
>   tables:
>     warn_threshold: 150
>     abort_threshold: -1
> {code}
> Please note that CASSANDRA-17147 has already added a guardrail for the number 
> of tables, but the previous non-guardrail threshold for warning about the 
> number of tables still exists.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17195) Migrate thresholds for number of keyspaces and tables to guardrails

2021-12-08 Thread Jira


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andres de la Peña updated CASSANDRA-17195:
--
Change Category: Semantic
 Complexity: Normal
 Status: Open  (was: Triage Needed)

> Migrate thresholds for number of keyspaces and tables to guardrails
> ---
>
> Key: CASSANDRA-17195
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17195
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Andres de la Peña
>Assignee: Andres de la Peña
>Priority: Normal
>
> Migrate the existing thresholds for the number of keyspaces and tables:
> {code}
> # table_count_warn_threshold: 150
> # keyspace_count_warn_threshold: 40
> {code}
> to a new guardrail under the guardrails section, for example:
> {code}
> guardrails:
>   keyspaces:
>     warn_threshold: 40
>     abort_threshold: -1
>   tables:
>     warn_threshold: 150
>     abort_threshold: -1
> {code}
> Please note that CASSANDRA-17147 has already added a guardrail for the number 
> of tables, but the previous non-guardrail threshold for warning about the 
> number of tables still exists.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17198) Allow to filter using LIKE predicates

2021-12-08 Thread Benjamin Lerer (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-17198:
---
Change Category: Semantic
 Complexity: Low Hanging Fruit
  Fix Version/s: 4.x
 Mentor: Benjamin Lerer

> Allow to filter using LIKE predicates
> -
>
> Key: CASSANDRA-17198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17198
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Syntax
>Reporter: Benjamin Lerer
>Priority: Normal
>  Labels: AdventCalendar2021, lhf
> Fix For: 4.x
>
>
> {{LIKE}} predicates can only be used with SASI indices. In several 
> use cases (e.g. querying the {{settings}} virtual table) it makes sense to 
> support them for filtering.
> +Additional information for newcomers:+
> There are some checks in the {{StatementRestrictions}} constructor and on 
> {{LikeRestriction}} that need to be removed to allow filtering using LIKE 
> on clustering and regular columns.
> For filtering on partition columns, the {{needFiltering}} methods in 
> {{PartitionKeySingleRestrictionSet}} will need to be modified to return true 
> when LIKE predicates are used.
> The unit tests should go in {{SelectTest}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-17198) Allow to filter using LIKE predicates

2021-12-08 Thread Benjamin Lerer (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-17198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Lerer updated CASSANDRA-17198:
---
Labels: AdventCalendar2021 lhf  (was: )

> Allow to filter using LIKE predicates
> -
>
> Key: CASSANDRA-17198
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17198
> Project: Cassandra
>  Issue Type: Improvement
>  Components: CQL/Syntax
>Reporter: Benjamin Lerer
>Priority: Normal
>  Labels: AdventCalendar2021, lhf
>
> {{LIKE}} predicates can only be used with SASI indices. In several 
> use cases (e.g. querying the {{settings}} virtual table) it makes sense to 
> support them for filtering.
> +Additional information for newcomers:+
> There are some checks in the {{StatementRestrictions}} constructor and on 
> {{LikeRestriction}} that need to be removed to allow filtering using LIKE 
> on clustering and regular columns.
> For filtering on partition columns, the {{needFiltering}} methods in 
> {{PartitionKeySingleRestrictionSet}} will need to be modified to return true 
> when LIKE predicates are used.
> The unit tests should go in {{SelectTest}}.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-17198) Allow to filter using LIKE predicates

2021-12-08 Thread Benjamin Lerer (Jira)
Benjamin Lerer created CASSANDRA-17198:
--

 Summary: Allow to filter using LIKE predicates
 Key: CASSANDRA-17198
 URL: https://issues.apache.org/jira/browse/CASSANDRA-17198
 Project: Cassandra
  Issue Type: Improvement
  Components: CQL/Syntax
Reporter: Benjamin Lerer


{{LIKE}} predicates can only be used with SASI indices. In several use cases 
(e.g. querying the {{settings}} virtual table) it makes sense to support them 
for filtering.

+Additional information for newcomers:+

There are some checks in the {{StatementRestrictions}} constructor and on 
{{LikeRestriction}} that need to be removed to allow filtering using LIKE 
on clustering and regular columns.
For filtering on partition columns, the {{needFiltering}} methods in 
{{PartitionKeySingleRestrictionSet}} will need to be modified to return true 
when LIKE predicates are used.
The unit tests should go in {{SelectTest}}.
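To illustrate the filtering semantics being requested, here is a small Python sketch of how a CQL-style LIKE pattern (with '%' as the multi-character wildcard, as in SASI: 'abc%' for prefix, '%abc' for suffix, '%abc%' for contains) can be applied as a per-row filter. It is an illustration only, not Cassandra's {{LikeRestriction}} implementation:

```python
import re


def like_to_predicate(pattern: str):
    """Compile a CQL-style LIKE pattern into a row-filtering predicate.

    Literal segments between '%' wildcards are regex-escaped, then the
    wildcards become '.*'; anchors make the match cover the whole value.
    """
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("%")) + "$"
    compiled = re.compile(regex)
    return lambda value: compiled.match(value) is not None


rows = ["settings", "setup", "offset"]
prefix = like_to_predicate("set%")      # prefix match
contains = like_to_predicate("%set%")   # contains match
assert [r for r in rows if prefix(r)] == ["settings", "setup"]
assert [r for r in rows if contains(r)] == ["settings", "setup", "offset"]
```

In Cassandra itself this filtering step would run row-by-row during {{ALLOW FILTERING}} evaluation rather than via regexes, but the observable semantics are the same.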



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-15234) Standardise config and JVM parameters

2021-12-08 Thread Ekaterina Dimitrova (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekaterina Dimitrova updated CASSANDRA-15234:

Test and Documentation Plan: 
[trunk|https://github.com/apache/cassandra/pull/1350] | 
[dtest|https://github.com/apache/cassandra-dtest/pull/169] | 
[ccm|https://github.com/ekaterinadimitrova2/ccm/pull/1] 

The existing tests plus new unit tests added. Also, dtests exercise backward 
compatibility and verify that ccm supports the same behavior as Cassandra with 
regard to configuration parameter loading. 

  was:
[trunk|https://github.com/apache/cassandra/compare/trunk...ekaterinadimitrova2:CASSANDRA-15234-take2?expand=1]
 | [dtest|https://github.com/apache/cassandra-dtest/pull/169] | 
[ccm|https://github.com/ekaterinadimitrova2/ccm/pull/1] 

The existing tests plus new unit tests added. Also, dtests exercise the 
backward compatibility and test that way that ccm supports the same behavior as 
Cassandra as regards to configuration parameters loading. 


> Standardise config and JVM parameters
> -
>
> Key: CASSANDRA-15234
> URL: https://issues.apache.org/jira/browse/CASSANDRA-15234
> Project: Cassandra
>  Issue Type: Bug
>  Components: Local/Config
>Reporter: Benedict Elliott Smith
>Assignee: Ekaterina Dimitrova
>Priority: Normal
> Fix For: 5.x
>
> Attachments: CASSANDRA-15234-3-DTests-JAVA8.txt
>
>
> We have a bunch of inconsistent names and config patterns in the codebase, 
> both from the yams and JVM properties.  It would be nice to standardise the 
> naming (such as otc_ vs internode_) as well as the provision of values with 
> units - while maintaining perpetual backwards compatibility with the old 
> parameter names, of course.
> For temporal units, I would propose parsing strings with suffixes of:
> {code}
> u|micros(econds?)?
> ms|millis(econds?)?
> s(econds?)?
> m(inutes?)?
> h(ours?)?
> d(ays?)?
> mo(nths?)?
> {code}
> For rate units, I would propose parsing any of the standard {{B/s, KiB/s, 
> MiB/s, GiB/s, TiB/s}}.
> Perhaps, to avoid ambiguity, we could decline to accept bauds ({{bs, Mbps}}) 
> or powers of 1000 such as {{KB/s}}, given these are regularly used with 
> either their old or new definitions (e.g. {{KiB/s}}), or we could support 
> them and simply log the value in bytes/s.
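The proposed temporal-suffix grammar above can be sketched as a small parser. This is an illustrative Python sketch, not Cassandra's config parser; the 30-day month multiplier is an assumption the ticket does not specify:

```python
import re

# Suffix pattern -> multiplier to seconds, per the proposed grammar.
SUFFIXES = [
    (r"mo(nths?)?", 30 * 86400),          # assumed 30-day month
    (r"ms|millis(econds?)?", 1e-3),
    (r"u|micros(econds?)?", 1e-6),
    (r"s(econds?)?", 1),
    (r"m(inutes?)?", 60),
    (r"h(ours?)?", 3600),
    (r"d(ays?)?", 86400),
]


def parse_duration_seconds(text: str) -> float:
    """Parse e.g. '10ms', '2 hours', '1mo' into seconds.

    re.fullmatch anchors each suffix pattern, so short suffixes like
    'm' (minutes) cannot partially swallow 'mo' (months).
    """
    for suffix, multiplier in SUFFIXES:
        m = re.fullmatch(rf"\s*(\d+)\s*({suffix})\s*", text)
        if m:
            return int(m.group(1)) * multiplier
    raise ValueError(f"Unparseable duration: {text!r}")


assert parse_duration_seconds("2 hours") == 7200
assert parse_duration_seconds("1d") == 86400
assert abs(parse_duration_seconds("10ms") - 0.01) < 1e-9
```

A rate parser for {{KiB/s, MiB/s, ...}} would follow the same shape, with powers-of-1024 multipliers.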



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17140) Broken test_rolling_upgrade - upgrade_tests.upgrade_through_versions_test.TestUpgrade_indev_3_0_x_To_indev_4_0_x

2021-12-08 Thread Ekaterina Dimitrova (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455267#comment-17455267
 ] 

Ekaterina Dimitrova commented on CASSANDRA-17140:
-

[~bereng], we have two runs on the same commit showing the same failure. I hope 
you saw the links I posted.

I saw that the good commit in your bisect was present only on newer branches. 
Can you please run the tests on the three commits I pointed to? Thank you in 
advance.

> Broken test_rolling_upgrade - 
> upgrade_tests.upgrade_through_versions_test.TestUpgrade_indev_3_0_x_To_indev_4_0_x
> 
>
> Key: CASSANDRA-17140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17140
> Project: Cassandra
>  Issue Type: Bug
>  Components: CI
>Reporter: Yifan Cai
>Priority: Normal
> Fix For: 4.0.x
>
>
> The tests "test_rolling_upgrade" fail with the below error. 
>  
> [https://app.circleci.com/pipelines/github/yifan-c/cassandra/279/workflows/6340cd42-0b27-42c2-8418-9f8b56c57bea/jobs/1990]
>  
> I am able to always reproduce it by running the test locally too. 
> {{$ pytest --execute-upgrade-tests-only --upgrade-target-version-only 
> --upgrade-version-selection all --cassandra-version=4.0 
> upgrade_tests/upgrade_through_versions_test.py::TestUpgrade_indev_3_11_x_To_indev_4_0_x::test_rolling_upgrade}}
>  
> {code:java}
> self = 
>   object at 0x7ffba4242fd0>
> subprocs = [, 
> ]
> def _check_on_subprocs(self, subprocs):
> """
> Check on given subprocesses.
> 
> If any are not alive, we'll go ahead and terminate any remaining 
> alive subprocesses since this test is going to fail.
> """
> subproc_statuses = [s.is_alive() for s in subprocs]
> if not all(subproc_statuses):
> message = "A subprocess has terminated early. Subprocess 
> statuses: "
> for s in subprocs:
> message += "{name} (is_alive: {aliveness}), 
> ".format(name=s.name, aliveness=s.is_alive())
> message += "attempting to terminate remaining subprocesses now."
> self._terminate_subprocs()
> >   raise RuntimeError(message)
> E   RuntimeError: A subprocess has terminated early. Subprocess 
> statuses: Process-1 (is_alive: True), Process-2 (is_alive: False), attempting 
> to terminate remaining subprocesses now.{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-16763) Create Cassandra documentation content for new website

2021-12-08 Thread Michael Semb Wever (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-16763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455250#comment-17455250
 ] 

Michael Semb Wever commented on CASSANDRA-16763:


Comments in PR #1128.

> Create Cassandra documentation content for new website
> --
>
> Key: CASSANDRA-16763
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16763
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation/Website
>Reporter: Anthony Grasso
>Assignee: Michael Semb Wever
>Priority: High
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We need to create the content (asciidoc) to render the Cassandra documentation 
> using Antora. This work can commence once the following has happened:
>  * Website and documentation proof of concept is done - CASSANDRA-16029
>  * Website design and concept is done - CASSANDRA-16115
>  * Website and document tooling is done - CASSANDRA-16066 
>  * Website UI components are done - CASSANDRA-16762



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-16763) Create Cassandra documentation content for new website

2021-12-08 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-16763:
---
Status: Changes Suggested  (was: Review In Progress)

> Create Cassandra documentation content for new website
> --
>
> Key: CASSANDRA-16763
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16763
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation/Website
>Reporter: Anthony Grasso
>Assignee: Michael Semb Wever
>Priority: High
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We need to create the content (asciidoc) to render the Cassandra documentation 
> using Antora. This work can commence once the following has happened:
>  * Website and documentation proof of concept is done - CASSANDRA-16029
>  * Website design and concept is done - CASSANDRA-16115
>  * Website and document tooling is done - CASSANDRA-16066 
>  * Website UI components are done - CASSANDRA-16762



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-16763) Create Cassandra documentation content for new website

2021-12-08 Thread Michael Semb Wever (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-16763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Semb Wever updated CASSANDRA-16763:
---
Reviewers: Michael Semb Wever
   Status: Review In Progress  (was: Patch Available)

> Create Cassandra documentation content for new website
> --
>
> Key: CASSANDRA-16763
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16763
> Project: Cassandra
>  Issue Type: Task
>  Components: Documentation/Website
>Reporter: Anthony Grasso
>Assignee: Michael Semb Wever
>Priority: High
>
> We need to create the content (asciidoc) to render the Cassandra documentation 
> using Antora. This work can commence once the following has happened:
>  * Website and documentation proof of concept is done - CASSANDRA-16029
>  * Website design and concept is done - CASSANDRA-16115
>  * Website and document tooling is done - CASSANDRA-16066 
>  * Website UI components are done - CASSANDRA-16762



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17170) Found Third Party Software vulnerabilities/security issue in 3.11.11

2021-12-08 Thread Brandon Williams (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455232#comment-17455232
 ] 

Brandon Williams commented on CASSANDRA-17170:
--

See the linked ticket for this.

> Found Third Party Software  vulnerabilities/security issue in 3.11.11
> -
>
> Key: CASSANDRA-17170
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17170
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Amol Nawale
>Priority: Normal
>
> We have found the below vulnerabilities in Cassandra 3.11.11.zip during a 
> security scan.
> Could you please help us resolve them?
> | [sonatype-2021-1175] [logback-core] [1.1.3]|
> | [CVE-2021-37137] [netty-all] [4.0.44.Final]|
> | [sonatype-2021-1425] [thrift] [0.9.3]|
> | [sonatype-2020-1031] [netty-all] [4.0.44.Final]|
> | [CVE-2020-13949] [libthrift] [0.9.2]|
> | [sonatype-2020-0029] [netty-all] [4.0.44.Final]|
> | [CVE-2020-7238] [netty-all] [4.0.44.Final]|
> | [CVE-2019-20444] [netty-all] [4.0.44.Final]|
> | [CVE-2019-20445] [netty-all] [4.0.44.Final]|
> | [CVE-2017-18640] [snakeyaml] [1.11]|
> | [CVE-2019-16869] [netty-all] [4.0.44.Final]|
> | [CVE-2019-0205] [thrift] [0.9.3]|
> | [CVE-2018-1320] [libthrift] [0.9.2]|
> | [CVE-2017-5929] [logback-classic] [1.1.3]|
> |*[sonatype-2017-0312] [jackson-databind] [2.9.10.8]*|
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Comment Edited] (CASSANDRA-17044) Refactor schema management to allow for schema source pluggability

2021-12-08 Thread Jacek Lewandowski (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455208#comment-17455208
 ] 

Jacek Lewandowski edited comment on CASSANDRA-17044 at 12/8/21, 12:24 PM:
--

||j11||
|[(!)|https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/149/workflows/e7293b1d-b546-4019-9b35-04e0be49b5be/jobs/828]|

There are two failures; one is the flaky BootstrapTest, but the error message 
is related to schema propagation, so I'll double-check.


was (Author: jlewandowski):
||j11||
|[(!)|https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/149/workflows/e7293b1d-b546-4019-9b35-04e0be49b5be/jobs/828]

> Refactor schema management to allow for schema source pluggability
> --
>
> Key: CASSANDRA-17044
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17044
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Cluster/Schema
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
>Priority: Normal
> Fix For: 4.1
>
>
> The idea is to decompose `Schema` into separate entities responsible for 
> different things. In particular, extract what is related to schema storage and 
> synchronization into a separate class, so that it is possible to create an 
> extension point there and store the schema in a different way than the 
> `system_schema` keyspace, for example in etcd. 
> This would also simplify the logic, reduce the number of special cases, 
> make everything more testable, and keep the logic of internal classes 
> encapsulated.
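The decomposition described above can be sketched as an interface between the schema logic and its storage backend. This is an illustrative Python sketch with invented names, not the actual refactoring; it only shows the shape of the extension point:

```python
from abc import ABC, abstractmethod


class SchemaStorage(ABC):
    """Extension point: where the schema lives (system_schema, etcd, ...)."""

    @abstractmethod
    def load(self) -> dict: ...

    @abstractmethod
    def save(self, schema: dict) -> None: ...


class InMemorySchemaStorage(SchemaStorage):
    """Trivial backend, useful for tests."""

    def __init__(self):
        self._schema = {}

    def load(self) -> dict:
        return dict(self._schema)

    def save(self, schema: dict) -> None:
        self._schema = dict(schema)


class Schema:
    """Core schema logic depends only on the storage interface,
    not on any particular backend."""

    def __init__(self, storage: SchemaStorage):
        self.storage = storage

    def add_keyspace(self, name: str) -> None:
        schema = self.storage.load()
        schema[name] = {"tables": {}}
        self.storage.save(schema)


schema = Schema(InMemorySchemaStorage())
schema.add_keyspace("test")
assert "test" in schema.storage.load()
```

Swapping in an etcd-backed {{SchemaStorage}} would then require no change to the core {{Schema}} logic, which is the testability and encapsulation win the ticket describes.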



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Commented] (CASSANDRA-17044) Refactor schema management to allow for schema source pluggability

2021-12-08 Thread Jacek Lewandowski (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-17044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17455208#comment-17455208
 ] 

Jacek Lewandowski commented on CASSANDRA-17044:
---

||j11||
|[(!)|https://app.circleci.com/pipelines/github/jacek-lewandowski/cassandra/149/workflows/e7293b1d-b546-4019-9b35-04e0be49b5be/jobs/828]

> Refactor schema management to allow for schema source pluggability
> --
>
> Key: CASSANDRA-17044
> URL: https://issues.apache.org/jira/browse/CASSANDRA-17044
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Cluster/Schema
>Reporter: Jacek Lewandowski
>Assignee: Jacek Lewandowski
>Priority: Normal
> Fix For: 4.1
>
>
> The idea is to decompose `Schema` into separate entities responsible for 
> different things. In particular, extract what is related to schema storage and 
> synchronization into a separate class, so that it is possible to create an 
> extension point there and store the schema in a different way than the 
> `system_schema` keyspace, for example in etcd. 
> This would also simplify the logic, reduce the number of special cases, 
> make everything more testable, and keep the logic of internal classes 
> encapsulated.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org