[jira] [Updated] (CASSANDRA-14422) Missing dependencies airline and ohc-core-j8 for pom-all
[ https://issues.apache.org/jira/browse/CASSANDRA-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jay Zhuang updated CASSANDRA-14422:
-----------------------------------
    Fix Version/s: 3.11.3
                   3.0.17
                   4.0

> Missing dependencies airline and ohc-core-j8 for pom-all
> --------------------------------------------------------
>
>                 Key: CASSANDRA-14422
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14422
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Build
>            Reporter: Shichao An
>            Assignee: Shichao An
>            Priority: Minor
>             Fix For: 4.0, 3.0.17, 3.11.3
>
>         Attachments: deps-tree-311-no_patch.txt
>
> I found two missing dependencies for pom-all (cassandra-all):
> * airline
> * ohc-core-j8
>
> This doesn't affect the current build scheme, because their jars are hardcoded in
> the lib directory. However, if we depend on cassandra-all in our downstream
> projects to resolve and fetch dependencies (instead of using the official
> tarball), Cassandra will have problems: e.g. airline is required by nodetool,
> and its absence will fail our dtests.
> I will attach the patch shortly

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org
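For downstream projects that resolve Cassandra through Maven rather than the binary tarball, the pre-fix workaround was to declare the two missing artifacts explicitly. A minimal sketch of such a downstream POM fragment; the version numbers are illustrative assumptions, not taken from this thread:

```xml
<!-- Hypothetical downstream pom.xml fragment: pin the two artifacts that
     the published cassandra-all POM was missing. Versions are illustrative. -->
<dependencies>
  <dependency>
    <groupId>org.apache.cassandra</groupId>
    <artifactId>cassandra-all</artifactId>
    <version>3.11.2</version>
  </dependency>
  <!-- airline: command-line parsing library used by nodetool -->
  <dependency>
    <groupId>io.airlift</groupId>
    <artifactId>airline</artifactId>
    <version>0.6</version>
  </dependency>
  <!-- ohc-core-j8: Java 8 backend for the OHC off-heap cache -->
  <dependency>
    <groupId>org.caffinitas.ohc</groupId>
    <artifactId>ohc-core-j8</artifactId>
    <version>0.4.4</version>
  </dependency>
</dependencies>
```

Once the fix landed, the extra two `<dependency>` entries become unnecessary, since transitive resolution of cassandra-all pulls them in.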
[jira] [Updated] (CASSANDRA-14422) Missing dependencies airline and ohc-core-j8 for pom-all
[ https://issues.apache.org/jira/browse/CASSANDRA-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jay Zhuang updated CASSANDRA-14422:
-----------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Ready to Commit)
[jira] [Commented] (CASSANDRA-14422) Missing dependencies airline and ohc-core-j8 for pom-all
[ https://issues.apache.org/jira/browse/CASSANDRA-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16496110#comment-16496110 ]

Jay Zhuang commented on CASSANDRA-14422:
----------------------------------------
Thanks [~shichao.an] for the fix. The change is committed as [38096da|https://github.com/apache/cassandra/commit/38096da25bd72346628c001d5b310417f8f703cd].
[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk
Merge branch 'cassandra-3.11' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit:  http://git-wip-us.apache.org/repos/asf/cassandra/commit/069e383f
Tree:    http://git-wip-us.apache.org/repos/asf/cassandra/tree/069e383f
Diff:    http://git-wip-us.apache.org/repos/asf/cassandra/diff/069e383f

Branch: refs/heads/trunk
Commit: 069e383f57e3106bbe2e6ddcebeae77da1ea53e1
Parents: 7b38b7e b92d90d
Author: Jay Zhuang
Authored: Wed May 30 22:01:21 2018 -0700
Committer: Jay Zhuang
Committed: Wed May 30 22:02:21 2018 -0700

----------------------------------------------------------------------
 CHANGES.txt | 1 +
 build.xml   | 2 ++
 2 files changed, 3 insertions(+)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/069e383f/CHANGES.txt

diff --cc CHANGES.txt
index 111f644,2d4ef25..b94fc62
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -260,7 -16,9 +260,8 @@@
  * RateBasedBackPressure unnecessarily invokes a lock on the Guava RateLimiter (CASSANDRA-14163)
  * Fix wildcard GROUP BY queries (CASSANDRA-14209)
 Merged from 3.0:
+ * Add Missing dependencies in pom-all (CASSANDRA-14422)
  * Cleanup StartupClusterConnectivityChecker and PING Verb (CASSANDRA-14447)
- * Fix deprecated repair error notifications from 3.x clusters to legacy JMX clients (CASSANDRA-13121)
  * Cassandra not starting when using enhanced startup scripts in windows (CASSANDRA-14418)
  * Fix progress stats and units in compactionstats (CASSANDRA-12244)
  * Better handle missing partition columns in system_schema.columns (CASSANDRA-14379)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/069e383f/build.xml
(build.xml hunk omitted: the XML tag content was stripped by the mail archive)
[2/6] cassandra git commit: Add missing dependencies in pom-all
Add missing dependencies in pom-all

patch by Shichao An; reviewed by Jay Zhuang for CASSANDRA-14422

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit:  http://git-wip-us.apache.org/repos/asf/cassandra/commit/38096da2
Tree:    http://git-wip-us.apache.org/repos/asf/cassandra/tree/38096da2
Diff:    http://git-wip-us.apache.org/repos/asf/cassandra/diff/38096da2

Branch: refs/heads/cassandra-3.11
Commit: 38096da25bd72346628c001d5b310417f8f703cd
Parents: 06b3521
Author: Shichao An
Authored: Thu Apr 26 17:35:39 2018 -0700
Committer: Jay Zhuang
Committed: Wed May 30 21:55:53 2018 -0700

----------------------------------------------------------------------
 CHANGES.txt |  1 +
 build.xml   | 96 +---
 2 files changed, 51 insertions(+), 46 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/38096da2/CHANGES.txt

diff --git a/CHANGES.txt b/CHANGES.txt
index 1293bd4..16fe6d1 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.17
+ * Add Missing dependencies in pom-all (CASSANDRA-14422)
  * Cleanup StartupClusterConnectivityChecker and PING Verb (CASSANDRA-14447)
  * Fix deprecated repair error notifications from 3.x clusters to legacy JMX clients (CASSANDRA-13121)
  * Cassandra not starting when using enhanced startup scripts in windows (CASSANDRA-14418)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/38096da2/build.xml

diff --git a/build.xml b/build.xml
index 7bab97c..3fc64fb 100644
--- a/build.xml
+++ b/build.xml
(build.xml hunks omitted: the XML tag content was stripped by the mail archive)
[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11
Merge branch 'cassandra-3.0' into cassandra-3.11

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit:  http://git-wip-us.apache.org/repos/asf/cassandra/commit/b92d90dc
Tree:    http://git-wip-us.apache.org/repos/asf/cassandra/tree/b92d90dc
Diff:    http://git-wip-us.apache.org/repos/asf/cassandra/diff/b92d90dc

Branch: refs/heads/trunk
Commit: b92d90dc14ef978fbfa9e09520a641f6669cf631
Parents: b8cbdde 38096da
Author: Jay Zhuang
Authored: Wed May 30 21:59:22 2018 -0700
Committer: Jay Zhuang
Committed: Wed May 30 22:00:51 2018 -0700

----------------------------------------------------------------------
 CHANGES.txt |  1 +
 build.xml   | 84 +---
 2 files changed, 44 insertions(+), 41 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b92d90dc/CHANGES.txt

diff --cc CHANGES.txt
index 3879a55,16fe6d1..2d4ef25
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,21 -1,5 +1,22 @@@
-3.0.17
+3.11.3
+ * Reduce nodetool GC thread count (CASSANDRA-14475)
+ * Fix New SASI view creation during Index Redistribution (CASSANDRA-14055)
+ * Remove string formatting lines from BufferPool hot path (CASSANDRA-14416)
+ * Update metrics to 3.1.5 (CASSANDRA-12924)
+ * Detect OpenJDK jvm type and architecture (CASSANDRA-12793)
+ * Don't use guava collections in the non-system keyspace jmx attributes (CASSANDRA-12271)
+ * Allow existing nodes to use all peers in shadow round (CASSANDRA-13851)
+ * Fix cqlsh to read connection.ssl cqlshrc option again (CASSANDRA-14299)
+ * Downgrade log level to trace for CommitLogSegmentManager (CASSANDRA-14370)
+ * CQL fromJson(null) throws NullPointerException (CASSANDRA-13891)
+ * Serialize empty buffer as empty string for json output format (CASSANDRA-14245)
+ * Allow logging implementation to be interchanged for embedded testing (CASSANDRA-13396)
+ * SASI tokenizer for simple delimiter based entries (CASSANDRA-14247)
+ * Fix Loss of digits when doing CAST from varint/bigint to decimal (CASSANDRA-14170)
+ * RateBasedBackPressure unnecessarily invokes a lock on the Guava RateLimiter (CASSANDRA-14163)
+ * Fix wildcard GROUP BY queries (CASSANDRA-14209)
+Merged from 3.0:
+ * Add Missing dependencies in pom-all (CASSANDRA-14422)
  * Cleanup StartupClusterConnectivityChecker and PING Verb (CASSANDRA-14447)
  * Fix deprecated repair error notifications from 3.x clusters to legacy JMX clients (CASSANDRA-13121)
  * Cassandra not starting when using enhanced startup scripts in windows (CASSANDRA-14418)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b92d90dc/build.xml

diff --cc build.xml
index f8cdf82,3fc64fb..4edfbb1
--- a/build.xml
+++ b/build.xml
(build.xml hunks omitted: the XML tag content was stripped by the mail archive)
[3/6] cassandra git commit: Add missing dependencies in pom-all
Add missing dependencies in pom-all

patch by Shichao An; reviewed by Jay Zhuang for CASSANDRA-14422

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit:  http://git-wip-us.apache.org/repos/asf/cassandra/commit/38096da2
Tree:    http://git-wip-us.apache.org/repos/asf/cassandra/tree/38096da2
Diff:    http://git-wip-us.apache.org/repos/asf/cassandra/diff/38096da2

Branch: refs/heads/trunk
Commit: 38096da25bd72346628c001d5b310417f8f703cd
Parents: 06b3521
Author: Shichao An
Authored: Thu Apr 26 17:35:39 2018 -0700
Committer: Jay Zhuang
Committed: Wed May 30 21:55:53 2018 -0700

----------------------------------------------------------------------
 CHANGES.txt |  1 +
 build.xml   | 96 +---
 2 files changed, 51 insertions(+), 46 deletions(-)
----------------------------------------------------------------------

(CHANGES.txt and build.xml diffs are identical to those in message [2/6]; the build.xml hunks were stripped by the mail archive)
[1/6] cassandra git commit: Add missing dependencies in pom-all
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0  06b3521ac -> 38096da25
  refs/heads/cassandra-3.11 b8cbdde2b -> b92d90dc1
  refs/heads/trunk          7b38b7e54 -> 069e383f5

Add missing dependencies in pom-all

patch by Shichao An; reviewed by Jay Zhuang for CASSANDRA-14422

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit:  http://git-wip-us.apache.org/repos/asf/cassandra/commit/38096da2
Tree:    http://git-wip-us.apache.org/repos/asf/cassandra/tree/38096da2
Diff:    http://git-wip-us.apache.org/repos/asf/cassandra/diff/38096da2

Branch: refs/heads/cassandra-3.0
Commit: 38096da25bd72346628c001d5b310417f8f703cd
Parents: 06b3521
Author: Shichao An
Authored: Thu Apr 26 17:35:39 2018 -0700
Committer: Jay Zhuang
Committed: Wed May 30 21:55:53 2018 -0700

----------------------------------------------------------------------
 CHANGES.txt |  1 +
 build.xml   | 96 +---
 2 files changed, 51 insertions(+), 46 deletions(-)
----------------------------------------------------------------------

(CHANGES.txt and build.xml diffs are identical to those in message [2/6]; the build.xml hunks were stripped by the mail archive)
[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11
Merge branch 'cassandra-3.0' into cassandra-3.11

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit:  http://git-wip-us.apache.org/repos/asf/cassandra/commit/b92d90dc
Tree:    http://git-wip-us.apache.org/repos/asf/cassandra/tree/b92d90dc
Diff:    http://git-wip-us.apache.org/repos/asf/cassandra/diff/b92d90dc

Branch: refs/heads/cassandra-3.11
Commit: b92d90dc14ef978fbfa9e09520a641f6669cf631
Parents: b8cbdde 38096da
Author: Jay Zhuang
Authored: Wed May 30 21:59:22 2018 -0700
Committer: Jay Zhuang
Committed: Wed May 30 22:00:51 2018 -0700

----------------------------------------------------------------------
 CHANGES.txt |  1 +
 build.xml   | 84 +---
 2 files changed, 44 insertions(+), 41 deletions(-)
----------------------------------------------------------------------

(CHANGES.txt and build.xml diffs are identical to those in message [4/6]; the build.xml hunks were stripped by the mail archive)
[jira] [Comment Edited] (CASSANDRA-10540) RangeAwareCompaction
[ https://issues.apache.org/jira/browse/CASSANDRA-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16496077#comment-16496077 ]

Lerh Chuan Low edited comment on CASSANDRA-10540 at 5/31/18 4:34 AM:
---------------------------------------------------------------------

Hi [~krummas],

Sorry for the delay, here are some initial benchmarks. I've only tried it with LCS; this is the stress-spec YAML, a reasonably stressful test:

{code:java}
keyspace: stresscql2
keyspace_definition: |
  CREATE KEYSPACE stresscql2 WITH replication = {'class': 'NetworkTopologyStrategy', 'Waboku': 3, 'Bokusapp': 2};

table: typestest
table_definition: |
  CREATE TABLE typestest (
    name text,
    choice boolean,
    date timestamp,
    address inet,
    dbl double,
    lval bigint,
    ival int,
    uid timeuuid,
    value blob,
    PRIMARY KEY((name,choice), date, address, dbl, lval, ival, uid)
  ) WITH compaction = { 'class':'LeveledCompactionStrategy', 'range_aware_compaction':'true', 'min_range_sstable_size_in_mb':'15' }
    AND comment='A table of many types to test wide rows'

columnspec:
  - name: name
    size: uniform(1..1000)
    population: uniform(1..500M) # the range of unique values to select for the field (default is 100Billion)
  - name: date
    cluster: uniform(20..1000)
  - name: lval
    population: gaussian(1..1000)
    cluster: uniform(1..4)
  - name: value
    size: uniform(100..500)

insert:
  partitions: fixed(1)      # number of unique partitions to update in a single operation
  batchtype: UNLOGGED       # type of batch to use
  select: uniform(1..10)/10 # uniform chance any single generated CQL row will be visited in a partition

queries:
  simple1:
    cql: select * from typestest where name = ? and choice = ? LIMIT 1
    fields: samerow
  range1:
    cql: select name, choice, uid from typestest where name = ? and choice = ? and date >= ? LIMIT 10
    fields: multirow
  simple2:
    cql: select name, choice, uid from typestest where name = ? and choice = ? LIMIT 1
    fields: samerow # samerow or multirow (select arguments from the same row, or randomly from all rows in the partition)
{code}

This is done over a multi-DC cluster in EC2 (400GB SSD), with 3 nodes in one DC and 2 in the other. Stress replicates to both DCs. For inserts:

{code:java}
nohup cassandra-stress user no-warmup profile=stressspec.yaml n=15000 cl=QUORUM ops\(insert=1\) -node file=nodelist.txt -rate threads=100 -log file=insert.log > nohup.txt &
{code}

We have:

RACS:
  Op rate                   : 8,784 op/s  [insert: 8,784 op/s]
  Partition rate            : 8,784 pk/s  [insert: 8,784 pk/s]
  Row rate                  : 8,784 row/s [insert: 8,784 row/s]
  Latency mean              :    5.4 ms [insert: 5.4 ms]
  Latency median            :    4.3 ms [insert: 4.3 ms]
  Latency 95th percentile   :    8.4 ms [insert: 8.4 ms]
  Latency 99th percentile   :   39.2 ms [insert: 39.2 ms]
  Latency 99.9th percentile :   63.3 ms [insert: 63.3 ms]
  Latency max               : 1506.8 ms [insert: 1,506.8 ms]
  Total partitions          : 150,000,000 [insert: 150,000,000]
  Total errors              : 0 [insert: 0]
  Total GC count            : 0
  Total GC memory           : 0.000 KiB
  Total GC time             : 0.0 seconds
  Avg GC time               : NaN ms
  StdDev GC time            : 0.0 ms
  Total operation time      : 04:44:35
  SSTable counts            : 1339 1259 1342 1285 1333

Non-RACS:
  Op rate                   : 8,730 op/s  [insert: 8,730 op/s]
  Partition rate            : 8,730 pk/s  [insert: 8,730 pk/s]
  Row rate                  : 8,730 row/s [insert: 8,730 row/s]
  Latency mean              :    5.4 ms [insert: 5.4 ms]
  Latency median            :    4.3 ms [insert: 4.3 ms]
  Latency 95th percentile   :    8.5 ms [insert: 8.5 ms]
  Latency 99th percentile   :   39.4 ms [insert: 39.4 ms]
  Latency 99.9th percentile :   66.1 ms [insert: 66.1 ms]
  Latency max               :  944.8 ms [insert: 944.8 ms]
  Total partitions          : 150,000,000 [insert: 150,000,000]
  Total errors              : 0 [insert: 0]
  Total GC count            : 0
  Total GC memory           : 0.000 KiB
  Total GC time             : 0.0 seconds
  Avg GC time               : NaN ms
  StdDev GC time            : 0.0 ms
  Total operation time      : 04:46:22
  SSTable counts            : 743 750 747 737 741

For mixed workloads, run after the insert phase so reads are not just served from the OS page cache:

{code:java}
nohup cassandra-stress user no-warmup profile=stressspec.yaml duration=2h cl=QUORUM ops\(insert=10,simple1=10,range1=1\) -node file=nodelist.txt -rate threads=50 -log file=mixed.log > nohup.txt &
{code}

RACS:
  Op rate                   :  415 op/s [insert: 197 op/s, range1: 20 op/s, simple1: 198 op/s]
  Partition rate            :  407 pk/s [insert: 197 pk/s, range1: 12 pk/s, simple1: 198 pk/s]
  Row rate                  :  412 row/s [insert: 197 row/s, range1: 17 row/s, simple1: 198 row/s]
  Latency mean              :  120.4 ms [insert: 2.3 ms, range1: 227.0 ms, simple1: 227.3 ms]
  Latency median            :   38.0 ms [insert: 2.0 ms, range1: 207.0 ms, simple1: 207.4 ms]
  Latency 95th percentile   :  454.6 ms [insert: 3.1 ms, range1: 541.1 ms, simple1: 543.2 ms]
  Latency 99th percentile   :  673.2 ms [insert: 5.1 ms, range1: 739.2 ms, simple1: 741.3 ms]
  Latency 99.9th percentile :  918.0 ms [insert: 43.4 ms, range1: 985.1 ms, simple1: 975.2 ms]
  Latency max               : 1584.4 ms [insert: 766.0 ms, range1: 1,426.1 ms, simple1: 1,584.4 ms]
  Total partitions          : 2,930,512 [i
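The insert-phase numbers above are very close between the two configurations, while the SSTable counts diverge sharply. A quick sketch comparing them, with the values copied from the stress output quoted above:

```python
# Relative comparison of the insert-phase stress results quoted above.
# RACS = range-aware compaction enabled; all figures come from the benchmark output.
racs_op_rate = 8784      # op/s with range_aware_compaction enabled
non_racs_op_rate = 8730  # op/s without

# Throughput delta as a percentage of the non-RACS baseline.
delta_pct = (racs_op_rate - non_racs_op_rate) / non_racs_op_rate * 100
print(f"RACS vs non-RACS insert throughput: {delta_pct:+.2f}%")

# Per-node SSTable counts from the two five-node samples above.
racs_sstables = [1339, 1259, 1342, 1285, 1333]
non_racs_sstables = [743, 750, 747, 737, 741]
ratio = (sum(racs_sstables) / len(racs_sstables)) / (sum(non_racs_sstables) / len(non_racs_sstables))
print(f"RACS holds roughly {ratio:.1f}x the SSTables per node")
```

So write throughput differs by well under one percent, while the RACS nodes carry close to twice as many SSTables, which is consistent with the per-range splitting the strategy performs.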
[jira] [Commented] (CASSANDRA-10540) RangeAwareCompaction
[ https://issues.apache.org/jira/browse/CASSANDRA-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16496077#comment-16496077 ]

Lerh Chuan Low commented on CASSANDRA-10540:
--------------------------------------------

(Original version of the comment edited above; the stress profile, commands, and insert-phase results are identical. The mixed-workload RACS results continue:)

  Total partitions          : 2,930,512 [insert: 1,419,222, range1: 86,021, simple1: 1,425,269]
  Total errors              : 0 [insert: 0, range1: 0, simple1: 0]
  Total GC count            : 0
  Total GC memory           : 0.000 KiB
  Total GC time             : 0.0 seconds
  Avg GC time               : NaN ms
  StdDev GC time            : 0.0 ms
  Total operation time      : 02:00:01

Non-RACS:
  Op rate                   : 382 op/
[jira] [Created] (CASSANDRA-14483) Bootstrap stream fails with Configuration exception merging remote schema
Yongxin Cen created CASSANDRA-14483:
---------------------------------------

Summary: Bootstrap stream fails with Configuration exception merging remote schema
Key: CASSANDRA-14483
URL: https://issues.apache.org/jira/browse/CASSANDRA-14483
Project: Cassandra
Issue Type: Bug
Components: Configuration
Reporter: Yongxin Cen
Fix For: 3.11.2

I configured the yaml file for a seed node, started it up, connected to it with cqlsh, and ran:

create keyspace kong with replication = {'class':'SimpleStrategy','replication_factor':2};
create user kong with password 'xxx';

and created tables in keyspace kong. Then, on another Cassandra node, I pointed to the seed and started the Cassandra service.

Running "nodetool status kong" shows the new node owns ?, while the seed owns 100%. Running "nodetool bootstrap resume" gives:

Resuming bootstrap
[2018-05-31 04:15:57,807] prepare with IP_Seed complete (progress: 0%)
[2018-05-31 04:15:57,921] received file system_auth/roles (progress: 50%)
[2018-05-31 04:15:57,960] session with IP_Seed complete (progress: 50%)
[2018-05-31 04:15:57,965] Stream failed
[2018-05-31 04:15:57,966] Error during bootstrap: Stream failed
[2018-05-31 04:15:57,966] Resume bootstrap complete

At the end of /var/log/cassandra/cassandra.log, there are errors:

ERROR [InternalResponseStage:2] 2018-05-31 00:02:30,559 MigrationTask.java:95 - Configuration exception merging remote schema
org.apache.cassandra.exceptions.ConfigurationException: Column family ID mismatch (found cce68250-63d6-11e8-b887-09f7d93c2253; expected 41679dd0-2804-11e8-a8d4-cd6631f48e81)
at org.apache.cassandra.config.CFMetaData.validateCompatibility(CFMetaData.java:941) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.config.CFMetaData.apply(CFMetaData.java:895) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.config.Schema.updateTable(Schema.java:687) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.schema.SchemaKeyspace.updateKeyspace(SchemaKeyspace.java:1464) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1420) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.schema.SchemaKeyspace.mergeSchema(SchemaKeyspace.java:1389) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.schema.SchemaKeyspace.mergeSchemaAndAnnounceVersion(SchemaKeyspace.java:1366) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.service.MigrationTask$1.response(MigrationTask.java:91) ~[apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53) [apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:66) [apache-cassandra-3.11.2.jar:3.11.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_161]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_161]
at org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$0(NamedThreadFactory.java:81) [apache-cassandra-3.11.2.jar:3.11.2]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_161]
ERROR [main] 2018-05-31 00:02:58,417 StorageService.java:1524 - Error while waiting on bootstrap to complete. Bootstrap will have to be restarted.
java.util.concurrent.ExecutionException: org.apache.cassandra.streaming.StreamException: Stream failed
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299) ~[guava-18.0.jar:na]
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286) ~[guava-18.0.jar:na]
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) ~[guava-18.0.jar:na]
at org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1519) [apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:977) [apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:682) [apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.service.StorageService.initServer(StorageService.java:613) [apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:379) [apache-cassandra-3.11.2.jar:3.11.2]
at org.apache.cassandra.service.CassandraDaemon.activate(Cassan
[jira] [Commented] (CASSANDRA-14422) Missing dependencies airline and ohc-core-j8 for pom-all
[ https://issues.apache.org/jira/browse/CASSANDRA-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495796#comment-16495796 ]

Shichao An commented on CASSANDRA-14422:
----------------------------------------

You can ignore whitespace on GitHub by adding ?w=1 to the URL, for example: [https://github.com/shichao-an/cassandra/commit/c1962e32e0a3bf1dde8973855f108ec1a4aeb5d6?w=1]
[jira] [Commented] (CASSANDRA-14422) Missing dependencies airline and ohc-core-j8 for pom-all
[ https://issues.apache.org/jira/browse/CASSANDRA-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495745#comment-16495745 ]

Michael Shuler commented on CASSANDRA-14422:
--------------------------------------------

Thanks for the info and no-whitespace diff. Looks good to me!
cassandra git commit: Update PyPi URL in doc/README.md and fix link
Repository: cassandra
Updated Branches:
  refs/heads/trunk c5285d21c -> 7b38b7e54

Update PyPi URL in doc/README.md and fix link

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7b38b7e5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7b38b7e5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7b38b7e5

Branch: refs/heads/trunk
Commit: 7b38b7e54d23af3a7ea0af4f870e7e05f2e52824
Parents: c5285d2
Author: Michael Shuler
Authored: Wed May 30 16:02:47 2018 -0500
Committer: Michael Shuler
Committed: Wed May 30 16:02:47 2018 -0500

 doc/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/7b38b7e5/doc/README.md

diff --git a/doc/README.md b/doc/README.md
index 4d7dd05..eeb5a1c 100644
--- a/doc/README.md
+++ b/doc/README.md
@@ -20,7 +20,7 @@ the `source` subdirectory. The documentation uses [sphinx](http://www.sphinx-doc
 and is thus written in [reStructuredText](http://docutils.sourceforge.net/rst.html).
 To build the HTML documentation, you will need to first install sphinx and the
-[sphinx ReadTheDocs theme](the https://pypi.python.org/pypi/sphinx_rtd_theme).
+[sphinx ReadTheDocs theme](https://pypi.org/project/sphinx_rtd_theme/).
 When using Python 3.6 on Windows, use `py -m pip install sphinx sphinx_rtd_theme`, on unix use:
[jira] [Commented] (CASSANDRA-14422) Missing dependencies airline and ohc-core-j8 for pom-all
[ https://issues.apache.org/jira/browse/CASSANDRA-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495651#comment-16495651 ]

Jay Zhuang commented on CASSANDRA-14422:
----------------------------------------

Thanks [~spo...@gmail.com]. That's exactly the problem: the transitive dependencies are not set correctly. Since we publish the JAR, we should set that right. I'd like to commit the change later today if there's no objection.
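For downstream projects, the practical effect of the patch is that cassandra-all's published POM declares the two artifacts so they resolve transitively. A sketch of what the missing entries look like in Maven coordinates; the group IDs below are the usual ones for these libraries, but the versions are illustrative, not taken from the actual patch:

```xml
<!-- Dependencies missing from the published cassandra-all POM.
     Versions are placeholders; use the ones pinned in Cassandra's build.xml. -->
<dependency>
  <groupId>io.airlift</groupId>
  <artifactId>airline</artifactId>
  <version>0.6</version> <!-- illustrative -->
</dependency>
<dependency>
  <groupId>org.caffinitas.ohc</groupId>
  <artifactId>ohc-core-j8</artifactId>
  <version>0.4.4</version> <!-- illustrative -->
</dependency>
```

Until a release containing the fix is out, downstream builds can work around the gap by declaring these two dependencies explicitly in their own POM.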
[jira] [Updated] (CASSANDRA-14422) Missing dependencies airline and ohc-core-j8 for pom-all
[ https://issues.apache.org/jira/browse/CASSANDRA-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jay Zhuang updated CASSANDRA-14422:
-----------------------------------
    Status: Ready to Commit  (was: Patch Available)
[jira] [Updated] (CASSANDRA-14422) Missing dependencies airline and ohc-core-j8 for pom-all
[ https://issues.apache.org/jira/browse/CASSANDRA-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jay Zhuang updated CASSANDRA-14422:
-----------------------------------
    Reviewer: Jay Zhuang
[jira] [Commented] (CASSANDRA-14467) Add option to sanity check tombstones on reads/compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495476#comment-16495476 ]

Ariel Weisberg commented on CASSANDRA-14467:
--------------------------------------------

Created a pull request with review comments: https://github.com/apache/cassandra/pull/228

Also, can you run the tests again? There were some failures, and it's been fairly green for me lately on trunk.

> Add option to sanity check tombstones on reads/compaction
>
> Key: CASSANDRA-14467
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14467
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Marcus Eriksson
> Assignee: Marcus Eriksson
> Priority: Minor
> Fix For: 4.x
>
> We should add an option to do a quick sanity check of tombstones on reads +
> compaction. It should either log the error or throw an exception.
[jira] [Updated] (CASSANDRA-14467) Add option to sanity check tombstones on reads/compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ariel Weisberg updated CASSANDRA-14467:
---------------------------------------
    Reviewer: Ariel Weisberg
[jira] [Created] (CASSANDRA-14482) ZSTD Compressor support in Cassandra
Sushma A Devendrappa created CASSANDRA-14482:
---------------------------------------------

Summary: ZSTD Compressor support in Cassandra
Key: CASSANDRA-14482
URL: https://issues.apache.org/jira/browse/CASSANDRA-14482
Project: Cassandra
Issue Type: Wish
Components: Libraries
Reporter: Sushma A Devendrappa
Fix For: 3.11.x

ZStandard offers a great speed/compression-ratio tradeoff. It is an open-source compression library from Facebook.

More about ZSTD:
[https://github.com/facebook/zstd]
https://code.facebook.com/posts/1658392934479273/smaller-and-faster-data-compression-with-zstandard/
[jira] [Commented] (CASSANDRA-14442) Let nodetool import take a list of directories
[ https://issues.apache.org/jira/browse/CASSANDRA-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495296#comment-16495296 ]

Marcus Eriksson commented on CASSANDRA-14442:
---------------------------------------------

Just pushed a new commit which removes the jbod-counting, because it was kind of broken: the sstable would get moved to the best directory, but then added to the wrong compaction strategy. It would work, but it is hard to reason about (I noticed while rebasing CASSANDRA-13425). The best solution to this is probably CASSANDRA-14327.

> Let nodetool import take a list of directories
>
> Key: CASSANDRA-14442
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14442
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Marcus Eriksson
> Assignee: Marcus Eriksson
> Priority: Major
> Fix For: 4.x
>
> It should be possible to load sstables from several input directories when
> running nodetool import. Directories that failed to import should be output.
[jira] [Comment Edited] (CASSANDRA-14480) Digest mismatch requires all replicas to be responsive
[ https://issues.apache.org/jira/browse/CASSANDRA-14480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495254#comment-16495254 ]

Christian Spriegel edited comment on CASSANDRA-14480 at 5/30/18 2:55 PM:
-------------------------------------------------------------------------

I did some more testing and tried the following change in StorageProxy.SinglePartitionReadLifecycle.awaitResultsAndRetryOnDigestMismatch():

{code:java}
repairHandler = new ReadCallback(resolver,
                                 ConsistencyLevel.ALL,
                                 consistency.blockFor(keyspace), // was: executor.getContactedReplicas().size()
                                 command,
                                 keyspace,
                                 executor.handler.endpoints);
{code}

This fixed the issue in my test scenario. But it causes the read-repair to only repair 2 out of my 3 replicas in cases where all 3 replicas would be available.

I could imagine an alternative solution where maybeAwaitFullDataRead() would wait for 3 replicas, but in case of an RTE it could check if 2 responded and treat that as a successful read.

> Digest mismatch requires all replicas to be responsive
>
> Key: CASSANDRA-14480
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14480
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Reporter: Christian Spriegel
> Priority: Major
> Attachments: Reader.java, Writer.java, schema_14480.cql
>
> I ran across a scenario where a digest mismatch causes a read-repair that
> requires all up nodes to be able to respond. If one of these nodes is not
> responding, then the read-repair is being reported to the client as
> ReadTimeoutException.
>
> My expectation would be that a CL=QUORUM will always succeed as long as 2 nodes
> are responding. But unfortunately the third node being "up" in the ring, but
> not being able to respond, does lead to a RTE.
>
> I came up with a scenario that reproduces the issue:
> # set up a 3 node cluster using ccm
> # increase the phi_convict_threshold to 16, so that nodes are permanently
> reported as up
> # create attached schema
> # run attached reader&writer (which only connects to node1&2). This should
> already produce digest mismatches
> # do a "ccm node3 pause"
> # The reader will report a read-timeout with consistency QUORUM (2 responses
> were required but only 1 replica responded). Within the
> DigestMismatchException catch-block it can be seen that the repairHandler is
> waiting for 3 responses, even though the exception says that 2 responses are
> required.
[jira] [Commented] (CASSANDRA-14481) Using nodetool status after enabling Cassandra internal auth for JMX access fails with currently documented permissions
[ https://issues.apache.org/jira/browse/CASSANDRA-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495270#comment-16495270 ]

Valerie Parham-Thompson commented on CASSANDRA-14481:
-----------------------------------------------------

This is my first "PR" to this project. I read the contributions document, but please let me know if I've missed any required tags or otherwise need to edit my submission. Thank you very much.
[jira] [Created] (CASSANDRA-14481) Using nodetool status after enabling Cassandra internal auth for JMX access fails with currently documented permissions
Valerie Parham-Thompson created CASSANDRA-14481: --- Summary: Using nodetool status after enabling Cassandra internal auth for JMX access fails with currently documented permissions Key: CASSANDRA-14481 URL: https://issues.apache.org/jira/browse/CASSANDRA-14481 Project: Cassandra Issue Type: Bug Components: Documentation and Website Environment: Apache Cassandra 3.11.2 Centos 6.9 Reporter: Valerie Parham-Thompson Using the documentation here: [https://cassandra.apache.org/doc/latest/operating/security.html#cassandra-integrated-auth] Running `nodetool status` on a cluster fails as follows: {noformat} error: Access Denied -- StackTrace -- java.lang.SecurityException: Access Denied at org.apache.cassandra.auth.jmx.AuthorizationProxy.invoke(AuthorizationProxy.java:172) at com.sun.proxy.$Proxy4.invoke(Unknown Source) at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1468) at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76) at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1309) at java.security.AccessController.doPrivileged(Native Method) at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1408) at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:829) at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:357) at sun.rmi.transport.Transport$1.run(Transport.java:200) at sun.rmi.transport.Transport$1.run(Transport.java:197) at java.security.AccessController.doPrivileged(Native Method) at sun.rmi.transport.Transport.serviceCall(Transport.java:196) at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:573) at 
sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:835) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:688) at java.security.AccessController.doPrivileged(Native Method) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:687) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) at sun.rmi.transport.StreamRemoteCall.exceptionReceivedFromServer(StreamRemoteCall.java:283) at sun.rmi.transport.StreamRemoteCall.executeCall(StreamRemoteCall.java:260) at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:161) at com.sun.jmx.remote.internal.PRef.invoke(Unknown Source) at javax.management.remote.rmi.RMIConnectionImpl_Stub.invoke(Unknown Source) at javax.management.remote.rmi.RMIConnector$RemoteMBeanServerConnection.invoke(RMIConnector.java:1020) at javax.management.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:298) at com.sun.proxy.$Proxy7.effectiveOwnership(Unknown Source) at org.apache.cassandra.tools.NodeProbe.effectiveOwnership(NodeProbe.java:489) at org.apache.cassandra.tools.nodetool.Status.execute(Status.java:74) at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:255) at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:169) {noformat}

Permissions on two additional mbeans were required:

{noformat}
GRANT SELECT, EXECUTE ON MBEAN 'org.apache.cassandra.db:type=StorageService' TO jmx;
GRANT EXECUTE ON MBEAN 'org.apache.cassandra.db:type=EndpointSnitchInfo' TO jmx;
{noformat}

I've updated the documentation in my fork here and would like to do a pull request for the addition:
[https://github.com/dataindataout/cassandra/blob/trunk/doc/source/operating/security.rst#cassandra-integrated-auth]
[jira] [Commented] (CASSANDRA-14480) Digest mismatch requires all replicas to be responsive
[ https://issues.apache.org/jira/browse/CASSANDRA-14480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495254#comment-16495254 ]

Christian Spriegel commented on CASSANDRA-14480:
------------------------------------------------

I did some more testing and tried the following change in StorageProxy.SinglePartitionReadLifecycle.awaitResultsAndRetryOnDigestMismatch():

{code:java}
repairHandler = new ReadCallback(resolver,
                                 ConsistencyLevel.ALL,
                                 consistency.blockFor(keyspace), // was: executor.getContactedReplicas().size()
                                 command,
                                 keyspace,
                                 executor.handler.endpoints);
{code}

This fixed the issue in my test scenario. But it causes the read-repair to only repair 2 out of my 3 replicas in cases where all 3 replicas would be available.
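The numeric mismatch at the heart of this report (QUORUM needs 2 of 3 responses, but the repair handler created on a digest mismatch blocks for every contacted replica) can be sketched in a few lines. This is illustrative Python, not Cassandra's actual code; all names and numbers are assumptions for illustration:

```python
# Illustrative sketch of the blockFor mismatch described in CASSANDRA-14480.
# NOT Cassandra's actual implementation; names here are hypothetical.

def quorum_block_for(replication_factor: int) -> int:
    """Responses a QUORUM read must block for: floor(RF / 2) + 1."""
    return replication_factor // 2 + 1

def repair_handler_block_for(contacted_replicas: int) -> int:
    """Pre-fix behaviour: the digest-mismatch repair handler waits for
    every contacted replica, not just a quorum."""
    return contacted_replicas

rf = 3
contacted = 3   # all three nodes look "up", so all three are contacted
responded = 2   # node3 is paused and never answers

quorum = quorum_block_for(rf)                       # 2 responses satisfy QUORUM
repair_wait = repair_handler_block_for(contacted)   # but the repair handler wants 3

assert responded >= quorum       # client-facing consistency is satisfied...
assert responded < repair_wait   # ...yet the repair handler still times out
```

This is why the exception message says 2 responses were required while the repair handler is observed waiting for 3.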
[jira] [Resolved] (CASSANDRA-13145) Include documentation in metric registration
[ https://issues.apache.org/jira/browse/CASSANDRA-13145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Lohfink resolved CASSANDRA-13145.
---------------------------------------
    Resolution: Won't Fix

Metrics are documented well with the markdown and apache.org doc generation.

> Include documentation in metric registration
>
> Key: CASSANDRA-13145
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13145
> Project: Cassandra
> Issue Type: Improvement
> Reporter: Chris Lohfink
> Assignee: Chris Lohfink
> Priority: Major
>
> If we include the description of the metrics in the declaration in code, we
> can expose it in JMX mbeans (and other reporters), which will greatly increase
> accuracy in operational tooling. The metrics can sometimes be a little vague
> and are often misunderstood.
> Metric descriptions are currently kept by hand across different definitions and
> versions (i.e. apache docs, graphite reporter definitions, agent configs). They
> quickly get stale and can be described incorrectly.
> I'd like to propose a patch that does the initial work of porting all the
> descriptions into the declaration, so that going forward, to register a metric
> developers must define a more friendly description. Future work may be
> automatic generation of apache doc from the descriptions.
[jira] [Updated] (CASSANDRA-14480) Digest mismatch requires all replicas to be responsive
[ https://issues.apache.org/jira/browse/CASSANDRA-14480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christian Spriegel updated CASSANDRA-14480:
-------------------------------------------
    Attachment: Reader.java
                Writer.java
[jira] [Updated] (CASSANDRA-14480) Digest mismatch requires all replicas to be responsive
[ https://issues.apache.org/jira/browse/CASSANDRA-14480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christian Spriegel updated CASSANDRA-14480: --- Attachment: schema_14480.cql > Digest mismatch requires all replicas to be responsive > -- > > Key: CASSANDRA-14480 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14480 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Christian Spriegel >Priority: Major > Attachments: schema_14480.cql > > > I ran across a scenario where a digest mismatch causes a read-repair that > requires all up nodes to be able to respond. If one of these nodes is not > responding, then the read-repair is reported to the client as > ReadTimeoutException. > > My expectation would be that CL=QUORUM will always succeed as long as 2 nodes > are responding. But unfortunately the third node being "up" in the ring, yet > not being able to respond, does lead to an RTE. > > > I came up with a scenario that reproduces the issue: > # set up a 3 node cluster using ccm > # increase the phi_convict_threshold to 16, so that nodes are permanently > reported as up > # create attached schema > # run attached reader&writer (which only connects to node1&2). This should > already produce digest mismatches > # do a "ccm node3 pause" > # The reader will report a read-timeout with consistency QUORUM (2 responses > were required but only 1 replica responded). Within the > DigestMismatchException catch-block it can be seen that the repairHandler is > waiting for 3 responses, even though the exception says that 2 responses are > required.
[jira] [Created] (CASSANDRA-14480) Digest mismatch requires all replicas to be responsive
Christian Spriegel created CASSANDRA-14480: -- Summary: Digest mismatch requires all replicas to be responsive Key: CASSANDRA-14480 URL: https://issues.apache.org/jira/browse/CASSANDRA-14480 Project: Cassandra Issue Type: Bug Components: Core Reporter: Christian Spriegel I ran across a scenario where a digest mismatch causes a read-repair that requires all up nodes to be able to respond. If one of these nodes is not responding, then the read-repair is reported to the client as ReadTimeoutException. My expectation would be that CL=QUORUM will always succeed as long as 2 nodes are responding. But unfortunately the third node being "up" in the ring, yet not being able to respond, does lead to an RTE. I came up with a scenario that reproduces the issue: # set up a 3 node cluster using ccm # increase the phi_convict_threshold to 16, so that nodes are permanently reported as up # create attached schema # run attached reader&writer (which only connects to node1&2). This should already produce digest mismatches # do a "ccm node3 pause" # The reader will report a read-timeout with consistency QUORUM (2 responses were required but only 1 replica responded). Within the DigestMismatchException catch-block it can be seen that the repairHandler is waiting for 3 responses, even though the exception says that 2 responses are required.
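The arithmetic behind the reporter's expectation can be sketched as follows (a simplified model of the coordinator's behaviour, not Cassandra's actual code):

```python
def quorum_block_for(replication_factor):
    """Responses a QUORUM read waits for: a majority of replicas."""
    return replication_factor // 2 + 1

rf = 3
live = 3  # node3 is paused, but the raised phi_convict_threshold keeps it marked "up"

block_for = quorum_block_for(rf)  # 2: what the ReadTimeoutException reports as required
repair_waits_for = live           # 3: what the repair handler actually waits on

print(block_for, repair_waits_for)
```

With node3 unresponsive, at most 2 of the 3 awaited repair responses can ever arrive, which matches the observed timeout even though 2 live replicas would satisfy QUORUM.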
[jira] [Commented] (CASSANDRA-12526) For LCS, single SSTable up-level is handled inefficiently
[ https://issues.apache.org/jira/browse/CASSANDRA-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495120#comment-16495120 ] Marcus Eriksson commented on CASSANDRA-12526: - Thanks for the comments, just pushed a new commit addressing the issues, except for: bq. In metadataChanged and SSTableMetadataChanged notification it might make sense to add new metadata along with the old metadata. Opted not to do this since we can get the new metadata with sstable.getSSTableMetadata(). bq. there are shortcuts for getDefaultCFS().disableAutoCompaction() and getDefaultCFS().forceBlockingFlush() in CQLTester. There are multiple places, for example here we could use those. Left this as-is since we reuse the cfs variable in the tests. The commit also adds a test to make sure that we can mutate the level on old-format sstables. > For LCS, single SSTable up-level is handled inefficiently > - > > Key: CASSANDRA-12526 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12526 > Project: Cassandra > Issue Type: Improvement > Components: Compaction >Reporter: Wei Deng >Assignee: Marcus Eriksson >Priority: Major > Labels: compaction, lcs, performance > Fix For: 4.x > > > I'm using the latest trunk (as of August 2016, which probably is going to be > 3.10) to run some experiments on LeveledCompactionStrategy and noticed this > inefficiency. > The test data is generated using cassandra-stress default parameters > (keyspace1.standard1), so as you can imagine, it consists of a ton of newly > inserted partitions that will never merge in compactions, which is probably > the worst kind of workload for LCS (however, I'll detail later why this > scenario should not be ignored as a corner case; for now, let's just assume > we still want to handle this scenario efficiently). 
> After the compaction test is done, I scrubbed debug.log for patterns that > match the "Compacted" summary so that I can see how long each individual > compaction took and how many bytes they processed. The search pattern is like > the following: > {noformat} > grep 'Compacted.*standard1' debug.log > {noformat} > Interestingly, I noticed a lot of the finished compactions are marked as > having *only one* SSTable involved. With the workload mentioned above, the > "single SSTable" compactions actually consist of the majority of all > compactions (as shown below), so its efficiency can affect the overall > compaction throughput quite a bit. > {noformat} > automaton@0ce59d338-1:~/cassandra-trunk/logs$ grep 'Compacted.*standard1' > debug.log-test1 | wc -l > 243 > automaton@0ce59d338-1:~/cassandra-trunk/logs$ grep 'Compacted.*standard1' > debug.log-test1 | grep ") 1 sstable" | wc -l > 218 > {noformat} > By looking at the code, it appears that there's a way to directly edit the > level of a particular SSTable like the following: > {code} > sstable.descriptor.getMetadataSerializer().mutateLevel(sstable.descriptor, > targetLevel); > sstable.reloadSSTableMetadata(); > {code} > To be exact, I summed up the time spent for these single-SSTable compactions > (the total data size is 60GB) and found that if each compaction only needs to > spend 100ms for only the metadata change (instead of the 10+ second they're > doing now), it can already achieve 22.75% saving on total compaction time. > Compared to what we have now (reading the whole single-SSTable from old level > and writing out the same single-SSTable at the new level), the only > difference I could think of by using this approach is that the new SSTable > will have the same file name (sequence number) as the old one's, which could > break some assumptions on some other part of the code. 
However, not having to > go through the full read/write IO, and not having to bear the overhead of > cleaning up the old file, creating the new file, creating more churns in heap > and file buffer, it seems the benefits outweigh the inconvenience. So I'd > argue this JIRA belongs to LHF and should be made available in 3.0.x as well. > As mentioned in the 2nd paragraph, I'm also going to address why this kind of > all-new-partition workload should not be ignored as a corner case. Basically, > for the main use case of LCS where you need to frequently merge partitions to > optimize read and eliminate tombstones and expired data sooner, LCS can be > perfectly happy and efficiently perform the partition merge and tombstone > elimination for a long time. However, as soon as the node becomes a bit > unhealthy for various reasons (could be a bad disk so it's missing a whole > bunch of mutations and need repair, could be the user chooses to ingest way > more data than it usually takes and exceeds its capability, or god-forbidden, > some
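The grep-based counting described above can also be scripted; a minimal sketch follows. The log lines below are made up to mimic the debug.log "Compacted" summary format, so the exact wording of real log lines may differ:

```python
import re

# Hypothetical debug.log excerpts mimicking Cassandra's "Compacted" summaries.
log_lines = [
    "Compacted (a1b2) 1 sstables to [keyspace1/standard1-5] in 11,204ms",
    "Compacted (c3d4) 4 sstables to [keyspace1/standard1-6] in 42,911ms",
    "Compacted (e5f6) 1 sstables to [keyspace1/standard1-7] in 10,058ms",
]

# Equivalent of: grep 'Compacted.*standard1' debug.log | wc -l
compacted = [line for line in log_lines if re.search(r"Compacted.*standard1", line)]
# Equivalent of the second grep for ") 1 sstable"
single = [line for line in compacted if ") 1 sstable" in line]

total, singles = len(compacted), len(single)
print(total, singles)
```

Summing the durations of the single-SSTable entries instead of counting them would give the kind of time-saving estimate quoted in the report.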
[jira] [Commented] (CASSANDRA-14464) stop-server.bat -p ../pid.txt -f command not working on windows 2016
[ https://issues.apache.org/jira/browse/CASSANDRA-14464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16495105#comment-16495105 ] Shyam Phirke commented on CASSANDRA-14464: -- Is anybody looking into this issue? > stop-server.bat -p ../pid.txt -f command not working on windows 2016 > > > Key: CASSANDRA-14464 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14464 > Project: Cassandra > Issue Type: Bug >Reporter: Shyam Phirke >Priority: Critical > > Steps to reproduce: > 1. Copy and extract cassandra binaries on windows 2016 machine > 2. Start cassandra in non-legacy mode > 3. Check the pid of cassandra in task manager and compare it with the one in pid.txt > 4. Now stop cassandra using command stop-server.bat -p ../pid.txt -f > Expected: > After executing \bin:\> stop-server.bat -p > ../pid.txt -f > the cassandra process listed in pid.txt should get killed. > > Actual: > After executing the above stop command, the cassandra process listed in pid.txt gets > killed, but a new process gets created with a new pid. Also, pid.txt is not > updated with the new pid. > This new process should not get created. > > Please comment on this issue if more details are required. > I am using cassandra 3.11.2. > > This issue impacts me significantly because the newly created process > interferes with my application's uninstallation.
[jira] [Created] (CASSANDRA-14479) Secondary Indexes Can "Leak" Records If Insert/Partition Delete Occur Between Flushes
Jordan West created CASSANDRA-14479: --- Summary: Secondary Indexes Can "Leak" Records If Insert/Partition Delete Occur Between Flushes Key: CASSANDRA-14479 URL: https://issues.apache.org/jira/browse/CASSANDRA-14479 Project: Cassandra Issue Type: Bug Components: Secondary Indexes Reporter: Jordan West Attachments: 2i-leak-test.patch When an insert of an indexed column is followed rapidly (within the same memtable) by a delete of the entire partition, the index table for the column will continue to store the record for the inserted value and no tombstone will ever be written. This occurs because the index isn't updated after the delete but before the flush. The value is lost after flush, so subsequent compactions can't issue a delete for the primary key in the index column. The attached test reproduces the described issue. The test fails to assert that the index cfs is empty. The subsequent assertion that there are no live sstables would also fail. Looking on disk with sstabledump after running this test shows the value remaining. Originally reported on the mailing list by Roman Bielik: Create table with LeveledCompactionStrategy; 'tombstone_compaction_interval': 60; gc_grace_seconds=60 There are two indexed columns for comparison: column1, column2 Insert keys \{1..x} with random values in column1 & column2 Delete \{key:column2} (but not column1) Delete \{key} Repeat n-times from the inserts Wait 1 minute nodetool flush nodetool compact (sometimes compact nodetool cfstats What I observe is that the data table is empty, the column2 index table is also empty, and the column1 index table has non-zero (leaked) "space used" and "estimated rows".
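A minimal CQL sketch of the reported sequence (keyspace, table, and column names here are illustrative, not taken from the attached test patch):

```cql
-- Hypothetical schema modeled on the mailing-list report:
CREATE TABLE ks.t (key int PRIMARY KEY, column1 text, column2 text)
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'tombstone_compaction_interval': 60}
  AND gc_grace_seconds = 60;
CREATE INDEX ON ks.t (column1);
CREATE INDEX ON ks.t (column2);

-- Within one memtable: insert indexed values, then delete the partition.
INSERT INTO ks.t (key, column1, column2) VALUES (1, 'a', 'b');
DELETE column2 FROM ks.t WHERE key = 1;  -- delete one indexed cell explicitly
DELETE FROM ks.t WHERE key = 1;          -- then the whole partition

-- After `nodetool flush` and `nodetool compact`, the column1 index table
-- reportedly still shows non-zero "space used" / "estimated rows".
```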
[jira] [Commented] (CASSANDRA-14422) Missing dependencies airline and ohc-core-j8 for pom-all
[ https://issues.apache.org/jira/browse/CASSANDRA-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16494904#comment-16494904 ] Stefan Podkowinski commented on CASSANDRA-14422: The dependencies are listed in the parent pom, but not as direct dependencies in cassandra-all. As a result, neither artifact becomes a transitive dependency of projects depending on cassandra-all; both must be pulled in explicitly by such projects. See {{deps-tree-311-no_patch.txt}} for what the current dependency tree of the 3.11 cassandra-all pom looks like (you won't find them there). The question is whether any downstream projects actually need these dependencies. But we should probably just be consistent and add all known runtime dependencies to cassandra-all, whether used by nodetool or anywhere else. > Missing dependencies airline and ohc-core-j8 for pom-all > > > Key: CASSANDRA-14422 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14422 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Shichao An >Assignee: Shichao An >Priority: Minor > Attachments: deps-tree-311-no_patch.txt > > > I found two missing dependencies for pom-all (cassandra-all): > * airline > * ohc-core-j8 > > This doesn't affect the current build scheme because their jars are hardcoded in > the lib directory. However, if we depend on cassandra-all in our downstream > projects to resolve and fetch dependencies (instead of using the official > tarball), Cassandra will have problems, e.g. airline is required by nodetool, > and it will fail our dtests. > I will attach the patch shortly
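A sketch of the kind of change under discussion: declaring the two artifacts as direct dependencies of the cassandra-all pom so that they resolve transitively for downstream projects. The groupIds and versions below are assumptions for illustration, not taken from the actual build files:

```xml
<!-- Sketch only: coordinates are illustrative, not from Cassandra's build. -->
<dependency>
  <groupId>io.airlift</groupId>
  <artifactId>airline</artifactId>
  <version>0.8</version>
</dependency>
<dependency>
  <groupId>org.caffinitas.ohc</groupId>
  <artifactId>ohc-core-j8</artifactId>
  <version>0.4.4</version>
</dependency>
```

Once declared directly (rather than only in the parent pom), both artifacts would appear in `mvn dependency:tree` output for projects depending on cassandra-all.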
[jira] [Updated] (CASSANDRA-14422) Missing dependencies airline and ohc-core-j8 for pom-all
[ https://issues.apache.org/jira/browse/CASSANDRA-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Podkowinski updated CASSANDRA-14422: --- Attachment: deps-tree-311-no_patch.txt > Missing dependencies airline and ohc-core-j8 for pom-all > > > Key: CASSANDRA-14422 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14422 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Shichao An >Assignee: Shichao An >Priority: Minor > Attachments: deps-tree-311-no_patch.txt > > > I found two missing dependencies for pom-all (cassandra-all): > * airline > * ohc-core-j8 > > This doesn't affect current build scheme because their jars are hardcoded in > the lib directory. However, if we depend on cassandra-all in our downstream > projects to resolve and fetch dependencies (instead of using the official > tarball), Cassandra will have problems, e.g. airline is required by nodetool, > and it will fail our dtests. > I will attach the patch shortly
[jira] [Updated] (CASSANDRA-14388) Fix setting min/max compaction threshold with LCS
[ https://issues.apache.org/jira/browse/CASSANDRA-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-14388: Reviewer: Alex Petrov (was: Chris Lohfink) Setting [~ifesdjeen] as reviewer. Pushed a commit with an updated NEWS.txt entry - we could make MAX_COMPACTING_L0 configurable later if someone thinks it is necessary, but 32 seems to be a good value right now, especially since after this change it will only decide when to run an STCS compaction. > Fix setting min/max compaction threshold with LCS > - > > Key: CASSANDRA-14388 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14388 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson >Priority: Major > Fix For: 4.x > > > To be able to actually set max/min_threshold in compaction options we need to > remove it from the options map when validating.
[jira] [Created] (CASSANDRA-14478) Improve the documentation of UPDATE vs INSERT
Nadav Har'El created CASSANDRA-14478: Summary: Improve the documentation of UPDATE vs INSERT Key: CASSANDRA-14478 URL: https://issues.apache.org/jira/browse/CASSANDRA-14478 Project: Cassandra Issue Type: Improvement Components: Documentation and Website Reporter: Nadav Har'El New Cassandra users often wonder about the difference between the INSERT and UPDATE CQL commands when applied to ordinary data (not counters or transactions). Usually, they are told that there is really no difference between the two - both of them can insert a new row or update an existing one. The Cassandra CQL documentation [http://cassandra.apache.org/doc/latest/cql/dml.html#update] is fairly silent on the question - on the one hand it doesn't explicitly say they are the same, but on the other hand it describes both as doing the same things and doesn't explicitly mention any difference. But there is an important difference, which was raised in the past in CASSANDRA-11805: INSERT adds a row marker, while UPDATE does not. What does this mean? Basically, an UPDATE requests that individual cells of the row be added, but not that the row itself be added; so if one later deletes the same individual cells with DELETE, the entire row goes away. However, an INSERT not only adds the cells, it also requests that the row be added (this is implemented via a "row marker"). So if all of the row's individual cells are later deleted, an empty row remains behind (i.e., the primary key of the row, which now has no content, is still remembered in the table). I'm not sure of the best way to explain this, but the paragraph above is a start.
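The row-marker difference can be demonstrated with a short CQL session (table and column names here are hypothetical):

```cql
-- Illustrative sketch of the INSERT vs UPDATE row-marker difference.
CREATE TABLE ks.t (pk int PRIMARY KEY, v int);

-- INSERT adds the cell AND a row marker:
INSERT INTO ks.t (pk, v) VALUES (1, 42);
DELETE v FROM ks.t WHERE pk = 1;
SELECT * FROM ks.t WHERE pk = 1;  -- the row is still returned, with v = null

-- UPDATE only adds the individual cell, with no row marker:
UPDATE ks.t SET v = 42 WHERE pk = 2;
DELETE v FROM ks.t WHERE pk = 2;
SELECT * FROM ks.t WHERE pk = 2;  -- no row is returned
```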
[jira] [Commented] (CASSANDRA-14388) Fix setting min/max compaction threshold with LCS
[ https://issues.apache.org/jira/browse/CASSANDRA-14388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16494790#comment-16494790 ] Alex Petrov commented on CASSANDRA-14388: - Should we add a news entry about the fact that {{MAX_COMPACTING_L0}} will now be overridden by the CFS max compaction threshold? And/or a ticket to make {{MAX_COMPACTING_L0}} configurable. > Fix setting min/max compaction threshold with LCS > - > > Key: CASSANDRA-14388 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14388 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson >Priority: Major > Fix For: 4.x > > > To be able to actually set max/min_threshold in compaction options we need to > remove it from the options map when validating.
[jira] [Comment Edited] (CASSANDRA-12526) For LCS, single SSTable up-level is handled inefficiently
[ https://issues.apache.org/jira/browse/CASSANDRA-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16493423#comment-16493423 ] Alex Petrov edited comment on CASSANDRA-12526 at 5/30/18 7:18 AM: -- Thank you for the great patch and sorry for the long time to review. I have mostly small nits/comments, as the patch looks solid and so far all testing has yielded good results. * In [metadataChanged|https://github.com/apache/cassandra/compare/trunk...krummas:marcuse/12526#diff-c24601ca8b77db9628351c9c8ac83979R299] and the [SSTableMetadataChanged notification|https://github.com/apache/cassandra/compare/trunk...krummas:marcuse/12526#diff-6cf05413c8d72c15fbbd512ce21ddca0R28] it might make sense to add the new metadata along with the old metadata. * Should we add a (possibly smaller/shorter) negative test? E.g. make sure that under the same conditions as in {{compactionTest}} but with {{single_sstable_uplevel: false}} we get a "normal" compaction task instead. * There are shortcuts for {{getDefaultCFS().disableAutoCompaction()}} and {{getDefaultCFS().forceBlockingFlush()}} in {{CQLTester}}. There are multiple places, for example [here|https://github.com/apache/cassandra/compare/trunk...krummas:marcuse/12526#diff-39f5a435ed2a85b43174405802edcdbaR75], where we could use those. * [Here|https://github.com/apache/cassandra/compare/trunk...krummas:marcuse/12526#diff-403d518f40817cabdab5449071a41b50R165] we could re-order the conditionals and short-circuit on {{singleSSTableUplevel}}, since if this feature isn't on we won't ever get to the second clause. * Might be good to add a short comment or indicate in the name that {{SingleSSTableLCSTask}} is kind of a no-op (doesn't perform a real compaction). * We've also discussed offline that [mutateLevel|https://github.com/apache/cassandra/compare/trunk...krummas:marcuse/12526#diff-d6d1a843ef8c25484d740d82a4746644R75] is safe here since the sstables are marked as compacting and the file rename is atomic. 
Since this patch includes [CASSANDRA-14388], we probably should get it committed before we commit this one. Let me know what you think. was (Author: ifesdjeen): Thank you for the great patch and sorry for the long time to review. I have mostly small nits/comments as patch looks solid and so far all testing was yielding good results. * In [metadataChanged|https://github.com/apache/cassandra/compare/trunk...krummas:marcuse/12526#diff-c24601ca8b77db9628351c9c8ac83979R299] and [SSTableMetadataChanged notification|https://github.com/apache/cassandra/compare/trunk...krummas:marcuse/12526#diff-6cf05413c8d72c15fbbd512ce21ddca0R28] it might make sense to add new metadata along with the old metadata. * Should we add a (possibly smaller/shorter) negative test? E.g. make sure that under same conditions as in {{compactionTest}} but with {{single_sstable_uplevel: false}} we get a "normal" compaction task instead. * there are shortcuts for {{getDefaultCFS().disableAutoCompaction()}} and {{getDefaultCFS().forceBlockingFlush()}} in {{CQLTester}}. There are multiple places, for example [here|https://github.com/apache/cassandra/compare/trunk...krummas:marcuse/12526#diff-39f5a435ed2a85b43174405802edcdbaR75] we could use those. * [here|https://github.com/apache/cassandra/compare/trunk...krummas:marcuse/12526#diff-403d518f40817cabdab5449071a41b50R165] we could re-order conditionals and short-circuit by {{singleSSTableUplevel}}, since it if this feature isn't on we won't ever get to the second clause. * Might be good to add a short comment or indicate in the name that {{SingleSSTableLCSTask}} is kind of no-op (doesn't perform a real compaction. * We've also discussed offline that [mutateLevel|https://github.com/apache/cassandra/compare/trunk...krummas:marcuse/12526#diff-d6d1a843ef8c25484d740d82a4746644R75] is safe here since sstables are marked as compacting and file rename is atomic. Let me know what you think. 
> For LCS, single SSTable up-level is handled inefficiently > - > > Key: CASSANDRA-12526 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12526 > Project: Cassandra > Issue Type: Improvement > Components: Compaction >Reporter: Wei Deng >Assignee: Marcus Eriksson >Priority: Major > Labels: compaction, lcs, performance > Fix For: 4.x > > > I'm using the latest trunk (as of August 2016, which probably is going to be > 3.10) to run some experiments on LeveledCompactionStrategy and noticed this > inefficiency. > The test data is generated using cassandra-stress default parameters > (keyspace1.standard1), so as you can imagine, it consists of a ton of newly > inserted partitions that will never merge in compactions, which is probably > the worst kind of wor
[jira] [Updated] (CASSANDRA-14477) The check of num_tokens against the length of inital_token in the yaml triggers unexpectedly
[ https://issues.apache.org/jira/browse/CASSANDRA-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincent White updated CASSANDRA-14477: -- Status: Patch Available (was: Open) > The check of num_tokens against the length of inital_token in the yaml > triggers unexpectedly > > > Key: CASSANDRA-14477 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14477 > Project: Cassandra > Issue Type: Bug >Reporter: Vincent White >Priority: Minor > > In CASSANDRA-10120 we added a check that compares num_tokens against the > number of tokens supplied in the yaml via initial_token. From my reading of > CASSANDRA-10120 it was to prevent cassandra starting if the yaml contained > contradictory values for num_tokens and initial_tokens which should help > prevent misconfiguration via human error. The current behaviour appears to > differ slightly in that it performs this comparison regardless of whether > num_tokens is included in the yaml or not. Below are proposed patches to only > perform the check if both options are present in the yaml. > ||Branch|| > |[3.0.x|https://github.com/apache/cassandra/compare/cassandra-3.0...vincewhite:num_tokens_30]| > |[3.x|https://github.com/apache/cassandra/compare/cassandra-3.11...vincewhite:num_tokens_test_1_311]|
[jira] [Updated] (CASSANDRA-14397) Stop compactions quicker when compacting wide partitions
[ https://issues.apache.org/jira/browse/CASSANDRA-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-14397: Reviewer: Alex Petrov > Stop compactions quicker when compacting wide partitions > > > Key: CASSANDRA-14397 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14397 > Project: Cassandra > Issue Type: Improvement >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson >Priority: Major > Fix For: 4.x > > > We should allow compactions to be stopped when compacting wide partitions, > this will help when a user wants to run upgradesstables for example.
[jira] [Created] (CASSANDRA-14477) The check of num_tokens against the length of inital_token in the yaml triggers unexpectedly
Vincent White created CASSANDRA-14477: - Summary: The check of num_tokens against the length of inital_token in the yaml triggers unexpectedly Key: CASSANDRA-14477 URL: https://issues.apache.org/jira/browse/CASSANDRA-14477 Project: Cassandra Issue Type: Bug Reporter: Vincent White In CASSANDRA-10120 we added a check that compares num_tokens against the number of tokens supplied in the yaml via initial_token. From my reading of CASSANDRA-10120 it was to prevent cassandra starting if the yaml contained contradictory values for num_tokens and initial_tokens which should help prevent misconfiguration via human error. The current behaviour appears to differ slightly in that it performs this comparison regardless of whether num_tokens is included in the yaml or not. Below are proposed patches to only perform the check if both options are present in the yaml. ||Branch|| |[3.0.x|https://github.com/apache/cassandra/compare/cassandra-3.0...vincewhite:num_tokens_30]| |[3.x|https://github.com/apache/cassandra/compare/cassandra-3.11...vincewhite:num_tokens_test_1_311]|
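The proposed guard can be sketched in a few lines (a simplification for illustration, not the actual patch; the function name and config plumbing are hypothetical):

```python
def check_token_config(num_tokens, initial_token):
    """Cross-check num_tokens against initial_token only when BOTH are set in the yaml."""
    if num_tokens is None or initial_token is None:
        return  # one of the options is absent from the yaml: nothing to compare
    tokens = [t.strip() for t in initial_token.split(",")]
    if num_tokens != len(tokens):
        raise ValueError("num_tokens (%d) does not match the number of "
                         "initial_token entries (%d)" % (num_tokens, len(tokens)))

# The reported behaviour runs the comparison even when num_tokens is absent;
# with the guard above these both pass:
check_token_config(None, "0,100,200")  # num_tokens not set in the yaml
check_token_config(3, "0,100,200")     # both set and consistent
```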
[jira] [Assigned] (CASSANDRA-14477) The check of num_tokens against the length of inital_token in the yaml triggers unexpectedly
[ https://issues.apache.org/jira/browse/CASSANDRA-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vincent White reassigned CASSANDRA-14477: - Assignee: Vincent White > The check of num_tokens against the length of inital_token in the yaml > triggers unexpectedly > > > Key: CASSANDRA-14477 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14477 > Project: Cassandra > Issue Type: Bug >Reporter: Vincent White >Assignee: Vincent White >Priority: Minor > > In CASSANDRA-10120 we added a check that compares num_tokens against the > number of tokens supplied in the yaml via initial_token. From my reading of > CASSANDRA-10120 it was to prevent cassandra starting if the yaml contained > contradictory values for num_tokens and initial_tokens which should help > prevent misconfiguration via human error. The current behaviour appears to > differ slightly in that it performs this comparison regardless of whether > num_tokens is included in the yaml or not. Below are proposed patches to only > perform the check if both options are present in the yaml. > ||Branch|| > |[3.0.x|https://github.com/apache/cassandra/compare/cassandra-3.0...vincewhite:num_tokens_30]| > |[3.x|https://github.com/apache/cassandra/compare/cassandra-3.11...vincewhite:num_tokens_test_1_311]|