[jira] [Updated] (CASSANDRA-13548) system.paxos performance improvements for LWT
[ https://issues.apache.org/jira/browse/CASSANDRA-13548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13548: - Summary: system.paxos performance improvements for LWT (was: system.paxos improvements) > system.paxos performance improvements for LWT > - > > Key: CASSANDRA-13548 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13548 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Jeff Jirsa >Priority: Minor > Fix For: 4.x > > > There are a few practical changes we can make to {{system.paxos}} that will > improve (especially read) performance for LWT: > - We should decrease the compression chunk size for situations where we have > to go to disk > - We can change the primary key structure so that the row key and CFID are > both part of the partition key, which will decrease LCS compaction activity > in use cases where a row key is common across tables, and one table is > updated more frequently than the other. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13502) Don't overwrite the DefaultUncaughtExceptionHandler when testing
[ https://issues.apache.org/jira/browse/CASSANDRA-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13502: - Component/s: (was: Core) Testing > Don't overwrite the DefaultUncaughtExceptionHandler when testing > > > Key: CASSANDRA-13502 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13502 > Project: Cassandra > Issue Type: Improvement > Components: Testing >Reporter: vincent royer >Priority: Minor > Fix For: 3.0.x, 3.11.x, 4.x > > Attachments: > 0010-Don-t-overwrite-the-DefaultUncaughtExceptionHandler-.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > To be able to run some maven unit tests, set the default exception handler if > the system property tests.maven is not defined (another property name could > be used). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
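The guard described above is small; a minimal sketch (assuming the hypothetical {{tests.maven}} system property named in the ticket and leaving the real handler abstract — this is not the attached patch) could look like this:

{code:java}
// Sketch: only install Cassandra's uncaught-exception handler when we are NOT
// running under the maven test harness, signalled here by the hypothetical
// "tests.maven" system property (any property name would work).
public final class ExceptionHandlerSetup
{
    private ExceptionHandlerSetup() {}

    public static void maybeInstallHandler(Thread.UncaughtExceptionHandler cassandraHandler)
    {
        // Under maven tests, keep the JVM's default handler so the test framework
        // can observe failures; otherwise install the daemon's handler as usual.
        if (System.getProperty("tests.maven") == null)
            Thread.setDefaultUncaughtExceptionHandler(cassandraHandler);
    }
}
{code}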
[jira] [Updated] (CASSANDRA-13850) Select stack size in cassandra-env.sh based on architecture
[ https://issues.apache.org/jira/browse/CASSANDRA-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13850: - Component/s: (was: Core) > Select stack size in cassandra-env.sh based on architecture > --- > > Key: CASSANDRA-13850 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13850 > Project: Cassandra > Issue Type: Improvement > Components: Configuration >Reporter: Amitkumar Ghatwal >Priority: Minor > Fix For: 4.x > > > Hi All, > Added support for arch in "cassandra-env.sh " with PR : > https://github.com/apache/cassandra/pull/149 > Regards, > Amit -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13850) Select stack size in cassandra-env.sh based on architecture
[ https://issues.apache.org/jira/browse/CASSANDRA-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13850: - Summary: Select stack size in cassandra-env.sh based on architecture (was: Modifying "cassandra-env.sh") > Select stack size in cassandra-env.sh based on architecture > --- > > Key: CASSANDRA-13850 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13850 > Project: Cassandra > Issue Type: Improvement > Components: Configuration, Core >Reporter: Amitkumar Ghatwal >Priority: Minor > Fix For: 4.x > > > Hi All, > Added support for arch in "cassandra-env.sh " with PR : > https://github.com/apache/cassandra/pull/149 > Regards, > Amit -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-13974) Bad prefix matching when figuring out data directory for an sstable
[ https://issues.apache.org/jira/browse/CASSANDRA-13974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16690793#comment-16690793 ] C. Scott Andreas commented on CASSANDRA-13974: -- Related to "CASSANDRA-14013: Data loss in snapshots keyspace after service restart", in which a user reported data loss in a keyspace called "snapshots" > Bad prefix matching when figuring out data directory for an sstable > --- > > Key: CASSANDRA-13974 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13974 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson >Priority: Major > Fix For: 3.0.x, 3.11.x, 4.x > > > We do a "startsWith" check when getting data directory for an sstable, we > should match including File.separator -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
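To make the prefix problem concrete, here is an illustrative sketch (not the committed patch) of why a bare {{startsWith}} can attribute an sstable to the wrong data directory, and how matching with {{File.separator}} appended avoids it:

{code:java}
import java.io.File;

// Illustration of the prefix check described in CASSANDRA-13974. With plain
// startsWith, an sstable under ".../data11" also "matches" the directory ".../data1";
// appending File.separator to the prefix removes that ambiguity.
public final class DataDirectoryMatch
{
    private DataDirectoryMatch() {}

    static boolean belongsTo(File sstable, File dataDirectory)
    {
        String dirPath = dataDirectory.getAbsolutePath();
        if (!dirPath.endsWith(File.separator))
            dirPath += File.separator;
        return sstable.getAbsolutePath().startsWith(dirPath);
    }

    public static void main(String[] args)
    {
        File dir = new File("/var/lib/cassandra/data1");
        File sstable = new File("/var/lib/cassandra/data11/ks/tbl/na-1-big-Data.db");
        System.out.println(belongsTo(sstable, dir)); // false; a bare startsWith would say true
    }
}
{code}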
[jira] [Updated] (CASSANDRA-14415) Performance regression in queries for distinct keys
[ https://issues.apache.org/jira/browse/CASSANDRA-14415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14415: - Component/s: (was: Core) Local Write-Read Paths > Performance regression in queries for distinct keys > --- > > Key: CASSANDRA-14415 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14415 > Project: Cassandra > Issue Type: Bug > Components: Local Write-Read Paths >Reporter: Samuel Klock >Assignee: Samuel Klock >Priority: Major > Labels: performance > Fix For: 3.0.x, 3.11.x, 4.x > > > Running Cassandra 3.0.16, we observed a major performance regression > affecting {{SELECT DISTINCT keys}}-style queries against certain tables. > Based on some investigation (guided by some helpful feedback from Benjamin on > the dev list), we tracked the regression down to two problems. > * One is that Cassandra was reading more data from disk than was necessary > to satisfy the query. This was fixed under CASSANDRA-10657 in a later 3.x > release. > * If the fix for CASSANDRA-10657 is incorporated, the other is this code > snippet in {{RebufferingInputStream}}: > {code:java} > @Override > public int skipBytes(int n) throws IOException > { > if (n < 0) > return 0; > int requested = n; > int position = buffer.position(), limit = buffer.limit(), remaining; > while ((remaining = limit - position) < n) > { > n -= remaining; > buffer.position(limit); > reBuffer(); > position = buffer.position(); > limit = buffer.limit(); > if (position == limit) > return requested - n; > } > buffer.position(position + n); > return requested; > } > {code} > The gist of it is that to skip bytes, the stream needs to read those bytes > into memory then throw them away. In our tests, we were spending a lot of > time in this method, so it looked like the chief drag on performance. > We noticed that the subclass of {{RebufferingInputStream}} in use for our > queries, {{RandomAccessReader}} (over compressed sstables), implements a > {{seek()}} method. Overriding {{skipBytes()}} in it to use {{seek()}} > instead was sufficient to fix the performance regression. > The performance difference is significant for tables with large values. It's > straightforward to evaluate with very simple key-value tables, e.g.: > {{CREATE TABLE testtable (key TEXT PRIMARY KEY, value BLOB);}} > We did some basic experimentation with the following variations (all in a > single-node 3.11.2 cluster with off-the-shelf settings running on a dev > workstation): > * small values (1 KB, 100,000 entries), somewhat larger values (25 KB, > 10,000 entries), and much larger values (1 MB, 10,000 entries); > * compressible data (a single byte repeated) and uncompressible data (output > from {{openssl rand $bytes}}); and > * with and without sstable compression. (With compression, we use > Cassandra's defaults.) > The difference is most conspicuous for tables with large, uncompressible data > and sstable decompression (which happens to describe the use case that > triggered our investigation). It is smaller but still readily apparent for > tables with effective compression. For uncompressible data without > compression enabled, there is no appreciable difference. 
> Here's what the performance looks like without our patch for the 1-MB entries > (times in seconds, five consecutive runs for each data set, all exhausting > the results from a {{SELECT DISTINCT key FROM ...}} query with a page size of > 24): > {noformat} > working on compressible > 5.21180510521 > 5.10270500183 > 5.22311806679 > 4.6732840538 > 4.84219098091 > working on uncompressible_uncompressed > 55.0423607826 > 0.769015073776 > 0.850513935089 > 0.713396072388 > 0.62596988678 > working on uncompressible > 413.292617083 > 231.345913887 > 449.524993896 > 425.135111094 > 243.469946861 > {noformat} > and with the fix: > {noformat} > working on compressible > 2.86733293533 > 1.24895811081 > 1.108907938 > 1.12742400169 > 1.04647302628 > working on uncompressible_uncompressed > 56.4146180153 > 0.895509958267 > 0.922824144363 > 0.772884130478 > 0.731923818588 > working on uncompressible > 64.4587619305 > 1.81325793266 > 1.52577018738 > 1.41769099236 > 1.60442209244 > {noformat} > The long initial runs for the uncompressible data presumably come from > repeatedly hitting the disk. In contrast to the runs without the fix, the > initial runs seem to be effective at warming the page cache (as lots of data > is skipped, so the data that's read can fit in memory), so subsequent runs > are faster. > For smaller da
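To illustrate the workaround described in CASSANDRA-14415 above — overriding {{skipBytes()}} to use {{seek()}} rather than rebuffering through the skipped bytes — here is a rough sketch. {{SeekableReader}} is a stand-in for {{RandomAccessReader}}, not the actual class, and the committed change may differ in detail:

{code:java}
import java.io.IOException;

// Sketch: when the underlying reader supports random access, skipping bytes can be a
// position change instead of a read-and-discard loop through the buffer.
abstract class SeekableReader
{
    abstract long getFilePointer() throws IOException; // current position
    abstract long length() throws IOException;         // total stream length
    abstract void seek(long position) throws IOException;

    public int skipBytes(int n) throws IOException
    {
        if (n <= 0)
            return 0;
        long current = getFilePointer();
        long target = Math.min(current + n, length()); // never seek past the end
        seek(target);
        return (int) (target - current);
    }
}
{code}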
[jira] [Updated] (CASSANDRA-14731) Transient Write Metrics
[ https://issues.apache.org/jira/browse/CASSANDRA-14731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14731: - Component/s: Metrics > Transient Write Metrics > --- > > Key: CASSANDRA-14731 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14731 > Project: Cassandra > Issue Type: Improvement > Components: Core, Metrics >Reporter: Benedict >Priority: Minor > Labels: metrics, transient-replication > Fix For: 4.x > > > While we record the number of attempted transient writes, we do not record how > successful these were. > Also, we do not count transient writes that happen due to the failure > detector. While these are distinct from those writes that > happen ‘speculatively’ due to slow responses, there’s a strong chance they > will be the most common form of transient write. It might be worth having > separate metrics for these. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
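A minimal sketch of what separate counters could look like, using the Dropwizard Metrics library Cassandra already bundles (the metric names below are invented for illustration and are not the ticket's final naming):

{code:java}
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

// Sketch: distinguish attempted vs. successful transient writes, and writes triggered
// by the failure detector vs. by speculation. Names are illustrative only.
public class TransientWriteMetrics
{
    private final Meter attempted;
    private final Meter succeeded;
    private final Meter viaFailureDetector;
    private final Meter viaSpeculation;

    public TransientWriteMetrics(MetricRegistry registry)
    {
        attempted = registry.meter("TransientWrites.Attempted");
        succeeded = registry.meter("TransientWrites.Succeeded");
        viaFailureDetector = registry.meter("TransientWrites.FailureDetector");
        viaSpeculation = registry.meter("TransientWrites.Speculative");
    }

    public void markAttempt(boolean dueToFailureDetector)
    {
        attempted.mark();
        (dueToFailureDetector ? viaFailureDetector : viaSpeculation).mark();
    }

    public void markSuccess()
    {
        succeeded.mark();
    }
}
{code}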
[jira] [Updated] (CASSANDRA-11575) Add out-of-process testing for CDC
[ https://issues.apache.org/jira/browse/CASSANDRA-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-11575: - Component/s: (was: Local Write-Read Paths) (was: Coordination) Testing > Add out-of-process testing for CDC > -- > > Key: CASSANDRA-11575 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11575 > Project: Cassandra > Issue Type: Sub-task > Components: Testing >Reporter: Carl Yeksigian >Assignee: Joshua McKenzie >Priority: Major > Fix For: 4.x > > Attachments: 11575.tgz, 11575.tgz > > > There are currently no dtests for the new cdc feature. We should have some, > at least to ensure that the cdc files have a lifecycle that makes sense, and > make sure that things like a continually cleaning daemon and a lazy daemon > have the properties we expect; for this, we don't need to actually process > the files, but make sure they fit the characteristics we expect from them. A > more complex daemon would need to be written in Java. > I already hit a problem where if the cdc is over capacity, the cdc properly > throws the WTE, but it will not reset after the overflow directory is > undersize again. It is supposed to correct the size within 250ms and allow > more writes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13611) Reduce cost/frequency of digest mismatches in quorum reads
[ https://issues.apache.org/jira/browse/CASSANDRA-13611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13611: - Summary: Reduce cost/frequency of digest mismatches in quorum reads (was: Digest mismatch in Quorum read) > Reduce cost/frequency of digest mismatches in quorum reads > -- > > Key: CASSANDRA-13611 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13611 > Project: Cassandra > Issue Type: Improvement > Components: Coordination >Reporter: Dikang Gu >Assignee: Dikang Gu >Priority: Major > Fix For: 4.x > > > In the current implementation, when we issue a quorum read, C* will send a full > data request to one replica, and send digest requests to the other replicas. If > the digests mismatch, C* will send another round of full data requests to all > replicas. > In our environment, we find that in the P99 case the digests always mismatch, so we > are doing 2 round trips of requests in P99, which hurts our P99 latency a lot. > We propose that in the quorum read case, we send full data requests to the quorum > replicas directly, reconcile the responses, and send the result back to the client. In our experiment, > it reduced the P99 latency by 20% ~ 30%. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Commented] (CASSANDRA-14482) ZSTD Compressor support in Cassandra
[ https://issues.apache.org/jira/browse/CASSANDRA-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16690790#comment-16690790 ] C. Scott Andreas commented on CASSANDRA-14482: -- [~sushm...@gmail.com] Thanks again for your work on this ticket and your presentation! I've updated the fix version to 4.x as 3.x releases are currently intended for bug fixes. > ZSTD Compressor support in Cassandra > > > Key: CASSANDRA-14482 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14482 > Project: Cassandra > Issue Type: New Feature > Components: Compression, Libraries >Reporter: Sushma A Devendrappa >Assignee: Sushma A Devendrappa >Priority: Major > Labels: performance > Fix For: 4.x > > > ZStandard has a great speed and compression ratio tradeoff. > ZStandard is open source compression from Facebook. > More about ZSTD > [https://github.com/facebook/zstd] > https://code.facebook.com/posts/1658392934479273/smaller-and-faster-data-compression-with-zstandard/ > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
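For readers unfamiliar with the bindings, a rough sketch of a Zstandard round trip using the zstd-jni library is below. This is only a byte[]-level illustration; Cassandra's actual integration works through {{ICompressor}} on {{ByteBuffer}}s and is what the ticket's patch provides:

{code:java}
import com.github.luben.zstd.Zstd;

// Minimal Zstandard round trip with zstd-jni, to show the level/ratio knob.
// Not the ticket's patch; the real compressor plugs into ICompressor.
public final class ZstdExample
{
    private ZstdExample() {}

    public static byte[] compress(byte[] input, int level)
    {
        return Zstd.compress(input, level);
    }

    public static byte[] decompress(byte[] compressed, int originalLength)
    {
        return Zstd.decompress(compressed, originalLength);
    }

    public static void main(String[] args)
    {
        byte[] data = "zstandard offers a strong speed/ratio trade-off".getBytes();
        byte[] packed = compress(data, 3);                 // level 3 is zstd's default
        byte[] restored = decompress(packed, data.length); // caller supplies original size
        System.out.println(new String(restored));
    }
}
{code}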
[jira] [Updated] (CASSANDRA-14482) ZSTD Compressor support in Cassandra
[ https://issues.apache.org/jira/browse/CASSANDRA-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14482: - Fix Version/s: (was: 3.11.x) > ZSTD Compressor support in Cassandra > > > Key: CASSANDRA-14482 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14482 > Project: Cassandra > Issue Type: New Feature > Components: Compression, Libraries >Reporter: Sushma A Devendrappa >Assignee: Sushma A Devendrappa >Priority: Major > Labels: performance > Fix For: 4.x > > > ZStandard has a great speed and compression ratio tradeoff. > ZStandard is open source compression from Facebook. > More about ZSTD > [https://github.com/facebook/zstd] > https://code.facebook.com/posts/1658392934479273/smaller-and-faster-data-compression-with-zstandard/ > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14482) ZSTD Compressor support in Cassandra
[ https://issues.apache.org/jira/browse/CASSANDRA-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14482: - Issue Type: New Feature (was: Wish) > ZSTD Compressor support in Cassandra > > > Key: CASSANDRA-14482 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14482 > Project: Cassandra > Issue Type: New Feature > Components: Compression, Libraries >Reporter: Sushma A Devendrappa >Assignee: Sushma A Devendrappa >Priority: Major > Labels: performance > Fix For: 4.x > > > ZStandard has a great speed and compression ratio tradeoff. > ZStandard is open source compression from Facebook. > More about ZSTD > [https://github.com/facebook/zstd] > https://code.facebook.com/posts/1658392934479273/smaller-and-faster-data-compression-with-zstandard/ > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-5108) expose overall progress of cleanup tasks in jmx
[ https://issues.apache.org/jira/browse/CASSANDRA-5108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-5108: Component/s: Observability > expose overall progress of cleanup tasks in jmx > --- > > Key: CASSANDRA-5108 > URL: https://issues.apache.org/jira/browse/CASSANDRA-5108 > Project: Cassandra > Issue Type: New Feature > Components: Compaction, Observability >Affects Versions: 1.2.0 >Reporter: Michael Kjellman >Priority: Minor > Labels: lhf > Fix For: 4.x > > > it would be nice if, upon starting a cleanup operation, cassandra could > maintain a Set (i assume this already exists as we have to know which file to > act on next) and a new set of "completed" sstables. When each is compacted > remove it from the pending list. That way C* could give an overall completion > of the long running and pending cleanup tasks. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
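A minimal sketch of the pending/completed bookkeeping the reporter describes, exposed through an MBean-style interface (the names and attributes below are illustrative, not Cassandra's actual JMX surface):

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: track which sstables are still pending cleanup and report overall progress.
public class CleanupProgress implements CleanupProgressMBean
{
    private final Set<String> pending = ConcurrentHashMap.newKeySet();
    private volatile int total;

    public void start(Set<String> sstables)
    {
        pending.addAll(sstables);
        total = sstables.size();
    }

    public void markCompleted(String sstable)
    {
        pending.remove(sstable);
    }

    @Override
    public int getTotal() { return total; }

    @Override
    public int getRemaining() { return pending.size(); }

    @Override
    public double getProgressPercent()
    {
        return total == 0 ? 100.0 : 100.0 * (total - pending.size()) / total;
    }
}

interface CleanupProgressMBean
{
    int getTotal();
    int getRemaining();
    double getProgressPercent();
}
{code}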
[jira] [Updated] (CASSANDRA-14654) Reduce heap pressure during compactions
[ https://issues.apache.org/jira/browse/CASSANDRA-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14654: - Status: Patch Available (was: Open) Marking "Patch Available" on behalf of [~cnlwsu]: https://github.com/clohfink/cassandra/tree/compaction_allocs > Reduce heap pressure during compactions > --- > > Key: CASSANDRA-14654 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14654 > Project: Cassandra > Issue Type: Improvement > Components: Compaction >Reporter: Chris Lohfink >Assignee: Chris Lohfink >Priority: Major > Labels: Performance, pull-request-available > Fix For: 4.x > > Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, > screenshot-4.png > > Time Spent: 40m > Remaining Estimate: 0h > > Small partition compactions are painfully slow with a lot of overhead per > partition. There also tends to be an excess of objects created (i.e. > 200-700 MB/s) per compaction thread. > The EncodingStats walks through all the partitions and with mergeWith it will > create a new one per partition as it walks the potentially millions of > partitions. In a test scenario of about 600-byte partitions and a couple hundred MB > of data this consumed ~16% of the heap pressure. Changing this to instead > mutably track the min values and create one in an EncodingStats.Collector > brought this down considerably (but not 100% since > UnfilteredRowIterator.stats() still creates one per partition). > The KeyCacheKey makes a full copy of the underlying byte array in > ByteBufferUtil.getArray in its constructor. This becomes the dominant source of heap > pressure as the number of sstables grows. Changing this to just keep the > original completely eliminates the current dominator of the compactions > and also improves read performance. > A minor tweak is also included for operators: when compactions are > behind on low-read clusters, the preemptive opening setting is made a > hot property. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
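The "mutably track the min values" idea reads roughly like the following sketch: instead of allocating a new stats object per partition via {{mergeWith}}, a collector updates running minima in place and materialises a single object when the merge finishes. Field names are simplified relative to the real {{EncodingStats}}:

{code:java}
// Sketch of the mutable-collector pattern: update minima in place, allocate once at the end.
final class StatsCollector
{
    private long minTimestamp = Long.MAX_VALUE;
    private int minLocalDeletionTime = Integer.MAX_VALUE;

    void update(long timestamp, int localDeletionTime)
    {
        if (timestamp < minTimestamp) minTimestamp = timestamp;
        if (localDeletionTime < minLocalDeletionTime) minLocalDeletionTime = localDeletionTime;
    }

    Stats finish()
    {
        // One immutable result per merge instead of one allocation per partition.
        return new Stats(minTimestamp, minLocalDeletionTime);
    }

    static final class Stats
    {
        final long minTimestamp;
        final int minLocalDeletionTime;

        Stats(long minTimestamp, int minLocalDeletionTime)
        {
            this.minTimestamp = minTimestamp;
            this.minLocalDeletionTime = minLocalDeletionTime;
        }
    }
}
{code}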
[jira] [Updated] (CASSANDRA-14773) Overflow of 32-bit integer during compaction.
[ https://issues.apache.org/jira/browse/CASSANDRA-14773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14773: - Status: Patch Available (was: Open) Marking "Patch Available" on behalf of [~vladimir.bukhtoyarov]: [https://github.com/apache/cassandra/pull/273] > Overflow of 32-bit integer during compaction. > - > > Key: CASSANDRA-14773 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14773 > Project: Cassandra > Issue Type: Bug > Components: Compaction >Reporter: Vladimir Bukhtoyarov >Assignee: Vladimir Bukhtoyarov >Priority: Critical > Fix For: 4.x > > > In scope of CASSANDRA-13444 the compaction was significantly improved from a > CPU and memory perspective. However, this improvement introduces a bug in > rounding. When rounding the expiration time which is close to > *Cell.MAX_DELETION_TIME* (which is just *Integer.MAX_VALUE*), a math overflow > happens (because in scope of CASSANDRA-13444 the data type for the point was > changed from Long to Integer in order to reduce memory footprint); as a result the > point becomes negative and acts as a silent poison for internal structures of > StreamingTombstoneHistogramBuilder like *DistanceHolder* and *DataHolder*. > Then, depending on the point intervals: > * The TombstoneHistogram produces wrong values when the interval of points is > less than binSize; this is not critical. > * Compaction crashes with ArrayIndexOutOfBoundsException if the amount of point > intervals is greater than binSize; this case is very critical. > > This is the pull request [https://github.com/apache/cassandra/pull/273] that > reproduces the issue and provides the fix. > > The stacktrace when running (on the codebase without the fix) > *testMathOverflowDuringRoundingOfLargeTimestamp* without the -ea JVM flag > {noformat} > java.lang.ArrayIndexOutOfBoundsException > at java.lang.System.arraycopy(Native Method) > at > org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder$DistanceHolder.add(StreamingTombstoneHistogramBuilder.java:208) > at > org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder.flushValue(StreamingTombstoneHistogramBuilder.java:140) > at > org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder$$Lambda$1/1967205423.consume(Unknown > Source) > at > org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder$Spool.forEach(StreamingTombstoneHistogramBuilder.java:574) > at > org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder.flushHistogram(StreamingTombstoneHistogramBuilder.java:124) > at > org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilder.build(StreamingTombstoneHistogramBuilder.java:184) > at > org.apache.cassandra.utils.streamhist.StreamingTombstoneHistogramBuilderTest.testMathOverflowDuringRoundingOfLargeTimestamp(StreamingTombstoneHistogramBuilderTest.java:183) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:497) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20) > at >
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:44) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41) > at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) > at org.junit.runners.ParentRunner.run(ParentRunner.java:220) > at org.junit.runner.JUnitCore.run(JUnitCore.java:159) > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > at > com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47) > at > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242) > at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
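The shape of the overflow in CASSANDRA-14773 is easy to reproduce in isolation. The sketch below mirrors the problem and one possible saturating fix; it is not the code in {{StreamingTombstoneHistogramBuilder}}, and the rounding interval is arbitrary:

{code:java}
// Rounding a deletion time up to the next multiple of roundSeconds overflows a 32-bit
// int when the input is close to Integer.MAX_VALUE, producing a negative "point".
// A saturating variant clamps instead of wrapping.
public final class RoundingOverflowDemo
{
    private RoundingOverflowDemo() {}

    static int roundUpNaive(int value, int roundSeconds)
    {
        // (value + roundSeconds - 1) can exceed Integer.MAX_VALUE and wrap negative.
        return ((value + roundSeconds - 1) / roundSeconds) * roundSeconds;
    }

    static int roundUpSaturating(int value, int roundSeconds)
    {
        long rounded = ((long) value + roundSeconds - 1) / roundSeconds * (long) roundSeconds;
        return (int) Math.min(rounded, Integer.MAX_VALUE); // clamp instead of overflowing
    }

    public static void main(String[] args)
    {
        int nearMax = Integer.MAX_VALUE - 2;
        System.out.println(roundUpNaive(nearMax, 60));      // negative: silent overflow
        System.out.println(roundUpSaturating(nearMax, 60)); // clamped to Integer.MAX_VALUE
    }
}
{code}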
[jira] [Updated] (CASSANDRA-13500) Fix String default Locale with a javassist transformer
[ https://issues.apache.org/jira/browse/CASSANDRA-13500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13500: - Fix Version/s: (was: 3.11.x) (was: 3.0.x) > Fix String default Locale with a javassist transformer > - > > Key: CASSANDRA-13500 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13500 > Project: Cassandra > Issue Type: Improvement > Components: Build, Core >Reporter: vincent royer >Priority: Minor > Fix For: 4.x > > Attachments: > 0008-Fix-String-default-Locale-with-a-javassit-transforme.patch > > Original Estimate: 12h > Remaining Estimate: 12h > > Several String-related methods like java.lang.String.format() implicitly use > the default Locale, causing bugs with some Locale values. This byte-code > manipulation in build.xml explicitly sets the Locale to Locale.ROOT in all > String-related calls in Cassandra classes. For details, see > https://github.com/strapdata/maven-javassist/blob/master/javassist-maven-plugin-core/src/main/java/com/strapdata/transformer/StringLocaleTransformer.java -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
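The class of bug being targeted is easy to demonstrate. The snippet below shows the implicit-default-Locale behaviour and the pinned {{Locale.ROOT}} form that call sites get rewritten to; it is only a demonstration of the bug, not the javassist transformer itself:

{code:java}
import java.util.Locale;

// String.format without an explicit Locale uses the JVM default, so the same code can
// print "3.14" or "3,14" depending on the host. Pinning Locale.ROOT is stable.
public final class LocaleFormatDemo
{
    private LocaleFormatDemo() {}

    public static void main(String[] args)
    {
        Locale.setDefault(Locale.GERMANY);
        System.out.println(String.format("%.2f", Math.PI));              // "3,14" under a German default
        System.out.println(String.format(Locale.ROOT, "%.2f", Math.PI)); // always "3.14"
    }
}
{code}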
[jira] [Updated] (CASSANDRA-14788) Add test coverage workflows to CircleCI config
[ https://issues.apache.org/jira/browse/CASSANDRA-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14788: - Component/s: (was: 4.0) > Add test coverage workflows to CircleCI config > -- > > Key: CASSANDRA-14788 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14788 > Project: Cassandra > Issue Type: Improvement > Components: Build >Reporter: Jon Meredith >Assignee: Jon Meredith >Priority: Minor > Labels: pull-request-available > Fix For: 4.0 > > Time Spent: 40m > Remaining Estimate: 0h > > To support 4.0 testing efforts it's helpful to know how much of the code is > being exercised by unit tests and dtests. > Add support for running the unit tests and dtests instrumented for test > coverage on CircleCI and then combine the results of all tests (unit, dtest > with vnodes, dtest without vnodes) into a single coverage report. > All of the hard work of getting JaCoCo to work with unit tests and dtests has > already been done, it just needs wiring up. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14695) preview repair should correctly handle transient ranges
[ https://issues.apache.org/jira/browse/CASSANDRA-14695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14695: - Component/s: (was: 4.0) Repair > preview repair should correctly handle transient ranges > --- > > Key: CASSANDRA-14695 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14695 > Project: Cassandra > Issue Type: Bug > Components: Repair >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Major > Fix For: 4.0 > > > Preview repairs don't exclude transient replicas when validating repaired > data. This will cause validation repairs on transient keyspaces to always > report inconsistency -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14883) Let Cassandra support the new JVM, Eclipse Openj9.
[ https://issues.apache.org/jira/browse/CASSANDRA-14883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14883: - Component/s: (was: 4.0) > Let Cassandra support the new JVM, Eclipse Openj9. > -- > > Key: CASSANDRA-14883 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14883 > Project: Cassandra > Issue Type: Improvement > Components: Packaging > Environment: jdk8u192-b12_openj9-0.11.0 > cassandra 4.0.0_beta_20181109_build >Reporter: Lee Sangboo >Priority: Major > Fix For: 4.0.x > > Attachments: jamm-0.3.2.jar, jamm.zip > > > Cassandra does not currently support the new JVM, Eclipse Openj9. In internal > testing, Openj9 outperforms Hotspot. I have deployed a modified jamm library > that has a problem with the current startup, but when I started Cassandra, I > got a log message saying "Non-Oracle JVM detected. Some features, such as > unimported compact SSTables, may not work as intended." If there is no > problem, I would also like to delete the above message. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14788) Add test coverage workflows to CircleCI config
[ https://issues.apache.org/jira/browse/CASSANDRA-14788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14788: - Fix Version/s: 4.0 > Add test coverage workflows to CircleCI config > -- > > Key: CASSANDRA-14788 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14788 > Project: Cassandra > Issue Type: Improvement > Components: Build >Reporter: Jon Meredith >Assignee: Jon Meredith >Priority: Minor > Labels: pull-request-available > Fix For: 4.0 > > Time Spent: 40m > Remaining Estimate: 0h > > To support 4.0 testing efforts it's helpful to know how much of the code is > being exercised by unit tests and dtests. > Add support for running the unit tests and dtests instrumented for test > coverage on CircleCI and then combine the results of all tests (unit, dtest > with vnodes, dtest without vnodes) into a single coverage report. > All of the hard work of getting JaCoCo to work with unit tests and dtests has > already been done, it just needs wiring up. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14695) preview repair should correctly handle transient ranges
[ https://issues.apache.org/jira/browse/CASSANDRA-14695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14695: - Fix Version/s: 4.0 > preview repair should correctly handle transient ranges > --- > > Key: CASSANDRA-14695 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14695 > Project: Cassandra > Issue Type: Bug > Components: 4.0 >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Major > Fix For: 4.0 > > > Preview repairs don't exclude transient replicas when validating repaired > data. This will cause validation repairs on transient keyspaces to always > report inconsistency -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14820) Upgrade to 4.0 fails with NullPointerException
[ https://issues.apache.org/jira/browse/CASSANDRA-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14820: - Component/s: (was: 4.0) > Upgrade to 4.0 fails with NullPointerException > -- > > Key: CASSANDRA-14820 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14820 > Project: Cassandra > Issue Type: Bug >Reporter: Tommy Stendahl >Assignee: Ariel Weisberg >Priority: Major > Fix For: 4.0 > > > I tested to upgrade an existing cluster to latest 4.0 but it fails with a > NullPointerException, I upgraded from 3.0.15 but upgrading from any 3.0.x or > 3.11.x to 4.0 will give the same fault. > {noformat} > > 2018-10-12T11:27:02.261+0200 ERROR [main] CassandraDaemon.java:251 Error > while loading schema: > java.lang.NullPointerException: null > at org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:156) > at > org.apache.cassandra.serializers.AbstractTextSerializer.deserialize(AbstractTextSerializer.java:41) > at > org.apache.cassandra.serializers.AbstractTextSerializer.deserialize(AbstractTextSerializer.java:28) > at > org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:116) > at > org.apache.cassandra.cql3.UntypedResultSet$Row.getString(UntypedResultSet.java:267) > at > org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:997) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:973) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:927) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:886) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:877) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:865) > at org.apache.cassandra.schema.Schema.loadFromDisk(Schema.java:102) > at org.apache.cassandra.schema.Schema.loadFromDisk(Schema.java:91) > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:247) > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:590) > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) > {noformat} > The problem seams to be line 997 in SchemaKeyspace.java > > {noformat} > .speculativeWriteThreshold(SpeculativeRetryPolicy.fromString(row.getString("speculative_write_threshold"{noformat} > speculative_write_threshold is a new table option introduced in > CASSANDRA-14404, when upgrading the table option is missing and we get a > NullPointerException on this line. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
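One null-tolerant way to read a column that pre-4.0 nodes never wrote is sketched below. {{Row}} here is a stand-in for Cassandra's {{UntypedResultSet.Row}}, and the helper name and default value are assumptions, not the committed fix:

{code:java}
// Sketch: fall back to a default when an upgraded cluster has no value yet for a
// newly introduced schema column, instead of dereferencing null and throwing NPE.
final class SchemaRowDefaults
{
    interface Row
    {
        boolean has(String column);
        String getString(String column);
    }

    static String getStringOrDefault(Row row, String column, String defaultValue)
    {
        return row.has(column) ? row.getString(column) : defaultValue;
    }

    static String speculativeWriteThreshold(Row row)
    {
        // "99p" is an assumed default here, chosen only to make the sketch complete.
        return getStringOrDefault(row, "speculative_write_threshold", "99p");
    }
}
{code}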
[jira] [Updated] (CASSANDRA-14820) Upgrade to 4.0 fails with NullPointerException
[ https://issues.apache.org/jira/browse/CASSANDRA-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14820: - Fix Version/s: 4.0 > Upgrade to 4.0 fails with NullPointerException > -- > > Key: CASSANDRA-14820 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14820 > Project: Cassandra > Issue Type: Bug >Reporter: Tommy Stendahl >Assignee: Ariel Weisberg >Priority: Major > Fix For: 4.0 > > > I tested to upgrade an existing cluster to latest 4.0 but it fails with a > NullPointerException, I upgraded from 3.0.15 but upgrading from any 3.0.x or > 3.11.x to 4.0 will give the same fault. > {noformat} > > 2018-10-12T11:27:02.261+0200 ERROR [main] CassandraDaemon.java:251 Error > while loading schema: > java.lang.NullPointerException: null > at org.apache.cassandra.utils.ByteBufferUtil.string(ByteBufferUtil.java:156) > at > org.apache.cassandra.serializers.AbstractTextSerializer.deserialize(AbstractTextSerializer.java:41) > at > org.apache.cassandra.serializers.AbstractTextSerializer.deserialize(AbstractTextSerializer.java:28) > at > org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:116) > at > org.apache.cassandra.cql3.UntypedResultSet$Row.getString(UntypedResultSet.java:267) > at > org.apache.cassandra.schema.SchemaKeyspace.createTableParamsFromRow(SchemaKeyspace.java:997) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchTable(SchemaKeyspace.java:973) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchTables(SchemaKeyspace.java:927) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspace(SchemaKeyspace.java:886) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchKeyspacesWithout(SchemaKeyspace.java:877) > at > org.apache.cassandra.schema.SchemaKeyspace.fetchNonSystemKeyspaces(SchemaKeyspace.java:865) > at org.apache.cassandra.schema.Schema.loadFromDisk(Schema.java:102) > at org.apache.cassandra.schema.Schema.loadFromDisk(Schema.java:91) > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:247) > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:590) > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:679) > {noformat} > The problem seams to be line 997 in SchemaKeyspace.java > > {noformat} > .speculativeWriteThreshold(SpeculativeRetryPolicy.fromString(row.getString("speculative_write_threshold"{noformat} > speculative_write_threshold is a new table option introduced in > CASSANDRA-14404, when upgrading the table option is missing and we get a > NullPointerException on this line. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[1/6] cassandra git commit: fix failing test
Repository: cassandra Updated Branches: refs/heads/cassandra-3.0 9eee7aa78 -> 60a8cfe11 refs/heads/cassandra-3.11 5431b87ed -> 2ed7c6a6b refs/heads/trunk f22fec927 -> 521542ff2 fix failing test Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60a8cfe1 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60a8cfe1 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60a8cfe1 Branch: refs/heads/cassandra-3.0 Commit: 60a8cfe115b78cee7e4d8024984fa1f8367685db Parents: 9eee7aa Author: Blake Eggleston Authored: Fri Nov 16 14:35:22 2018 -0800 Committer: Blake Eggleston Committed: Sat Nov 17 16:01:09 2018 -0800 -- .../org/apache/cassandra/db/SinglePartitionSliceCommandTest.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/60a8cfe1/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java -- diff --git a/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java b/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java index 2891687..ca0dfa5 100644 --- a/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java +++ b/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java @@ -395,7 +395,7 @@ public class SinglePartitionSliceCommandTest SelectStatement stmt = (SelectStatement) QueryProcessor.parseStatement(q).prepare(ClientState.forInternalCalls()).statement; List unfiltereds = new ArrayList<>(); -SinglePartitionReadCommand.Group query = (SinglePartitionReadCommand.Group) stmt.getQuery(QueryOptions.DEFAULT, FBUtilities.nowInSeconds()); +SinglePartitionReadCommand.Group query = (SinglePartitionReadCommand.Group) stmt.getQuery(QueryOptions.DEFAULT, 0); Assert.assertEquals(1, query.commands.size()); SinglePartitionReadCommand command = Iterables.getOnlyElement(query.commands); try (ReadOrderGroup group = ReadOrderGroup.forCommand(command); - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[3/6] cassandra git commit: fix failing test
fix failing test Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60a8cfe1 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60a8cfe1 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60a8cfe1 Branch: refs/heads/trunk Commit: 60a8cfe115b78cee7e4d8024984fa1f8367685db Parents: 9eee7aa Author: Blake Eggleston Authored: Fri Nov 16 14:35:22 2018 -0800 Committer: Blake Eggleston Committed: Sat Nov 17 16:01:09 2018 -0800 -- .../org/apache/cassandra/db/SinglePartitionSliceCommandTest.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/60a8cfe1/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java -- diff --git a/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java b/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java index 2891687..ca0dfa5 100644 --- a/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java +++ b/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java @@ -395,7 +395,7 @@ public class SinglePartitionSliceCommandTest SelectStatement stmt = (SelectStatement) QueryProcessor.parseStatement(q).prepare(ClientState.forInternalCalls()).statement; List unfiltereds = new ArrayList<>(); -SinglePartitionReadCommand.Group query = (SinglePartitionReadCommand.Group) stmt.getQuery(QueryOptions.DEFAULT, FBUtilities.nowInSeconds()); +SinglePartitionReadCommand.Group query = (SinglePartitionReadCommand.Group) stmt.getQuery(QueryOptions.DEFAULT, 0); Assert.assertEquals(1, query.commands.size()); SinglePartitionReadCommand command = Iterables.getOnlyElement(query.commands); try (ReadOrderGroup group = ReadOrderGroup.forCommand(command); - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[4/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11
Merge branch 'cassandra-3.0' into cassandra-3.11 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2ed7c6a6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2ed7c6a6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2ed7c6a6 Branch: refs/heads/trunk Commit: 2ed7c6a6b8f747806b2dfa8e9919582306ed6522 Parents: 5431b87 60a8cfe Author: Blake Eggleston Authored: Sat Nov 17 16:01:39 2018 -0800 Committer: Blake Eggleston Committed: Sat Nov 17 16:01:39 2018 -0800 -- .../org/apache/cassandra/db/SinglePartitionSliceCommandTest.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2ed7c6a6/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java -- diff --cc test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java index 1bdbcb2,ca0dfa5..97855a6 --- a/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java +++ b/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java @@@ -404,11 -395,11 +404,11 @@@ public class SinglePartitionSliceComman SelectStatement stmt = (SelectStatement) QueryProcessor.parseStatement(q).prepare(ClientState.forInternalCalls()).statement; List unfiltereds = new ArrayList<>(); - SinglePartitionReadCommand.Group query = (SinglePartitionReadCommand.Group) stmt.getQuery(QueryOptions.DEFAULT, FBUtilities.nowInSeconds()); + SinglePartitionReadCommand.Group query = (SinglePartitionReadCommand.Group) stmt.getQuery(QueryOptions.DEFAULT, 0); Assert.assertEquals(1, query.commands.size()); SinglePartitionReadCommand command = Iterables.getOnlyElement(query.commands); -try (ReadOrderGroup group = ReadOrderGroup.forCommand(command); - UnfilteredPartitionIterator partitions = command.executeLocally(group)) +try (ReadExecutionController controller = ReadExecutionController.forCommand(command); + UnfilteredPartitionIterator partitions = command.executeLocally(controller)) { assert partitions.hasNext(); try (UnfilteredRowIterator partition = partitions.next()) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[2/6] cassandra git commit: fix failing test
fix failing test Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/60a8cfe1 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/60a8cfe1 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/60a8cfe1 Branch: refs/heads/cassandra-3.11 Commit: 60a8cfe115b78cee7e4d8024984fa1f8367685db Parents: 9eee7aa Author: Blake Eggleston Authored: Fri Nov 16 14:35:22 2018 -0800 Committer: Blake Eggleston Committed: Sat Nov 17 16:01:09 2018 -0800 -- .../org/apache/cassandra/db/SinglePartitionSliceCommandTest.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/60a8cfe1/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java -- diff --git a/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java b/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java index 2891687..ca0dfa5 100644 --- a/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java +++ b/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java @@ -395,7 +395,7 @@ public class SinglePartitionSliceCommandTest SelectStatement stmt = (SelectStatement) QueryProcessor.parseStatement(q).prepare(ClientState.forInternalCalls()).statement; List unfiltereds = new ArrayList<>(); -SinglePartitionReadCommand.Group query = (SinglePartitionReadCommand.Group) stmt.getQuery(QueryOptions.DEFAULT, FBUtilities.nowInSeconds()); +SinglePartitionReadCommand.Group query = (SinglePartitionReadCommand.Group) stmt.getQuery(QueryOptions.DEFAULT, 0); Assert.assertEquals(1, query.commands.size()); SinglePartitionReadCommand command = Iterables.getOnlyElement(query.commands); try (ReadOrderGroup group = ReadOrderGroup.forCommand(command); - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[6/6] cassandra git commit: Merge branch 'cassandra-3.11' into trunk
Merge branch 'cassandra-3.11' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/521542ff Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/521542ff Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/521542ff Branch: refs/heads/trunk Commit: 521542ff26f9482b733e4f0f86281f07c3af29da Parents: f22fec9 2ed7c6a Author: Blake Eggleston Authored: Sat Nov 17 16:09:00 2018 -0800 Committer: Blake Eggleston Committed: Sat Nov 17 16:09:00 2018 -0800 -- .../org/apache/cassandra/db/SinglePartitionSliceCommandTest.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/521542ff/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java -- diff --cc test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java index f28bf41,97855a6..5dd408b --- a/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java +++ b/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java @@@ -339,12 -401,12 +339,12 @@@ public class SinglePartitionSliceComman public static List getUnfilteredsFromSinglePartition(String q) { -SelectStatement stmt = (SelectStatement) QueryProcessor.parseStatement(q).prepare(ClientState.forInternalCalls()).statement; +SelectStatement stmt = (SelectStatement) QueryProcessor.parseStatement(q).prepare(ClientState.forInternalCalls()); List unfiltereds = new ArrayList<>(); - SinglePartitionReadQuery.Group query = (SinglePartitionReadQuery.Group) stmt.getQuery(QueryOptions.DEFAULT, FBUtilities.nowInSeconds()); -SinglePartitionReadCommand.Group query = (SinglePartitionReadCommand.Group) stmt.getQuery(QueryOptions.DEFAULT, 0); -Assert.assertEquals(1, query.commands.size()); -SinglePartitionReadCommand command = Iterables.getOnlyElement(query.commands); ++SinglePartitionReadQuery.Group query = (SinglePartitionReadQuery.Group) stmt.getQuery(QueryOptions.DEFAULT, 0); +Assert.assertEquals(1, query.queries.size()); +SinglePartitionReadCommand command = Iterables.getOnlyElement(query.queries); try (ReadExecutionController controller = ReadExecutionController.forCommand(command); UnfilteredPartitionIterator partitions = command.executeLocally(controller)) { - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[5/6] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11
Merge branch 'cassandra-3.0' into cassandra-3.11 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2ed7c6a6 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2ed7c6a6 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2ed7c6a6 Branch: refs/heads/cassandra-3.11 Commit: 2ed7c6a6b8f747806b2dfa8e9919582306ed6522 Parents: 5431b87 60a8cfe Author: Blake Eggleston Authored: Sat Nov 17 16:01:39 2018 -0800 Committer: Blake Eggleston Committed: Sat Nov 17 16:01:39 2018 -0800 -- .../org/apache/cassandra/db/SinglePartitionSliceCommandTest.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2ed7c6a6/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java -- diff --cc test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java index 1bdbcb2,ca0dfa5..97855a6 --- a/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java +++ b/test/unit/org/apache/cassandra/db/SinglePartitionSliceCommandTest.java @@@ -404,11 -395,11 +404,11 @@@ public class SinglePartitionSliceComman SelectStatement stmt = (SelectStatement) QueryProcessor.parseStatement(q).prepare(ClientState.forInternalCalls()).statement; List unfiltereds = new ArrayList<>(); - SinglePartitionReadCommand.Group query = (SinglePartitionReadCommand.Group) stmt.getQuery(QueryOptions.DEFAULT, FBUtilities.nowInSeconds()); + SinglePartitionReadCommand.Group query = (SinglePartitionReadCommand.Group) stmt.getQuery(QueryOptions.DEFAULT, 0); Assert.assertEquals(1, query.commands.size()); SinglePartitionReadCommand command = Iterables.getOnlyElement(query.commands); -try (ReadOrderGroup group = ReadOrderGroup.forCommand(command); - UnfilteredPartitionIterator partitions = command.executeLocally(group)) +try (ReadExecutionController controller = ReadExecutionController.forCommand(command); + UnfilteredPartitionIterator partitions = command.executeLocally(controller)) { assert partitions.hasNext(); try (UnfilteredRowIterator partition = partitions.next()) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-8576) Primary Key Pushdown For Hadoop
[ https://issues.apache.org/jira/browse/CASSANDRA-8576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-8576: Component/s: Core > Primary Key Pushdown For Hadoop > --- > > Key: CASSANDRA-8576 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8576 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Russell Spitzer >Assignee: Alex Liu >Priority: Major > Fix For: 2.2.x > > Attachments: 8576-2.1-branch.txt, 8576-trunk.txt, > CASSANDRA-8576-v1-2.2-branch.txt, CASSANDRA-8576-v2-2.1-branch.txt, > CASSANDRA-8576-v3-2.1-branch.txt > > > I've heard reports from several users that they would like to have predicate > pushdown functionality for hadoop (Hive in particular) based services. > Example usecase > Table with wide partitions, one per customer > Application team has HQL they would like to run on a single customer > Currently time to complete scales with number of customers since Input Format > can't pushdown primary key predicate > Current implementation requires a full table scan (since it can't recognize > that a single partition was specified) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-10023) Emit a metric for number of local read and write calls
[ https://issues.apache.org/jira/browse/CASSANDRA-10023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-10023: - Component/s: Metrics > Emit a metric for number of local read and write calls > -- > > Key: CASSANDRA-10023 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10023 > Project: Cassandra > Issue Type: Improvement > Components: Metrics >Reporter: sankalp kohli >Assignee: Damien Stevenson >Priority: Minor > Labels: 4.0-feature-freeze-review-requested, lhf > Fix For: 4.x > > Attachments: 10023-trunk-dtests.txt, 10023-trunk.txt, > CASSANDRA-10023.patch > > > Many C* drivers have feature to be replica aware and chose the co-ordinator > which is a replica. We should add a metric which tells us whether all calls > to the co-ordinator are replica aware. > We have seen issues where client thinks they are replica aware when they > forget to add routing key at various places in the code. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-11323) When node runs out of commitlog space you get poor log information
[ https://issues.apache.org/jira/browse/CASSANDRA-11323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-11323: - Component/s: Observability > When node runs out of commitlog space you get poor log information > -- > > Key: CASSANDRA-11323 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11323 > Project: Cassandra > Issue Type: Bug > Components: Observability >Reporter: T Jake Luciani >Assignee: Boris Onufriyev >Priority: Trivial > Labels: fallout > Attachments: 11323-2.2.txt, 11323-3.0.txt, 11323-3.11.txt, > 11323-trunk.txt > > > {code} > ERROR [PERIODIC-COMMIT-LOG-SYNCER] 2016-03-08 20:27:33,899 > StorageService.java:470 - Stopping gossiper > WARN [PERIODIC-COMMIT-LOG-SYNCER] 2016-03-08 20:27:33,899 > StorageService.java:377 - Stopping gossip by operator request > INFO [PERIODIC-COMMIT-LOG-SYNCER] 2016-03-08 20:27:33,899 Gossiper.java:1463 > - Announcing shutdown > {code} > That's all you get when a node runs out of commit log space. > We should explicitly callout the fact the commitlog is out of disk. I see > that in the commit log error handler but after it shuts down. So I think it's > never getting written before shutdown. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
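A small sketch of the kind of explicit callout being asked for: log the commitlog location and remaining usable space before the shutdown path runs. SLF4J is used as in the codebase, but the message wording and method name are invented, not the applied patch:

{code:java}
import java.io.File;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Say *why* gossip is being stopped before shutting down, including where the
// commitlog lives and how much space is left.
public final class CommitLogDiskErrorLogging
{
    private static final Logger logger = LoggerFactory.getLogger(CommitLogDiskErrorLogging.class);

    private CommitLogDiskErrorLogging() {}

    public static void logOutOfSpace(File commitLogDirectory)
    {
        logger.error("Commitlog directory {} is out of disk space ({} bytes usable); " +
                     "stopping gossip and shutting down the commitlog sync thread",
                     commitLogDirectory.getAbsolutePath(),
                     commitLogDirectory.getUsableSpace());
    }
}
{code}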
[jira] [Updated] (CASSANDRA-5901) Bootstrap should also make the data consistent on the new node
[ https://issues.apache.org/jira/browse/CASSANDRA-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-5901: Component/s: Streaming and Messaging > Bootstrap should also make the data consistent on the new node > -- > > Key: CASSANDRA-5901 > URL: https://issues.apache.org/jira/browse/CASSANDRA-5901 > Project: Cassandra > Issue Type: Improvement > Components: Streaming and Messaging >Reporter: sankalp kohli >Assignee: Marcus Eriksson >Priority: Minor > Fix For: 4.x > > > Currently when we are bootstrapping a new node, it might bootstrap from a > node which does not have most upto date data. Because of this, we need to run > a repair after that. > Most people will always run the repair so it would help if we can provide a > parameter to bootstrap to run the repair once the bootstrap has finished. > It can also stop the node from responding to reads till repair has finished. > This could be another param as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-9167) Improve bloom-filter false-positive-ratio
[ https://issues.apache.org/jira/browse/CASSANDRA-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-9167: Component/s: Core > Improve bloom-filter false-positive-ratio > - > > Key: CASSANDRA-9167 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9167 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Robert Stupp >Assignee: Robert Stupp >Priority: Minor > Labels: perfomance > > {{org.apache.cassandra.utils.BloomCalculations}} performs some table lookups > to calculate the bloom filter specification (size, # of hashes). Using the > exact maths for that computation brings a better false-positive-ratio (the > maths usually returns higher numbers for hash-counts). > TL;DR increasing the number of hash-rounds brings a nice improvement. Finally > it's a trade-off between CPU and I/O. > ||false-positive-chance||elements||capacity||hash count > new||false-positive-ratio new||hash count current||false-positive-ratio > current||improvement > |0.1|1|50048|3|0.0848|3|0.0848|0 > |0.1|10|500032|3|0.09203|3|0.09203|0 > |0.1|100|564|3|0.0919|3|0.0919|0 > |0.1|1000|5064|3|0.09182|3|0.09182|0 > |0.1|1|50064|3|0.091874|3|0.091874|0 > |0.01|1|100032|7|0.0092|5|0.0107|0.1630434783 > |0.01|10|164|7|0.00818|5|0.00931|0.1381418093 > |0.01|100|1064|7|0.008072|5|0.009405|0.1651387512 > |0.01|1000|10064|7|0.008174|5|0.009375|0.146929288 > |0.01|1|100064|7|0.008197|5|0.009428|0.150176894 > |0.001|1|150080|10|0.0008|7|0.001|0.25 > |0.001|10|1500032|10|0.0006|7|0.00094|0.57 > |0.001|100|1564|10|0.000717|7|0.000991|0.3821478382 > |0.001|1000|15064|10|0.000743|7|0.000992|0.33512786 > |0.001|1|150064|10|0.000741|7|0.001002|0.3522267206 > |0.0001|1|200064|13|0|10|0.0002|#DIV/0! > |0.0001|10|264|13|0.4|10|0.0001|1.5 > |0.0001|100|2064|13|0.75|10|0.91|0.21 > |0.0001|1000|20064|13|0.69|10|0.87|0.2608695652 > |0.0001|1|200064|13|0.68|10|0.9|0.3235294118 > If we decide to allow more hash-rounds, it could be nicely back-ported even > to 2.0 without affecting existing sstables. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
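As a companion to the table in CASSANDRA-9167 above, this is a minimal sketch of the exact maths in question (false-positive probability p = (1 - e^(-kn/m))^k, with the optimal hash count k ≈ (m/n)·ln 2). It is an illustration of the approach, not the code in BloomCalculations.
{code}
public final class BloomMath
{
    /** Expected false-positive probability for m bits, n elements and k hash rounds. */
    static double falsePositiveChance(long bits, long elements, int hashes)
    {
        double perHashMiss = Math.exp(-(double) hashes * elements / bits);
        return Math.pow(1.0 - perHashMiss, hashes);
    }

    /** Hash count that minimises the false-positive chance for a given bits-per-element budget. */
    static int optimalHashCount(double bitsPerElement)
    {
        return Math.max(1, (int) Math.round(bitsPerElement * Math.log(2.0)));
    }

    public static void main(String[] args)
    {
        long n = 1_000_000;                           // elements
        long m = 10 * n;                              // ~10 bits per element
        int k = optimalHashCount((double) m / n);     // ~7 hash rounds
        System.out.printf("k=%d p=%.6f%n", k, falsePositiveChance(m, n, k));  // roughly 0.008
    }
}
{code}
With ~10 bits per element the exact formula picks 7 hash rounds and lands near the 0.008–0.009 ratios in the ticket's table, versus a worse ratio with 5 rounds — the CPU-versus-I/O trade-off the reporter describes.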
[jira] [Updated] (CASSANDRA-6538) Provide a read-time CQL function to display the data size of columns and rows
[ https://issues.apache.org/jira/browse/CASSANDRA-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-6538: Component/s: CQL > Provide a read-time CQL function to display the data size of columns and rows > - > > Key: CASSANDRA-6538 > URL: https://issues.apache.org/jira/browse/CASSANDRA-6538 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Johnny Miller >Priority: Minor > Labels: cql > Attachments: 6538-v2.patch, 6538.patch, CodeSnippet.txt, sizeFzt.PNG > > > It would be extremely useful to be able to work out the size of rows and > columns via CQL. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-8272) 2ndary indexes can return stale data
[ https://issues.apache.org/jira/browse/CASSANDRA-8272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-8272: Component/s: Secondary Indexes > 2ndary indexes can return stale data > > > Key: CASSANDRA-8272 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8272 > Project: Cassandra > Issue Type: Bug > Components: Secondary Indexes >Reporter: Sylvain Lebresne >Assignee: Andrés de la Peña >Priority: Major > Fix For: 3.0.x > > > When replicas return 2ndary index results, it's possible for a single replica > to return a stale result and that result will be sent back to the user, > potentially failing the CL contract. > For instance, consider 3 replicas A, B and C, and the following situation: > {noformat} > CREATE TABLE test (k int PRIMARY KEY, v text); > CREATE INDEX ON test(v); > INSERT INTO test(k, v) VALUES (0, 'foo'); > {noformat} > with every replica up to date. Now, suppose that the following queries are > done at {{QUORUM}}: > {noformat} > UPDATE test SET v = 'bar' WHERE k = 0; > SELECT * FROM test WHERE v = 'foo'; > {noformat} > then, if A and B acknowledge the insert but C responds to the read before > having applied the insert, then the now stale result will be returned (since > C will return it and A or B will return nothing). > A potential solution would be that when we read a tombstone in the index (and > provided we make the index inherit the gcGrace of its parent CF), instead of > skipping that tombstone, we'd insert a corresponding range > tombstone in the result. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13095) Timeouts between nodes
[ https://issues.apache.org/jira/browse/CASSANDRA-13095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13095: - Component/s: Streaming and Messaging > Timeouts between nodes > -- > > Key: CASSANDRA-13095 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13095 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Danil Smirnov >Assignee: Danil Smirnov >Priority: Minor > Attachments: 13095-2.1.patch > > > Recently I've run into a problem with a heavily loaded cluster where sometimes > messages between certain nodes become blocked for no reason. > It looks like the same situation as described here > https://issues.apache.org/jira/browse/CASSANDRA-12676?focusedCommentId=15736166&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15736166 > Thread dump showed infinite loop here: > https://github.com/apache/cassandra/blob/a8a43dd32eb92406d7d8b105e08c68b3d5c7df49/src/java/org/apache/cassandra/utils/CoalescingStrategies.java#L109 > Apparently the problem is in the initial value of the epoch field in the > TimeHorizonMovingAverageCoalescingStrategy class. When its value is not > evenly divisible by BUCKET_INTERVAL, ix(epoch-1) does not point to the > correct bucket. As a result, sum gradually increases and, upon reaching > MEASURED_INTERVAL, averageGap becomes 0 and the thread blocks. > It's hard to reproduce because it takes a long time for sum to grow, and when > no messages are sent for some time, sum becomes 0 > https://github.com/apache/cassandra/blob/a8a43dd32eb92406d7d8b105e08c68b3d5c7df49/src/java/org/apache/cassandra/utils/CoalescingStrategies.java#L301 > and the bug is no longer reproducible (until the connection between nodes is > re-created). > I've added a patch which should fix the problem. I don't know if it will be of > any help since CASSANDRA-12676 will apparently disable this behaviour. One > note about performance regressions though: there is a small chance they are a > result of the bug described here, so it might be worth testing performance > after fixes and/or tuning the algorithm. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
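To make the indexing problem above concrete, here is a small, self-contained illustration of why an epoch that is not aligned to the bucket interval breaks the ring-buffer arithmetic. The constants and method name are simplified stand-ins for what TimeHorizonMovingAverageCoalescingStrategy does, not a copy of it.
{code}
public final class BucketAlignmentDemo
{
    static final long BUCKET_INTERVAL = 1_000_000L;   // assumed 1ms buckets, in nanoseconds
    static final int  BUCKET_COUNT    = 16;

    /** Bucket index for a timestamp, as simple ring-buffer arithmetic would compute it. */
    static int ix(long nanos)
    {
        return (int) ((nanos / BUCKET_INTERVAL) % BUCKET_COUNT);
    }

    public static void main(String[] args)
    {
        long unaligned = 1_234_567L;                               // epoch not divisible by BUCKET_INTERVAL
        long aligned   = (unaligned / BUCKET_INTERVAL) * BUCKET_INTERVAL;

        // With an unaligned epoch, epoch-1 falls into the *same* bucket as epoch, so the
        // bucket that should be expired is never cleared and the running sum only grows.
        System.out.println(ix(unaligned - 1) == ix(unaligned));    // true  -> stale samples accumulate
        System.out.println(ix(aligned - 1)   == ix(aligned));      // false -> previous bucket gets expired
    }
}
{code}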
[jira] [Updated] (CASSANDRA-8596) Display datacenter/rack info for offline nodes - PropertyFileSnitch
[ https://issues.apache.org/jira/browse/CASSANDRA-8596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-8596: Component/s: Distributed Metadata > Display datacenter/rack info for offline nodes - PropertyFileSnitch > --- > > Key: CASSANDRA-8596 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8596 > Project: Cassandra > Issue Type: Improvement > Components: Distributed Metadata >Reporter: Vovodroid >Priority: Minor > Attachments: ByteBufferUtils.diff, file_snitch.patch > > > When using GossipPropertyFileSnitch, "nodetool status" shows the default (from > cassandra-topology.properties) datacenter/rack for offline nodes. > It happens because offline nodes are not in endpointMap, and thus > getRawEndpointInfo returns the default DC/rack > (PropertyFileSnitch.java). > I suggest taking the info for those nodes from the system.peers table - just like > SELECT data_center,rack FROM system.peers WHERE peer='10.0.0.1' > Patch attached. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-9206) Remove seed gossip probability
[ https://issues.apache.org/jira/browse/CASSANDRA-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-9206: Component/s: Distributed Metadata > Remove seed gossip probability > -- > > Key: CASSANDRA-9206 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9206 > Project: Cassandra > Issue Type: Improvement > Components: Distributed Metadata >Reporter: Brandon Williams >Assignee: Brandon Williams >Priority: Major > Fix For: 3.11.x > > Attachments: 9206.txt > > > Currently, we use probability to determine whether a node will gossip with a > seed: > {noformat} > double probability = seeds.size() / (double) > (liveEndpoints.size() + unreachableEndpoints.size()); > double randDbl = random.nextDouble(); > if (randDbl <= probability) > sendGossip(prod, seeds); > {noformat} > I propose that we remove this probability, and instead *always* gossip with a > seed. This of course means increased traffic and processing on the seed(s), > but even a 1000 node cluster with a single seed will only put ~1000 messages > per second on the seed, which is virtually nothing. Should it become a > problem, the solution is simple: add more seeds. Since seeds will also > always gossip with each other, this effectively gives us a poor man's > spanning tree, with the only cost being removing a few lines of code, and > should greatly improve our gossip convergence time, especially in large > clusters. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
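A minimal sketch of what the change proposed above amounts to: drop the probability computation and gossip with a seed unconditionally each round. The types here are simplified stand-ins, not the actual Gossiper internals.
{code}
import java.util.List;

final class SeedGossipSketch
{
    interface MessageSender { void sendGossip(Object message, List<String> endpoints); }

    static void gossipToSeed(Object message, List<String> seeds, MessageSender sender)
    {
        // Before: send with probability seeds / (live + unreachable), so large clusters
        // rarely talk to a seed. After: always send, trading ~1 message/second per node
        // on each seed for much faster gossip convergence.
        if (!seeds.isEmpty())
            sender.sendGossip(message, seeds);
    }
}
{code}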
[jira] [Updated] (CASSANDRA-9387) Add snitch supporting Windows Azure
[ https://issues.apache.org/jira/browse/CASSANDRA-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-9387: Component/s: Core > Add snitch supporting Windows Azure > --- > > Key: CASSANDRA-9387 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9387 > Project: Cassandra > Issue Type: New Feature > Components: Configuration, Core >Reporter: Jonathan Ellis >Assignee: Yoshua Wakeham >Priority: Major > Fix For: 4.x > > > Looks like regions / fault domains are a pretty close analogue to C* > DCs/racks. > http://blogs.technet.com/b/yungchou/archive/2011/05/16/window-azure-fault-domain-and-update-domain-explained-for-it-pros.aspx -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-10983) Metrics for tracking offending queries
[ https://issues.apache.org/jira/browse/CASSANDRA-10983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-10983: - Component/s: Observability > Metrics for tracking offending queries > -- > > Key: CASSANDRA-10983 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10983 > Project: Cassandra > Issue Type: Improvement > Components: Observability >Reporter: Sharvanath Pathak >Priority: Major > Labels: github-import > Fix For: 2.1.x > > > I have seen big GC pauses leading to nodes being marked DOWN in our cluster. > The most common issue is that someone would add a large range scan and it would > be difficult to pinpoint the specific query. I have added a mechanism to > account for the memory allocation of a specific query. In order to allow > aggregates over a period, I added a metric as well. Attached is the diff. > I was wondering if something like this would be interesting for a more general > audience. There are some things which need to be fixed for a proper release, > for instance cleaning up existing metrics on server restart. However, I just > wanted to check first whether something like this would be useful for others. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-11927) dtest failure in replication_test.ReplicationTest.simple_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-11927: - Component/s: Testing > dtest failure in replication_test.ReplicationTest.simple_test > - > > Key: CASSANDRA-11927 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11927 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Sean McCarthy >Assignee: Paulo Motta >Priority: Major > Labels: dtest > Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, > node3.log, node3_debug.log > > > example failure: > http://cassci.datastax.com/job/trunk_novnode_dtest/387/testReport/replication_test/ReplicationTest/simple_test > Failed on CassCI build trunk_novnode_dtest #387 > Logs are attached. > Unexpected error in question: > {code} > ERROR [SharedPool-Worker-1] 2016-05-30 16:00:17,211 Keyspace.java:504 - > Attempting to mutate non-existant table 99f5be60-267f-11e6-ad5f-f13d771494ea > (test.test) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-12849) The parameter -XX:HeapDumpPath is not ovewritten by cassandra-env.sh
[ https://issues.apache.org/jira/browse/CASSANDRA-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-12849: - Component/s: Configuration > The parameter -XX:HeapDumpPath is not ovewritten by cassandra-env.sh > - > > Key: CASSANDRA-12849 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12849 > Project: Cassandra > Issue Type: Bug > Components: Configuration >Reporter: jean carlo rivera ura >Priority: Major > Labels: lhf > Attachments: 12849-trunk.txt > > > The parameter -XX:HeapDumpPath appears twice in the java process > {panel} > user@node:~$ sudo ps aux | grep --color HeapDumpPath > java -ea -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar > -XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities > -XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M -Xmn200M > -XX:+HeapDumpOnOutOfMemoryError > -XX:*HeapDumpPath*=/var/lib/cassandra-1477577769-pid1516.hprof -Xss256k > ... > -XX:*HeapDumpPath*=/home/cassandra/java_1477577769.hprof > -XX:ErrorFile=/var/lib/cassandra/hs_err_1477577769.log > org.apache.cassandra.service.CassandraDaemon > {panel} > The problem is when we have an OOM error, the JVM dump goes to > */home/cassandra/java_1477577769.hprof * when the correct behavior is to go > to the path defined by cassandra-env.sh > */var/lib/cassandra-1477577769-pid1516.hprof* > This is quite annoying because cassandra takes into account only the path > defined by the script init (usually that disk is not that big to keep 8Gb of > a heap dump) and not the path defined in cassandra-env.sh > {noformat} > user@node:~$ jmx4perl http://localhost:8523/jolokia read > com.sun.management:type=HotSpotDiagnostic DiagnosticOptions > { > name => 'HeapDumpPath', > origin => 'VM_CREATION', > value => '/home/cassandra/java_1477043835.hprof', > writeable => '[true]' > }, > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-12995) update hppc dependency to 0.7
[ https://issues.apache.org/jira/browse/CASSANDRA-12995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-12995: - Component/s: Packaging Libraries > update hppc dependency to 0.7 > - > > Key: CASSANDRA-12995 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12995 > Project: Cassandra > Issue Type: Improvement > Components: Libraries, Packaging >Reporter: Tomas Repik >Priority: Major > Labels: easyfix > Fix For: 4.0 > > Attachments: cassandra-3.11.0-hppc.patch > > > Cassandra 3.11.0 is about to be included in Fedora. There are some tweaks to > the sources we need to make in order to successfully build it. Cassandra > depends on hppc 0.5.4, but in Fedora we have the newer version 0.7.1. Upstream > has released an even newer version, 0.7.2. I attached a patch updating cassandra > sources to depend on the 0.7.1 hppc sources. It should also be compatible > with the newest upstream version. The only actual changes are the removal of > the Open infix in class names. The issue was discussed here: > https://bugzilla.redhat.com/show_bug.cgi?id=1340876 Please consider updating. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13261) Improve speculative retry to avoid being overloaded
[ https://issues.apache.org/jira/browse/CASSANDRA-13261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13261: - Component/s: Coordination > Improve speculative retry to avoid being overloaded > --- > > Key: CASSANDRA-13261 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13261 > Project: Cassandra > Issue Type: Improvement > Components: Coordination >Reporter: Simon Zhou >Assignee: Simon Zhou >Priority: Major > Attachments: CASSANDRA-13261-v1.patch > > > In CASSANDRA-13009, I was suggested to separate the 2nd part of my patch as > an improvement. > This is to avoid Cassandra being overloaded when using CUSTOM speculative > retry parameter. Steps to reason/repro this with 3.0.10: > 1. Use custom speculative retry threshold like this: > cqlsh> alter TABLE to_repair1.users0 with speculative_retry='10ms'; > 2. SpeculatingReadExecutor will be used, according to this piece of code in > AbstractReadExecutor: > {code} > if (retry.equals(SpeculativeRetryParam.ALWAYS)) > return new AlwaysSpeculatingReadExecutor(keyspace, cfs, command, > consistencyLevel, targetReplicas); > else // PERCENTILE or CUSTOM. > return new SpeculatingReadExecutor(keyspace, cfs, command, > consistencyLevel, targetReplicas); > {code} > 3. When RF=3 and LOCAL_QUORUM is used, the below code (from > SpeculatingReadExecutor#maybeTryAdditionalReplicas) won't be able to protect > Cassandra from being overloaded, even though the inline comment suggests such > intention: > {code} > // no latency information, or we're overloaded > if (cfs.sampleLatencyNanos > > TimeUnit.MILLISECONDS.toNanos(command.getTimeout())) > return; > {code} > The reason is that cfs.sampleLatencyNanos is assigned as > retryPolicy.threshold() which is 10ms in step #1 above, at line 405 of > ColumnFamilyStore. However pretty often the timeout is the default one 5000ms. > As the name suggests, sampleLatencyNanos should be used to keep sampled > latency, not something configured "statically". My proposal: > a. Introduce option -Dcassandra.overload.threshold to allow customizing > overload threshold. The default threshold would be > DatabaseDescriptor.getRangeRpcTimeout(). > b. Assign sampled P99 latency to cfs.sampleLatencyNanos. For overload > detection, we just compare cfs.sampleLatencyNanos with the customizable > threshold above. > c. Use retryDelayNanos (instead of cfs.sampleLatencyNanos) for waiting time > before retry (see line 282 of AbstractReadExecutor). This is the value from > table setting (PERCENTILE or CUSTOM). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
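Sketched below is the shape of the overload check proposed above: compare a latency actually sampled from recent reads against a configurable threshold, rather than comparing the table's retry setting to the command timeout. The system property name and default used here are illustrative assumptions, not what the attached patch defines.
{code}
import java.util.concurrent.TimeUnit;

final class SpeculationOverloadCheck
{
    // Assumed property name for illustration; the ticket proposes -Dcassandra.overload.threshold.
    private static final long OVERLOAD_THRESHOLD_NANOS =
        TimeUnit.MILLISECONDS.toNanos(Long.getLong("cassandra.overload.threshold.ms", 5000L));

    /**
     * @param sampledP99LatencyNanos p99 latency sampled from recent reads (not the CUSTOM retry value)
     * @return true if speculative retries should be skipped because the node looks overloaded
     */
    static boolean isOverloaded(long sampledP99LatencyNanos)
    {
        return sampledP99LatencyNanos > OVERLOAD_THRESHOLD_NANOS;
    }
}
{code}
With this split, the CUSTOM/PERCENTILE retry value only decides when to speculate, while overload detection uses observed latency, which is the separation the reporter proposes.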
[jira] [Updated] (CASSANDRA-13432) MemtableReclaimMemory can get stuck because of lack of timeout in getTopLevelColumns()
[ https://issues.apache.org/jira/browse/CASSANDRA-13432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13432: - Component/s: Core > MemtableReclaimMemory can get stuck because of lack of timeout in > getTopLevelColumns() > -- > > Key: CASSANDRA-13432 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13432 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: cassandra 2.1.15 >Reporter: Corentin Chary >Priority: Major > Fix For: 2.1.x > > > This might affect 3.x too, I'm not sure. > {code} > $ nodetool tpstats > Pool NameActive Pending Completed Blocked All > time blocked > MutationStage 0 0 32135875 0 > 0 > ReadStage 114 0 29492940 0 > 0 > RequestResponseStage 0 0 86090931 0 > 0 > ReadRepairStage 0 0 166645 0 > 0 > CounterMutationStage 0 0 0 0 > 0 > MiscStage 0 0 0 0 > 0 > HintedHandoff 0 0 47 0 > 0 > GossipStage 0 0 188769 0 > 0 > CacheCleanupExecutor 0 0 0 0 > 0 > InternalResponseStage 0 0 0 0 > 0 > CommitLogArchiver 0 0 0 0 > 0 > CompactionExecutor0 0 86835 0 > 0 > ValidationExecutor0 0 0 0 > 0 > MigrationStage0 0 0 0 > 0 > AntiEntropyStage 0 0 0 0 > 0 > PendingRangeCalculator0 0 92 0 > 0 > Sampler 0 0 0 0 > 0 > MemtableFlushWriter 0 0563 0 > 0 > MemtablePostFlush 0 0 1500 0 > 0 > MemtableReclaimMemory 129534 0 > 0 > Native-Transport-Requests41 0 54819182 0 > 1896 > {code} > {code} > "MemtableReclaimMemory:195" - Thread t@6268 >java.lang.Thread.State: WAITING > at sun.misc.Unsafe.park(Native Method) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304) > at > org.apache.cassandra.utils.concurrent.WaitQueue$AbstractSignal.awaitUninterruptibly(WaitQueue.java:283) > at > org.apache.cassandra.utils.concurrent.OpOrder$Barrier.await(OpOrder.java:417) > at > org.apache.cassandra.db.ColumnFamilyStore$Flush$1.runMayThrow(ColumnFamilyStore.java:1151) > at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) >Locked ownable synchronizers: > - locked <6e7b1160> (a java.util.concurrent.ThreadPoolExecutor$Worker) > "SharedPool-Worker-195" - Thread t@989 >java.lang.Thread.State: RUNNABLE > at > org.apache.cassandra.db.RangeTombstoneList.addInternal(RangeTombstoneList.java:690) > at > org.apache.cassandra.db.RangeTombstoneList.insertFrom(RangeTombstoneList.java:650) > at > org.apache.cassandra.db.RangeTombstoneList.add(RangeTombstoneList.java:171) > at > org.apache.cassandra.db.RangeTombstoneList.add(RangeTombstoneList.java:143) > at org.apache.cassandra.db.DeletionInfo.add(DeletionInfo.java:240) > at > org.apache.cassandra.db.ArrayBackedSortedColumns.delete(ArrayBackedSortedColumns.java:483) > at org.apache.cassandra.db.ColumnFamily.addAtom(ColumnFamily.java:153) > at > org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:184) > at > org.apache.cassan
[jira] [Updated] (CASSANDRA-14102) Vault support for transparent data encryption
[ https://issues.apache.org/jira/browse/CASSANDRA-14102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14102: - Component/s: Core > Vault support for transparent data encryption > - > > Key: CASSANDRA-14102 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14102 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Stefan Podkowinski >Priority: Major > Labels: encryption, security > Fix For: 4.x > > Attachments: patches-14102.tar > > > Transparent data encryption provided by CASSANDRA-9945 can currently be used > for commitlog and hints. The default {{KeyProvider}} implementation that we > ship allows to use a local keystore for storing and retrieving keys. Thanks > to the pluggable handling of the {{KeyStore}} provider and basic Vault > related classes introduced in CASSANDRA-13971, a Vault based implementation > can be provided as {{KeyProvider}} as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13404) Hostname verification for client-to-node encryption
[ https://issues.apache.org/jira/browse/CASSANDRA-13404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13404: - Component/s: Streaming and Messaging > Hostname verification for client-to-node encryption > --- > > Key: CASSANDRA-13404 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13404 > Project: Cassandra > Issue Type: New Feature > Components: Streaming and Messaging >Reporter: Jan Karlsson >Assignee: Per Otterström >Priority: Major > Labels: security > Fix For: 4.x > > Attachments: 13404-trunk-v2.patch, 13404-trunk.txt > > > Similarly to CASSANDRA-9220, Cassandra should support hostname verification > for client-to-node connections. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13464) Failed to create Materialized view with a specific token range
[ https://issues.apache.org/jira/browse/CASSANDRA-13464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13464: - Component/s: Materialized Views > Failed to create Materialized view with a specific token range > -- > > Key: CASSANDRA-13464 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13464 > Project: Cassandra > Issue Type: Bug > Components: Materialized Views >Reporter: Natsumi Kojima >Assignee: Krishna Dattu Koneru >Priority: Minor > Labels: materializedviews > > Failed to create Materialized view with a specific token range. > Example : > {code:java} > $ ccm create "MaterializedView" -v 3.0.13 > $ ccm populate -n 3 > $ ccm start > $ ccm status > Cluster: 'MaterializedView' > --- > node1: UP > node3: UP > node2: UP > $ccm node1 cqlsh > Connected to MaterializedView at 127.0.0.1:9042. > [cqlsh 5.0.1 | Cassandra 3.0.13 | CQL spec 3.4.0 | Native protocol v4] > Use HELP for help. > cqlsh> CREATE KEYSPACE test WITH replication = {'class':'SimpleStrategy', > 'replication_factor':3}; > cqlsh> CREATE TABLE test.test ( id text PRIMARY KEY , value1 text , value2 > text, value3 text); > $ccm node1 ring test > Datacenter: datacenter1 > == > AddressRackStatus State LoadOwns > Token > > 3074457345618258602 > 127.0.0.1 rack1 Up Normal 64.86 KB100.00% > -9223372036854775808 > 127.0.0.2 rack1 Up Normal 86.49 KB100.00% > -3074457345618258603 > 127.0.0.3 rack1 Up Normal 89.04 KB100.00% > 3074457345618258602 > $ ccm node1 cqlsh > cqlsh> INSERT INTO test.test (id, value1 , value2, value3 ) VALUES ('aaa', > 'aaa', 'aaa' ,'aaa'); > cqlsh> INSERT INTO test.test (id, value1 , value2, value3 ) VALUES ('bbb', > 'bbb', 'bbb' ,'bbb'); > cqlsh> SELECT token(id),id,value1 FROM test.test; > system.token(id) | id | value1 > --+-+ > -4737872923231490581 | aaa |aaa > -3071845237020185195 | bbb |bbb > (2 rows) > cqlsh> CREATE MATERIALIZED VIEW test.test_view AS SELECT value1, id FROM > test.test WHERE id IS NOT NULL AND value1 IS NOT NULL AND TOKEN(id) > > -9223372036854775808 AND TOKEN(id) < -3074457345618258603 PRIMARY KEY(value1, > id) WITH CLUSTERING ORDER BY (id ASC); > ServerError: java.lang.ClassCastException: > org.apache.cassandra.cql3.TokenRelation cannot be cast to > org.apache.cassandra.cql3.SingleColumnRelation > {code} > Stacktrace : > {code:java} > INFO [MigrationStage:1] 2017-04-19 18:32:48,131 ColumnFamilyStore.java:389 - > Initializing test.test > WARN [SharedPool-Worker-1] 2017-04-19 18:44:07,263 FBUtilities.java:337 - > Trigger directory doesn't exist, please create it and try again. 
> ERROR [SharedPool-Worker-1] 2017-04-19 18:46:10,072 QueryMessage.java:128 - > Unexpected error during query > java.lang.ClassCastException: org.apache.cassandra.cql3.TokenRelation cannot > be cast to org.apache.cassandra.cql3.SingleColumnRelation > at > org.apache.cassandra.db.view.View.relationsToWhereClause(View.java:275) > ~[apache-cassandra-3.0.13.jar:3.0.13] > at > org.apache.cassandra.cql3.statements.CreateViewStatement.announceMigration(CreateViewStatement.java:219) > ~[apache-cassandra-3.0.13.jar:3.0.13] > at > org.apache.cassandra.cql3.statements.SchemaAlteringStatement.execute(SchemaAlteringStatement.java:93) > ~[apache-cassandra-3.0.13.jar:3.0.13] > at > org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:206) > ~[apache-cassandra-3.0.13.jar:3.0.13] > at > org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:237) > ~[apache-cassandra-3.0.13.jar:3.0.13] > at > org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:222) > ~[apache-cassandra-3.0.13.jar:3.0.13] > at > org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115) > ~[apache-cassandra-3.0.13.jar:3.0.13] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513) > [apache-cassandra-3.0.13.jar:3.0.13] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407) > [apache-cassandra-3.0.13.jar:3.0.13] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.44.Final.jar:4.0.44.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357) > [netty-all-4.0.44.Final.jar:4.0.44.Final] > at > io.netty.channel.AbstractChannelHandlerContext.acc
[jira] [Updated] (CASSANDRA-13577) Fix dynamic endpoint snitch for sub-millisecond use case
[ https://issues.apache.org/jira/browse/CASSANDRA-13577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13577: - Component/s: Streaming and Messaging > Fix dynamic endpoint snitch for sub-millisecond use case > > > Key: CASSANDRA-13577 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13577 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Simon Zhou >Assignee: Simon Zhou >Priority: Major > Fix For: 3.0.x > > > This is a follow up of https://issues.apache.org/jira/browse/CASSANDRA-6908. > After disabling severity (CASSANDRA-11737/CASSANDRA-11738) in a few > production clusters, I observed that the scores for all the endpoints are > mostly 0.0. Through debugging, I found this is caused by that these clusters > have p50 latency well below 1ms and the network latency is also <0.1ms (round > trip). Be noted that we use p50 sampled read latency and millisecond as time > unit. That means, if the latency is mostly below 1ms, the score will be 0. > This is definitely not something we want. To make DES work for these > sub-millisecond use cases, we should change the timeunit to at least > microsecond, or even nanosecond. I'll provide a patch soon. > Evidence of the p50 latency: > {code} > nodetool tablehistograms > Percentile SSTables Write Latency Read LatencyPartition Size > Cell Count > (micros) (micros) (bytes) > > 50% 2.00 35.43454.83 20501 > 3 > 75% 2.00 42.51654.95 29521 > 3 > 95% 3.00182.79943.13 61214 > 3 > 98% 4.00263.21 1131.75 73457 > 3 > 99% 4.00315.85 1358.10 88148 > 3 > Min 0.00 9.89 11.8761 > 3 > Max 5.00654.95 129557.75943127 > 3 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
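A tiny illustration of the truncation problem described above: converting sub-millisecond latencies to whole milliseconds collapses every endpoint to a score of 0, while nanosecond (or microsecond) scoring keeps the relative ordering. The two sample values are taken loosely from the p50/p95 read latencies in the histogram above.
{code}
import java.util.concurrent.TimeUnit;

final class SnitchGranularityDemo
{
    public static void main(String[] args)
    {
        long fastReplica = 455_000;   // ~455 microseconds, in nanoseconds
        long slowReplica = 943_000;   // ~943 microseconds, in nanoseconds

        System.out.println(TimeUnit.NANOSECONDS.toMillis(fastReplica));   // 0
        System.out.println(TimeUnit.NANOSECONDS.toMillis(slowReplica));   // 0 -> both endpoints score 0.0

        System.out.println((double) slowReplica / fastReplica);           // ~2.07 -> a usable ratio in nanos
    }
}
{code}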
[jira] [Updated] (CASSANDRA-13600) sstabledump possible problem
[ https://issues.apache.org/jira/browse/CASSANDRA-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13600: - Component/s: Tools > sstabledump possible problem > > > Key: CASSANDRA-13600 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13600 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: Official cassandra docker image (last) under Win10 >Reporter: a8775 >Assignee: Varun Barala >Priority: Major > Labels: patch > Fix For: 3.11.x > > Attachments: CASSANDRA-13600.patch > > > h2. Possible bug in sstabledump > {noformat} > cqlsh> show version > [cqlsh 5.0.1 | Cassandra 3.10 | CQL spec 3.4.4 | Native protocol v4] > {noformat} > h2. Execute script in cqlsh in new keyspace > {noformat} > CREATE TABLE IF NOT EXISTS test_data ( > // partitioning key > PK TEXT, > // data > Data TEXT, > > PRIMARY KEY (PK) > ); > insert into test_data(PK,Data) values('0',''); > insert into test_data(PK,Data) values('1',''); > insert into test_data(PK,Data) values('2',''); > delete from test_data where PK='1'; > insert into test_data(PK,Data) values('1',''); > {noformat} > h2. Execute the following commands > {noformat} > nodetool flush > nodetool compact > sstabledump mc-2-big-Data.db > sstabledump -d mc-2-big-Data.db > {noformat} > h3. default dump - missing data for partiotion key = "1" > {noformat} > [ > { > "partition" : { > "key" : [ "0" ], > "position" : 0 > }, > "rows" : [ > { > "type" : "row", > "position" : 15, > "liveness_info" : { "tstamp" : "2017-06-14T12:23:13.529389Z" }, > "cells" : [ > { "name" : "data", "value" : "" } > ] > } > ] > }, > { > "partition" : { > "key" : [ "2" ], > "position" : 26 > }, > "rows" : [ > { > "type" : "row", > "position" : 41, > "liveness_info" : { "tstamp" : "2017-06-14T12:23:13.544132Z" }, > "cells" : [ > { "name" : "data", "value" : "" } > ] > } > ] > }, > { > "partition" : { > "key" : [ "1" ], > "position" : 53, > "deletion_info" : { "marked_deleted" : "2017-06-14T12:23:13.545988Z", > "local_delete_time" : "2017-06-14T12:23:13Z" } > } > } > ] > {noformat} > h3. dump with -d option - correct data for partiotion key = "1" > {noformat} > [0]@0 Row[info=[ts=1497442993529389] ]: | [data= ts=1497442993529389] > [2]@26 Row[info=[ts=1497442993544132] ]: | [data= ts=1497442993544132] > [1]@53 deletedAt=1497442993545988, localDeletion=1497442993 > [1]@53 Row[info=[ts=1497442993550159] ]: | [data= ts=1497442993550159] > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13841) Allow specific sources during rebuild
[ https://issues.apache.org/jira/browse/CASSANDRA-13841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13841: - Component/s: Streaming and Messaging > Allow specific sources during rebuild > - > > Key: CASSANDRA-13841 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13841 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Kurt Greaves >Assignee: Kurt Greaves >Priority: Minor > Labels: 4.0-feature-freeze-review-requested > > CASSANDRA-10406 introduced the ability to rebuild specific ranges, and > CASSANDRA-9875 extended that to allow specifying a set of hosts to stream > from. It's not incredibly clear why you would only want to stream a subset of > ranges, but a possible use case for this functionality is to rebuild a node > from targeted replicas. > When doing a DC migration, if you are using racks==RF while rebuilding you > can ensure you rebuild from each copy of a replica in the source datacenter > by specifying all the hosts from a single rack to rebuild a single copy from. > This can be repeated for each rack in the new datacenter to ensure you have > each copy of the replica from the source DC, and thus maintaining consistency > through rebuilds. > For example, with the following topology for DC A and B with an RF of A:3 and > B:3 > ||A ||B|| > ||Node||Rack||Node||Rack|| > |A1|rack1| B1|rack1| > |A2|rack2| B2|rack2| > |A3|rack3| B3|rack3| > The following set of actions will result in having exactly 1 copy of every > replica in A in B, and B will be _at least_ as consistent as A. > {code:java} > Rebuild B1 from only A1 > Rebuild B2 from only A2 > Rebuild B3 from only A3 > {code} > Unfortunately using this functionality is non-trivial at the moment, as you > can only specify specific sources WITH the nodes set of tokens to rebuild > from. To perform the above with vnodes/a large cluster, you will have to > specify every token range in the -ts arg, which quickly gets > unwieldy/impossible if you have a large cluster. > A solution to this is to simply filter on sources first, before processing > ranges. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13720) Clean up repair code
[ https://issues.apache.org/jira/browse/CASSANDRA-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13720: - Component/s: Repair > Clean up repair code > > > Key: CASSANDRA-13720 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13720 > Project: Cassandra > Issue Type: Improvement > Components: Repair >Reporter: Simon Zhou >Assignee: Simon Zhou >Priority: Major > Fix For: 4.0.x > > > Lots of unused code. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-13838) Ensure FastThreadLocal.removeAll() is called for all threads
[ https://issues.apache.org/jira/browse/CASSANDRA-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-13838: - Component/s: Core > Ensure FastThreadLocal.removeAll() is called for all threads > > > Key: CASSANDRA-13838 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13838 > Project: Cassandra > Issue Type: Improvement > Components: Core >Reporter: Robert Stupp >Assignee: Robert Stupp >Priority: Major > > There are a couple of places where it's not guaranteed that > FastThreadLocal.removeAll() is called. Most misses are actually not that > critical, but the miss for the thread created in > org.apache.cassandra.streaming.ConnectionHandler.MessageHandler#start(java.net.Socket, > int, boolean) could be critical, because these threads are created for every > stream session. > (Follow-up from CASSANDRA-13754) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
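A minimal sketch of the kind of fix the ticket above implies for manually created threads (such as the per-stream-session MessageHandler threads): wrap the Runnable so FastThreadLocal.removeAll() always runs when the thread exits. The wrapper is an illustration, not the patch that was committed.
{code}
import io.netty.util.concurrent.FastThreadLocal;

final class ThreadLocalCleanup
{
    static Runnable withFastThreadLocalCleanup(Runnable task)
    {
        return () -> {
            try
            {
                task.run();
            }
            finally
            {
                FastThreadLocal.removeAll();   // release all FastThreadLocal state held by this thread
            }
        };
    }

    public static void main(String[] args) throws InterruptedException
    {
        Thread t = new Thread(withFastThreadLocalCleanup(() -> System.out.println("streaming work")));
        t.start();
        t.join();
    }
}
{code}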
[jira] [Updated] (CASSANDRA-14155) [TRUNK] Gossiper somewhat frequently hitting an NPE on node startup with dtests at org.apache.cassandra.gms.Gossiper.isSafeForStartup(Gossiper.java:769)
[ https://issues.apache.org/jira/browse/CASSANDRA-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14155: - Component/s: Testing > [TRUNK] Gossiper somewhat frequently hitting an NPE on node startup with > dtests at > org.apache.cassandra.gms.Gossiper.isSafeForStartup(Gossiper.java:769) > > > Key: CASSANDRA-14155 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14155 > Project: Cassandra > Issue Type: Bug > Components: Lifecycle, Testing >Reporter: Michael Kjellman >Assignee: Jason Brown >Priority: Major > Labels: dtest > > Gossiper is somewhat frequently hitting an NPE on node startup with dtests at > org.apache.cassandra.gms.Gossiper.isSafeForStartup(Gossiper.java:769) > {code} > test teardown failure > Unexpected error found in node logs (see stdout for full details). Errors: > [ERROR [main] 2018-01-08 21:41:01,832 CassandraDaemon.java:675 - Exception > encountered during startup > java.lang.NullPointerException: null > at > org.apache.cassandra.gms.Gossiper.isSafeForStartup(Gossiper.java:769) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:511) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:761) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:621) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:568) > ~[main/:na] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:360) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:658) > [main/:na], ERROR [main] 2018-01-08 21:41:01,832 CassandraDaemon.java:675 - > Exception encountered during startup > java.lang.NullPointerException: null > at > org.apache.cassandra.gms.Gossiper.isSafeForStartup(Gossiper.java:769) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:511) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:761) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:621) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:568) > ~[main/:na] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:360) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:658) > [main/:na]] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14013) Data loss in snapshots keyspace after service restart
[ https://issues.apache.org/jira/browse/CASSANDRA-14013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14013: - Component/s: Core > Data loss in snapshots keyspace after service restart > - > > Key: CASSANDRA-14013 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14013 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Gregor Uhlenheuer >Assignee: Vincent White >Priority: Major > > I am posting this bug in hope to discover the stupid mistake I am doing > because I can't imagine a reasonable answer for the behavior I see right now > :-) > In short words, I do observe data loss in a keyspace called *snapshots* after > restarting the Cassandra service. Say I do have 1000 records in a table > called *snapshots.test_idx* then after restart the table has less entries or > is even empty. > My kind of "mysterious" observation is that it happens only in a keyspace > called *snapshots*... > h3. Steps to reproduce > These steps to reproduce show the described behavior in "most" attempts (not > every single time though). > {code} > # create keyspace > CREATE KEYSPACE snapshots WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': 1}; > # create table > CREATE TABLE snapshots.test_idx (key text, seqno bigint, primary key(key)); > # insert some test data > INSERT INTO snapshots.test_idx (key,seqno) values ('key1', 1); > ... > INSERT INTO snapshots.test_idx (key,seqno) values ('key1000', 1000); > # count entries > SELECT count(*) FROM snapshots.test_idx; > 1000 > # restart service > kill > cassandra -f > # count entries > SELECT count(*) FROM snapshots.test_idx; > 0 > {code} > I hope someone can point me to the obvious mistake I am doing :-) > This happened to me using both Cassandra 3.9 and 3.11.0 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14155) [TRUNK] Gossiper somewhat frequently hitting an NPE on node startup with dtests at org.apache.cassandra.gms.Gossiper.isSafeForStartup(Gossiper.java:769)
[ https://issues.apache.org/jira/browse/CASSANDRA-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14155: - Component/s: Lifecycle > [TRUNK] Gossiper somewhat frequently hitting an NPE on node startup with > dtests at > org.apache.cassandra.gms.Gossiper.isSafeForStartup(Gossiper.java:769) > > > Key: CASSANDRA-14155 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14155 > Project: Cassandra > Issue Type: Bug > Components: Lifecycle, Testing >Reporter: Michael Kjellman >Assignee: Jason Brown >Priority: Major > Labels: dtest > > Gossiper is somewhat frequently hitting an NPE on node startup with dtests at > org.apache.cassandra.gms.Gossiper.isSafeForStartup(Gossiper.java:769) > {code} > test teardown failure > Unexpected error found in node logs (see stdout for full details). Errors: > [ERROR [main] 2018-01-08 21:41:01,832 CassandraDaemon.java:675 - Exception > encountered during startup > java.lang.NullPointerException: null > at > org.apache.cassandra.gms.Gossiper.isSafeForStartup(Gossiper.java:769) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:511) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:761) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:621) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:568) > ~[main/:na] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:360) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:658) > [main/:na], ERROR [main] 2018-01-08 21:41:01,832 CassandraDaemon.java:675 - > Exception encountered during startup > java.lang.NullPointerException: null > at > org.apache.cassandra.gms.Gossiper.isSafeForStartup(Gossiper.java:769) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.checkForEndpointCollision(StorageService.java:511) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.prepareToJoin(StorageService.java:761) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:621) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.initServer(StorageService.java:568) > ~[main/:na] > at > org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:360) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569) > [main/:na] > at > org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:658) > [main/:na]] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14107) Dynamic key rotation support for transparent data encryption
[ https://issues.apache.org/jira/browse/CASSANDRA-14107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14107: - Component/s: Core > Dynamic key rotation support for transparent data encryption > > > Key: CASSANDRA-14107 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14107 > Project: Cassandra > Issue Type: New Feature > Components: Core >Reporter: Stefan Podkowinski >Priority: Minor > Labels: encryption > Fix For: 4.x > > Attachments: patches-14107.tar > > > Handling of encryption keys as introduced in CASSANDRA-9945 takes place by > referencing a key alias in either cassandra.yaml, or the header of the > (commitlog/hints) file that has been encrypted. Using the alias as a literal > value will work, but requires some attention when rotating keys. > Currently, each time a key is rotated (i.e. adding a new key to the keystore > while preserving the previous version), the alias in cassandra.yaml has to be > updated as well and the node needs to be restarted. It would be more > convenient to use a symbolic reference instead. My suggestion here would be > to use ":latest" for referring to the latest version. In this case > Cassandra always picks the key with the highest version in > ":". > The non-trivial part of this suggestion is how the "latest" key is referenced > in the file header. If we use "latest", e.g. for the commit log header, and > the key gets rotated, we'd now try to decrypt the file with the new key > instead of the key it was created with. Therefore we'd have to introduce > an extra step that will resolve the canonical version for "latest" and refer > to that one during any encryption operation. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
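As an illustration of the "resolve the canonical version for latest" step suggested above, the sketch below scans a keystore for aliases of the form name:version and returns the highest one. The alias convention and the surrounding API are assumptions made for this example, not what CASSANDRA-9945 or this ticket define.
{code}
import java.security.KeyStore;
import java.util.Enumeration;

final class LatestKeyAliasResolver
{
    /** Returns the alias "<name>:<highest version>" present in the keystore (assumed convention). */
    static String resolveLatest(KeyStore keystore, String name) throws Exception
    {
        String best = null;
        int bestVersion = -1;
        for (Enumeration<String> aliases = keystore.aliases(); aliases.hasMoreElements();)
        {
            String alias = aliases.nextElement();
            if (!alias.startsWith(name + ":"))
                continue;
            int version = Integer.parseInt(alias.substring(name.length() + 1));
            if (version > bestVersion)
            {
                bestVersion = version;
                best = alias;
            }
        }
        if (best == null)
            throw new IllegalStateException("no key versions found for " + name);
        // Write this resolved, canonical alias into the commitlog/hints file header so the
        // file can still be decrypted after further rotations.
        return best;
    }
}
{code}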
[jira] [Updated] (CASSANDRA-14151) [TRUNK] TestRepair.test_dead_sync_initiator failed due to ERROR in logs "SSTableTidier ran with no existing data file for an sstable that was not new"
[ https://issues.apache.org/jira/browse/CASSANDRA-14151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14151: - Component/s: Testing > [TRUNK] TestRepair.test_dead_sync_initiator failed due to ERROR in logs > "SSTableTidier ran with no existing data file for an sstable that was not new" > -- > > Key: CASSANDRA-14151 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14151 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Michael Kjellman >Assignee: Marcus Eriksson >Priority: Major > Fix For: 3.11.x, 4.x > > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, > stdout-novnodes.txt > > > TestRepair.test_dead_sync_initiator failed due to finding the following > unexpected error in the node's logs: > {code} > ERROR [NonPeriodicTasks:1] 2018-01-06 03:38:50,229 LogTransaction.java:347 - > SSTableTidier ran with no existing data file for an sstable that was not new > {code} > If this is "okay/expected" behavior we should change the log level to > something different (which will fix the test) or if it's an actual bug use > this JIRA to fix it. I've attached all of the logs from all 3 instances from > the dtest run that hit this failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14162) Backport 7950
[ https://issues.apache.org/jira/browse/CASSANDRA-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14162: - Component/s: Tools > Backport 7950 > - > > Key: CASSANDRA-14162 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14162 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Kurt Greaves >Assignee: Kurt Greaves >Priority: Minor > Fix For: 3.0.x > > Attachments: 14162-3.0.patch, Screenshot from 2018-01-11 > 01-02-02.png, Screenshot from 2018-01-11 01-02-46.png, Screenshot from > 2018-01-11 01-02-51.png > > > Colleagues have had issues with output of listsnapshots/compactionstats > because of things with really long names. Mostly cosmetic but I see no reason > we shouldn't backport CASSANDRA-7950 to 3.0. It's practically a bugfix. I've > attached a patch and a bunch of images to show the relevant commands working > as intended after applying the patch. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14187) [DTEST] repair_tests/repair_test.py:TestRepair.simple_sequential_repair_test
[ https://issues.apache.org/jira/browse/CASSANDRA-14187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14187: - Component/s: Testing > [DTEST] repair_tests/repair_test.py:TestRepair.simple_sequential_repair_test > > > Key: CASSANDRA-14187 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14187 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson >Priority: Major > Labels: dtest > > Getting all rows from a node times out. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14291) Nodetool command to recreate SSTable components
[ https://issues.apache.org/jira/browse/CASSANDRA-14291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14291: - Component/s: Tools > Nodetool command to recreate SSTable components > --- > > Key: CASSANDRA-14291 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14291 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Kurt Greaves >Assignee: Alexander Ivakov >Priority: Minor > Labels: 4.0-feature-freeze-review-requested > > Need a JMX/Nodetool command to recreate components for SSTables without > re-writing the data files. > Possible implementation idea: > Create a {{nodetool (recreate|regen)component}} command that would enable you > to recreate specific components of an SSTable, and also allow specifying > SSTables or columnfamilies. > I'd say a flag for a list of components and a flag for SSTables with > keyspace.columnfamilies as positional arguments would work > Alternatively this could become part of upgradesstables, but would likely > make that command a bit bloated. > Background: > In CASSANDRA-11163 we changed it so summaries and bloomfilters were not > regenerated or persisted on startup. This means we would rely on > compactions/upgrades to regenerate the bloomfilter (or other components) > after a configuration change. While this works, it's pretty inefficient on > large tables just because you changed the bloomfilter size or summary chunk > sizes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14319) nodetool rebuild from DC lets you pass invalid datacenters
[ https://issues.apache.org/jira/browse/CASSANDRA-14319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14319: - Component/s: Tools > nodetool rebuild from DC lets you pass invalid datacenters > --- > > Key: CASSANDRA-14319 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14319 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Jon Haddad >Assignee: Vinay Chella >Priority: Major > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.11.x, 4.x > > Attachments: CASSANDRA-14319-trunk.txt > > > If you pass an invalid datacenter to nodetool rebuild, you'll get an error > like this: > {code} > Unable to find sufficient sources for streaming range > (3074457345618258602,-9223372036854775808] in keyspace system_distributed > {code} > Unfortunately, this is a rabbit hole of frustration if you are using caps for > your DC names and you pass in a lowercase DC name, or you just typo the DC. > Let's do the following: > # Check the DC name that's passed in against the list of DCs we know about > # If we don't find it, let's output a reasonable error, and list all the DCs > someone could put in. > # Ideally we indicate which keyspaces are set to replicate to this DC and > which aren't -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
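To make the proposed checks concrete, here is a small sketch of the validation step: fail fast with a message that lists the known datacenters instead of letting the rebuild die later with an "Unable to find sufficient sources" error. How the set of known datacenters is obtained (a plain Set here) is a stand-in for whatever the snitch or token metadata exposes.
{code}
import java.util.Set;
import java.util.TreeSet;

final class RebuildSourceDcCheck
{
    static void validateSourceDc(String requestedDc, Set<String> knownDcs)
    {
        if (requestedDc == null || knownDcs.contains(requestedDc))
            return;                                       // null means "rebuild from anywhere"
        throw new IllegalArgumentException(String.format(
            "Unknown datacenter '%s' (datacenter names are case-sensitive). Known datacenters: %s",
            requestedDc, new TreeSet<>(knownDcs)));
    }

    public static void main(String[] args)
    {
        // Typo'd lower-case name: the operator immediately sees DC1/DC2 as valid options.
        validateSourceDc("dc1", Set.of("DC1", "DC2"));
    }
}
{code}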
[jira] [Updated] (CASSANDRA-14670) Table Metrics Virtual Table
[ https://issues.apache.org/jira/browse/CASSANDRA-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14670: - Component/s: CQL > Table Metrics Virtual Table > --- > > Key: CASSANDRA-14670 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14670 > Project: Cassandra > Issue Type: Improvement > Components: CQL, Observability >Reporter: Chris Lohfink >Assignee: Chris Lohfink >Priority: Minor > Labels: pull-request-available, virtual-tables > Fix For: 4.0.x > > Time Spent: 10m > Remaining Estimate: 0h > > Different than CASSANDRA-14572 whose goal is to expose all metrics. This is > to expose a few hand tailored tables that are particularly useful in > debugging slow Cassandra instances (in my experience). These are useful in > finding out which table it is that is having issues if you see a node > performing poorly in general. This can kinda be figured out with cfstats > sorting and some clever bash-foo but its been a bit of a operational UX pain > for me personally for awhile. > examples: > {code} > cqlsh> select * from system_views.max_partition_size limit 5; > max_partition_size | keyspace_name | table_name > +---+ > 126934 |system | size_estimates >9887 | system_schema |columns >9887 | system_schema | tables >6866 |system | local > 258 | keyspace1 | standard1 > (5 rows) > cqlsh> select * from system_views.local_reads limit 5 ; > count | keyspace_name | table_name | 99th | max | median | > per_second > ---+---+-+---+---+-+ > 23 |system | local | 186563160 | 186563160 | 1629722 | > 3.56101 > 22 | system_schema | tables | 4055269 | 4055269 | 454826 | > 3.72452 > 14 | system_schema | columns | 1131752 | 1131752 | 545791 | > 2.37015 > 14 | system_schema | dropped_columns |126934 |126934 | 88148 | > 2.37015 > 14 | system_schema | indexes |219342 |219342 | 152321 | > 2.37015 > (5 rows) > cqlsh> select * from system_views.coordinator_reads limit 5; > count | keyspace_name | table_name | 99th | max | median | per_second > ---+---++--+-++ > 2 |system | local |0 | 0 | 0 | 0.005324 > 1 | system_auth | roles |0 | 0 | 0 | 0.002662 > 0 | basic | wide |0 | 0 | 0 | 0 > 0 | basic | wide3 |0 | 0 | 0 | 0 > 0 | keyspace1 | counter1 |0 | 0 | 0 | 0 > (5 rows) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14385) Fix Some Potential NPE
[ https://issues.apache.org/jira/browse/CASSANDRA-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14385: - Component/s: Materialized Views > Fix Some Potential NPE > --- > > Key: CASSANDRA-14385 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14385 > Project: Cassandra > Issue Type: Bug > Components: Materialized Views >Reporter: lujie >Priority: Major > Attachments: CA-14385_1.patch > > > We have developed a static analysis tool > [NPEDetector|https://github.com/lujiefsi/NPEDetector] to find some potential > NPE. Our analysis shows that some callees may return null in corner cases (e.g. > node crash, IO exception); some of their callers have a _!=null_ check but > some do not. In this issue we post a patch which adds !=null checks based > on the existing !=null checks. For example: > Callee Schema#getView may return null: > {code:java} > public ViewMetadata getView(String keyspaceName, String viewName) > { > assert keyspaceName != null; > KeyspaceMetadata ksm = keyspaces.getNullable(keyspaceName); > return (ksm == null) ? null : ksm.views.getNullable(viewName);//may > return null > } > {code} > It has 4 callers; 3 of them have a !=null check, e.g. its caller > MigrationManager#announceViewDrop has a !=null check: > {code:java} > public static void announceViewDrop(String ksName, String viewName, boolean > announceLocally) throws ConfigurationException > { >ViewMetadata view = Schema.instance.getView(ksName, viewName); > if (view == null)//null pointer checker > throw new ConfigurationException(String.format("Cannot drop non > existing materialized view '%s' in keyspace '%s'.", viewName, ksName)); >KeyspaceMetadata ksm = Schema.instance.getKeyspaceMetadata(ksName); >logger.info("Drop table '{}/{}'", view.keyspace, view.name); >announce(SchemaKeyspace.makeDropViewMutation(ksm, view, > FBUtilities.timestampMicros()), announceLocally); > } > {code} > but the caller MigrationManager#announceMigration does not. > We add a !=null check modelled on MigrationManager#announceViewDrop: > {code:java} > if (current == null) > throw new InvalidRequestException("There is no materialized view in > keyspace " + keyspace()); > {code} > Since we are not very familiar with Cassandra, we hope an expert can > review it. > Thanks > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14487) Unset GREP_OPTIONS
[ https://issues.apache.org/jira/browse/CASSANDRA-14487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14487: - Component/s: Packaging > Unset GREP_OPTIONS > -- > > Key: CASSANDRA-14487 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14487 > Project: Cassandra > Issue Type: Bug > Components: Packaging >Reporter: Joaquin Casares >Assignee: Joaquin Casares >Priority: Major > > I have always had GREP_OPTIONS set to \{{--color=always}}. > Recently, on OS X, this bit me here: > * > [https://github.com/apache/cassandra/blob/069e383f57e3106bbe2e6ddcebeae77da1ea53e1/conf/cassandra-env.sh#L132] > Because GREP_OPTIONS is also deprecated, it's suggested you use the following > format instead: > {noformat} > alias grep="grep --color=always" > {noformat} > We have two paths forward: > * {{unset GREP_OPTIONS}} > * Force the affected line to be {{grep --color=never}} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14477) The check of num_tokens against the length of initial_token in the yaml triggers unexpectedly
[ https://issues.apache.org/jira/browse/CASSANDRA-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14477: - Component/s: Configuration > The check of num_tokens against the length of initial_token in the yaml > triggers unexpectedly > > > Key: CASSANDRA-14477 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14477 > Project: Cassandra > Issue Type: Bug > Components: Configuration >Reporter: Vincent White >Assignee: Vincent White >Priority: Minor > > In CASSANDRA-10120 we added a check that compares num_tokens against the > number of tokens supplied in the yaml via initial_token. From my reading of > CASSANDRA-10120, it was meant to prevent Cassandra from starting if the yaml contained > contradictory values for num_tokens and initial_token, which should help > prevent misconfiguration via human error. The current behaviour appears to > differ slightly in that it performs this comparison regardless of whether > num_tokens is included in the yaml or not. Below are proposed patches to only > perform the check if both options are present in the yaml. > ||Branch|| > |[3.0.x|https://github.com/apache/cassandra/compare/cassandra-3.0...vincewhite:num_tokens_30]| > |[3.x|https://github.com/apache/cassandra/compare/cassandra-3.11...vincewhite:num_tokens_test_1_311]| -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
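As a rough illustration of the proposed behaviour, here is a sketch with simplified stand-ins for the parsed yaml fields (this is not Cassandra's actual Config/DatabaseDescriptor code): the cross-check only fires when both options are actually set.

{code:java}
import java.util.Arrays;
import java.util.List;

public class TokenConfigCheck
{
    // Stand-ins for the parsed cassandra.yaml options; null/empty means "not set in the yaml".
    static void validate(Integer numTokens, List<String> initialTokens)
    {
        if (numTokens == null || initialTokens == null || initialTokens.isEmpty())
            return; // only one of the options is present: nothing to cross-check

        if (numTokens != initialTokens.size())
            throw new IllegalStateException(String.format(
                "num_tokens (%d) contradicts the number of initial_token entries (%d)",
                numTokens, initialTokens.size()));
    }

    public static void main(String[] args)
    {
        validate(null, Arrays.asList("0", "3074457345618258602")); // ok: num_tokens not set
        validate(2, Arrays.asList("0", "3074457345618258602"));    // ok: values agree
        try
        {
            validate(1, Arrays.asList("0", "3074457345618258602")); // contradictory yaml
        }
        catch (IllegalStateException e)
        {
            System.out.println(e.getMessage());
        }
    }
}
{code}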
[jira] [Updated] (CASSANDRA-14526) dtest to validate Cassandra state post failed/successful bootstrap
[ https://issues.apache.org/jira/browse/CASSANDRA-14526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14526: - Component/s: Testing > dtest to validate Cassandra state post failed/successful bootstrap > -- > > Key: CASSANDRA-14526 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14526 > Project: Cassandra > Issue Type: Sub-task > Components: Testing >Reporter: Jaydeepkumar Chovatia >Assignee: Jaydeepkumar Chovatia >Priority: Major > Labels: dtest > > Please find dtest here: > || dtest || > | [patch > |https://github.com/apache/cassandra-dtest/compare/master...jaydeepkumar1984:14526-trunk]| -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14559) Check for endpoint collision with hibernating nodes
[ https://issues.apache.org/jira/browse/CASSANDRA-14559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14559: - Component/s: Distributed Metadata > Check for endpoint collision with hibernating nodes > > > Key: CASSANDRA-14559 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14559 > Project: Cassandra > Issue Type: Bug > Components: Distributed Metadata >Reporter: Vincent White >Assignee: Vincent White >Priority: Major > > I ran across an edge case when replacing a node with the same address. This > issue results in the node (and its tokens) being unsafely removed from gossip. > Steps to replicate: > 1. Create a 3-node cluster. > 2. Stop a node > 3. Replace the stopped node with a node using the same address using the > replace_address flag > 4. Stop the node before it finishes bootstrapping > 5. Remove the replace_address flag and restart the node to resume > bootstrapping (if the data dir is also cleared at this point the node will > also generate new tokens when it starts) > 6. Stop the node before it finishes bootstrapping again > 7. 30 seconds later the node will be removed from gossip because it now > matches the check for a FatClient > I think this is only an issue when replacing a node with the same address > because other replacements now use STATUS_BOOTSTRAPPING_REPLACE and leave the > dead node unchanged. > I believe the simplest fix for this is to add a check that prevents a > non-bootstrapped node (without the replace_address flag) from starting if there > is a gossip entry for the same address in the hibernate state. > [3.11 PoC > |https://github.com/apache/cassandra/compare/trunk...vincewhite:check_for_hibernate_on_start] > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
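A simplified sketch of the proposed startup guard, using a plain map as a stand-in for gossip state (a real implementation would go through Cassandra's Gossiper and system properties, which are not modelled here):

{code:java}
import java.util.Collections;
import java.util.Map;

public class HibernateStartupGuard
{
    enum GossipStatus { NORMAL, HIBERNATE, LEFT }

    /**
     * Refuse to start a non-bootstrapped node that is not an explicit replacement
     * if gossip already holds a hibernating entry for the same address, instead of
     * letting it collide and later be purged like a fat client.
     */
    static void checkForHibernatingCollision(String localAddress,
                                             boolean alreadyBootstrapped,
                                             boolean replaceAddressSet,
                                             Map<String, GossipStatus> gossipStates)
    {
        if (alreadyBootstrapped || replaceAddressSet)
            return; // an existing member, or an intentional replacement: allowed

        if (gossipStates.get(localAddress) == GossipStatus.HIBERNATE)
            throw new IllegalStateException(
                "Gossip already contains a hibernating node at " + localAddress +
                "; use the replace_address option if this node is meant to replace it.");
    }

    public static void main(String[] args)
    {
        Map<String, GossipStatus> gossip = Collections.singletonMap("10.0.0.3", GossipStatus.HIBERNATE);
        checkForHibernatingCollision("10.0.0.3", false, true, gossip);      // ok: explicit replacement
        try
        {
            checkForHibernatingCollision("10.0.0.3", false, false, gossip); // unsafe restart
        }
        catch (IllegalStateException e)
        {
            System.out.println(e.getMessage());
        }
    }
}
{code}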
[jira] [Updated] (CASSANDRA-14575) Reevaluate when to drop an internode connection on message error
[ https://issues.apache.org/jira/browse/CASSANDRA-14575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14575: - Component/s: Streaming and Messaging > Reevaluate when to drop an internode connection on message error > > > Key: CASSANDRA-14575 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14575 > Project: Cassandra > Issue Type: Improvement > Components: Streaming and Messaging >Reporter: Jason Brown >Assignee: Jason Brown >Priority: Minor > Fix For: 4.0 > > > As mentioned in CASSANDRA-14574, explore if and when we can safely ignore an > incoming internode message on certain classes of failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14589) CommitLogReplayer.handleReplayError swallows stack traces
[ https://issues.apache.org/jira/browse/CASSANDRA-14589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14589: - Component/s: Observability > CommitLogReplayer.handleReplayError swallows stack traces > -- > > Key: CASSANDRA-14589 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14589 > Project: Cassandra > Issue Type: Bug > Components: Observability >Reporter: Benedict >Assignee: Benedict >Priority: Minor > Fix For: 3.0.x > > > handleReplayError does not accept an explicit Throwable parameter, so callers > only integrate the exception’s message text into the log entry. This means a > loss of debug information for operators. > Note, this was fixed by CASSANDRA-8844 for 3.x+, only 3.0.x is affected. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
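One shape the fix can take, sketched below with an assumed signature (the real method's parameters differ, and slf4j on the classpath is assumed): accept the causing Throwable and hand it to the logger so the full stack trace is preserved rather than only the message text.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ReplayErrorLogging
{
    private static final Logger logger = LoggerFactory.getLogger(ReplayErrorLogging.class);

    /**
     * Sketch only: the Throwable travels with the log entry (or the rethrown
     * exception), instead of being flattened into a formatted message.
     */
    static void handleReplayError(boolean permissible, Throwable cause, String format, Object... args)
    {
        String message = String.format(format, args);
        if (permissible)
            logger.error("Ignoring commit log replay error: {}", message, cause); // trailing Throwable keeps the stack trace
        else
            throw new RuntimeException(message, cause); // wrap so the original trace is not lost
    }
}
{code}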
[jira] [Updated] (CASSANDRA-14588) Unfiltered.isEmpty conflicts with Row extends AbstractCollection.isEmpty
[ https://issues.apache.org/jira/browse/CASSANDRA-14588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14588: - Component/s: Local Write-Read Paths > Unfiltered.isEmpty conflicts with Row extends AbstractCollection.isEmpty > > > Key: CASSANDRA-14588 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14588 > Project: Cassandra > Issue Type: Bug > Components: Local Write-Read Paths >Reporter: Benedict >Assignee: Benedict >Priority: Minor > Fix For: 4.0, 3.0.x, 3.11.x > > > The isEmpty() method’s definition for a Row is incompatible with that for a > Collection. The former can return false even if there is no ColumnData for > the row (i.e. the collection is of size 0). > > This currently, by chance, doesn’t cause us any problems. But if we ever > pass a Row as a Collection to a method that invokes isEmpty() and then > expects (for correctness) that the _collection_ portion is not empty, it will > fail. > > We should probably have an asCollection() method to obtain a collection from > a Row, and not implement Collection directly. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
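A self-contained toy example (not Cassandra's real Row class) of why the overlap is dangerous: a size-0 Collection whose isEmpty() returns false will surprise any generic code that assumes the two are equivalent.

{code:java}
import java.util.AbstractCollection;
import java.util.Collections;
import java.util.Iterator;

public class RowLikeCollection extends AbstractCollection<String>
{
    private final boolean hasDeletionInfo = true; // pretend the row carries a deletion/liveness marker

    @Override
    public Iterator<String> iterator() { return Collections.emptyIterator(); } // no column data at all

    @Override
    public int size() { return 0; }

    @Override
    public boolean isEmpty()
    {
        // Domain meaning: "the row carries no information at all", which can be
        // false even though the collection of column data has size 0.
        return !hasDeletionInfo && size() == 0;
    }

    public static void main(String[] args)
    {
        RowLikeCollection row = new RowLikeCollection();
        System.out.println(row.size());    // 0
        System.out.println(row.isEmpty()); // false -- surprising for a size-0 Collection
    }
}
{code}

An asCollection() accessor, as the ticket suggests, keeps the two meanings from ever being confused at a call site.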
[jira] [Updated] (CASSANDRA-14591) Throw exception if Columns serialized subset encode more columns than possible
[ https://issues.apache.org/jira/browse/CASSANDRA-14591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14591: - Component/s: Local Write-Read Paths > Throw exception if Columns serialized subset encode more columns than possible > -- > > Key: CASSANDRA-14591 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14591 > Project: Cassandra > Issue Type: Improvement > Components: Local Write-Read Paths >Reporter: Benedict >Assignee: Benedict >Priority: Minor > Fix For: 4.0, 3.0.x, 3.11.x > > > When deserializing a \{{Columns}} subset via bitset membership, it is trivial > to add a modest probability of detecting corruption, by simply testing that > there are no higher bits set than the candidate \{{Columns}} permits. This > would help mitigate secondary problems arising from issues like > CASSANDRA-14568. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
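The check is cheap, as this simplified sketch of a bitmask-encoded subset shows; the real Columns serialization is more involved, and the helper below is purely illustrative.

{code:java}
public class ColumnSubsetCheck
{
    /**
     * When a column subset is encoded as a bitmask over the candidate columns,
     * any set bit at or above the column count cannot be valid, so reject it
     * instead of decoding garbage.
     */
    static void validateSubsetBits(long encodedBitmap, int columnCount)
    {
        if (columnCount >= 64)
            return; // a single long cannot overflow 64 columns in this simplified sketch

        long legalMask = (1L << columnCount) - 1; // bits 0..columnCount-1 are the only legal ones
        if ((encodedBitmap & ~legalMask) != 0)
            throw new IllegalStateException(String.format(
                "Corrupt column subset: bitmap %x encodes columns beyond the %d known columns",
                encodedBitmap, columnCount));
    }

    public static void main(String[] args)
    {
        validateSubsetBits(0b0101, 4); // fine: only columns 0 and 2 are referenced
        try
        {
            validateSubsetBits(0b10000, 4); // bit 4 is set but only 4 columns exist
        }
        catch (IllegalStateException e)
        {
            System.out.println(e.getMessage());
        }
    }
}
{code}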
[jira] [Updated] (CASSANDRA-14613) ant generate-idea-files / generate-eclipse-files needs update after CASSANDRA-9608
[ https://issues.apache.org/jira/browse/CASSANDRA-14613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14613: - Component/s: Build > ant generate-idea-files / generate-eclipse-files needs update after > CASSANDRA-9608 > -- > > Key: CASSANDRA-14613 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14613 > Project: Cassandra > Issue Type: Bug > Components: Build >Reporter: Marcus Eriksson >Assignee: Robert Stupp >Priority: Major > Fix For: 4.x > > > {{ide/idea-iml-file.xml}} looks hard-coded to include {{src/java11}} when > creating the project; this should probably detect which version we are > building for instead. > cc [~snazy] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14629) Abstract Virtual Table for very large result sets
[ https://issues.apache.org/jira/browse/CASSANDRA-14629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14629: - Component/s: Observability CQL > Abstract Virtual Table for very large result sets > - > > Key: CASSANDRA-14629 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14629 > Project: Cassandra > Issue Type: New Feature > Components: CQL, Observability >Reporter: Chris Lohfink >Assignee: Chris Lohfink >Priority: Minor > Labels: pull-request-available, virtual-tables > Time Spent: 10m > Remaining Estimate: 0h > > For virtual tables that are very large we cannot use the existing > AbstractVirtualTable since it could possibly OOM the node. An example would > be a table to view the internal cache contents or to view the contents of > sstables. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
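Conceptually the change amounts to producing rows lazily rather than materialising the whole result set, along the lines of this illustrative sketch; it uses plain Java iterators and has no relation to Cassandra's actual virtual-table classes.

{code:java}
import java.util.Iterator;
import java.util.function.Function;
import java.util.stream.LongStream;

public class LazyVirtualTableSketch
{
    interface RowSource<T>
    {
        Iterator<T> rows(); // produce rows on demand instead of building a full collection
    }

    // Hypothetical source of one row per sstable; nothing is generated until the iterator is consumed.
    static RowSource<String> sstableNames(long sstableCount, Function<Long, String> nameOf)
    {
        return () -> LongStream.range(0, sstableCount).mapToObj(i -> nameOf.apply(i)).iterator();
    }

    public static void main(String[] args)
    {
        RowSource<String> source = sstableNames(1_000_000_000L, i -> "sstable-" + i);
        Iterator<String> it = source.rows();
        for (int n = 0; n < 3 && it.hasNext(); n++)
            System.out.println(it.next()); // only three rows are ever generated, regardless of the total
    }
}
{code}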
[jira] [Updated] (CASSANDRA-14655) Upgrade C* to use latest guava (27.0)
[ https://issues.apache.org/jira/browse/CASSANDRA-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14655: - Labels: 4.0-feature-freeze-review-requested (was: ) > Upgrade C* to use latest guava (27.0) > - > > Key: CASSANDRA-14655 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14655 > Project: Cassandra > Issue Type: Improvement > Components: Libraries >Reporter: Sumanth Pasupuleti >Assignee: Sumanth Pasupuleti >Priority: Minor > Labels: 4.0-feature-freeze-review-requested > Fix For: 4.x > > > C* currently uses guava 23.3. This JIRA is about changing C* to use latest > guava (26.0). Originated from a discussion in the mailing list. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14670) Table Metrics Virtual Table
[ https://issues.apache.org/jira/browse/CASSANDRA-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14670: - Component/s: Observability > Table Metrics Virtual Table > --- > > Key: CASSANDRA-14670 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14670 > Project: Cassandra > Issue Type: Improvement > Components: CQL, Observability >Reporter: Chris Lohfink >Assignee: Chris Lohfink >Priority: Minor > Labels: pull-request-available, virtual-tables > Fix For: 4.0.x > > Time Spent: 10m > Remaining Estimate: 0h > > Different than CASSANDRA-14572 whose goal is to expose all metrics. This is > to expose a few hand tailored tables that are particularly useful in > debugging slow Cassandra instances (in my experience). These are useful in > finding out which table it is that is having issues if you see a node > performing poorly in general. This can kinda be figured out with cfstats > sorting and some clever bash-foo but its been a bit of a operational UX pain > for me personally for awhile. > examples: > {code} > cqlsh> select * from system_views.max_partition_size limit 5; > max_partition_size | keyspace_name | table_name > +---+ > 126934 |system | size_estimates >9887 | system_schema |columns >9887 | system_schema | tables >6866 |system | local > 258 | keyspace1 | standard1 > (5 rows) > cqlsh> select * from system_views.local_reads limit 5 ; > count | keyspace_name | table_name | 99th | max | median | > per_second > ---+---+-+---+---+-+ > 23 |system | local | 186563160 | 186563160 | 1629722 | > 3.56101 > 22 | system_schema | tables | 4055269 | 4055269 | 454826 | > 3.72452 > 14 | system_schema | columns | 1131752 | 1131752 | 545791 | > 2.37015 > 14 | system_schema | dropped_columns |126934 |126934 | 88148 | > 2.37015 > 14 | system_schema | indexes |219342 |219342 | 152321 | > 2.37015 > (5 rows) > cqlsh> select * from system_views.coordinator_reads limit 5; > count | keyspace_name | table_name | 99th | max | median | per_second > ---+---++--+-++ > 2 |system | local |0 | 0 | 0 | 0.005324 > 1 | system_auth | roles |0 | 0 | 0 | 0.002662 > 0 | basic | wide |0 | 0 | 0 | 0 > 0 | basic | wide3 |0 | 0 | 0 | 0 > 0 | keyspace1 | counter1 |0 | 0 | 0 | 0 > (5 rows) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14670) Table Metrics Virtual Table
[ https://issues.apache.org/jira/browse/CASSANDRA-14670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14670: - Fix Version/s: 4.0.x > Table Metrics Virtual Table > --- > > Key: CASSANDRA-14670 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14670 > Project: Cassandra > Issue Type: Improvement >Reporter: Chris Lohfink >Assignee: Chris Lohfink >Priority: Minor > Labels: pull-request-available, virtual-tables > Fix For: 4.0.x > > Time Spent: 10m > Remaining Estimate: 0h > > Different than CASSANDRA-14572 whose goal is to expose all metrics. This is > to expose a few hand tailored tables that are particularly useful in > debugging slow Cassandra instances (in my experience). These are useful in > finding out which table it is that is having issues if you see a node > performing poorly in general. This can kinda be figured out with cfstats > sorting and some clever bash-foo but its been a bit of a operational UX pain > for me personally for awhile. > examples: > {code} > cqlsh> select * from system_views.max_partition_size limit 5; > max_partition_size | keyspace_name | table_name > +---+ > 126934 |system | size_estimates >9887 | system_schema |columns >9887 | system_schema | tables >6866 |system | local > 258 | keyspace1 | standard1 > (5 rows) > cqlsh> select * from system_views.local_reads limit 5 ; > count | keyspace_name | table_name | 99th | max | median | > per_second > ---+---+-+---+---+-+ > 23 |system | local | 186563160 | 186563160 | 1629722 | > 3.56101 > 22 | system_schema | tables | 4055269 | 4055269 | 454826 | > 3.72452 > 14 | system_schema | columns | 1131752 | 1131752 | 545791 | > 2.37015 > 14 | system_schema | dropped_columns |126934 |126934 | 88148 | > 2.37015 > 14 | system_schema | indexes |219342 |219342 | 152321 | > 2.37015 > (5 rows) > cqlsh> select * from system_views.coordinator_reads limit 5; > count | keyspace_name | table_name | 99th | max | median | per_second > ---+---++--+-++ > 2 |system | local |0 | 0 | 0 | 0.005324 > 1 | system_auth | roles |0 | 0 | 0 | 0.002662 > 0 | basic | wide |0 | 0 | 0 | 0 > 0 | basic | wide3 |0 | 0 | 0 | 0 > 0 | keyspace1 | counter1 |0 | 0 | 0 | 0 > (5 rows) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14727) Transient Replication: EACH_QUORUM not implemented
[ https://issues.apache.org/jira/browse/CASSANDRA-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14727: - Component/s: Coordination > Transient Replication: EACH_QUORUM not implemented > -- > > Key: CASSANDRA-14727 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14727 > Project: Cassandra > Issue Type: Improvement > Components: Coordination >Reporter: Benedict >Assignee: Benedict >Priority: Major > Fix For: 4.0 > > > Transient replication cannot presently handle EACH_QUORUM consistency; reads > and writes should currently fail, though without good error messages. Not > clear if this is acceptable for GA, since we cannot impose this limitation at > Keyspace declaration time. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
[jira] [Updated] (CASSANDRA-14716) Protocol frame checksumming options should not be case sensitive
[ https://issues.apache.org/jira/browse/CASSANDRA-14716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14716: - Component/s: CQL > Protocol frame checksumming options should not be case sensitive > > > Key: CASSANDRA-14716 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14716 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sam Tunnicliffe >Assignee: Sam Tunnicliffe >Priority: Major > Fix For: 4.0 > > > Protocol v5 adds support for checksumming of native protocol frame bodies. > The checksum type is negotiated per-connection via the \{{STARTUP}} message, > with two types currently supported, Adler32 and CRC32. The mapping of the > startup option value requested by the client to a \{{ChecksumType}} should > not be case sensitive, but currently it is. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
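The fix is essentially a case-insensitive lookup when translating the STARTUP option value into the enum; a minimal sketch follows, with a stand-in enum rather than the real org.apache.cassandra ChecksumType.

{code:java}
import java.util.Locale;

public class ChecksumOptionMapping
{
    enum ChecksumType { ADLER32, CRC32 }

    /**
     * Normalise the client-supplied option value before mapping it, so
     * "crc32", "CRC32" and "Crc32" all negotiate the same algorithm.
     */
    static ChecksumType fromStartupOption(String requested)
    {
        try
        {
            return ChecksumType.valueOf(requested.toUpperCase(Locale.US));
        }
        catch (IllegalArgumentException e)
        {
            throw new IllegalArgumentException("Unknown checksum type requested: " + requested, e);
        }
    }

    public static void main(String[] args)
    {
        System.out.println(fromStartupOption("crc32"));   // CRC32
        System.out.println(fromStartupOption("Adler32")); // ADLER32
    }
}
{code}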
[jira] [Updated] (CASSANDRA-14869) Range.subtractContained produces incorrect results when used on full ring
[ https://issues.apache.org/jira/browse/CASSANDRA-14869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] C. Scott Andreas updated CASSANDRA-14869: - Reviewers: Alex Petrov > Range.subtractContained produces incorrect results when used on full ring > - > > Key: CASSANDRA-14869 > URL: https://issues.apache.org/jira/browse/CASSANDRA-14869 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Aleksandr Sorokoumov >Assignee: Aleksandr Sorokoumov >Priority: Major > Fix For: 3.0.x, 3.11.x, 4.0.x > > Attachments: range bug.jpg > > > Currently {{Range.subtractContained}} returns incorrect results if the minuend > range covers the full ring and: > * the subtrahend range wraps around. For example, {{(50, 50] - (10, 100]}} > returns {{\{(50,10], (100,50]\}}} instead of {{(100,10]}} > * the subtrahend range covers the full ring as well. For example {{(50, 50] - (0, > 0]}} returns {{\{(0,50], (50,0]\}}} instead of {{\{\}}} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org For additional commands, e-mail: commits-h...@cassandra.apache.org
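For the full-ring minuend case the expected results are easy to state, as the toy model below shows (it is not Cassandra's Range implementation and only encodes this one case): subtracting (l, r] from the full ring should leave the wrapping complement (r, l], and subtracting another full-ring range should leave nothing.

{code:java}
import java.util.Collections;
import java.util.Set;

public class FullRingSubtraction
{
    static final class TokenRange
    {
        final long left, right;
        TokenRange(long left, long right) { this.left = left; this.right = right; }
        boolean isFullRing() { return left == right; }
        @Override public String toString() { return "(" + left + "," + right + "]"; }
    }

    /** Expected behaviour when the minuend is the full ring, e.g. (50, 50]. */
    static Set<TokenRange> subtractFromFullRing(TokenRange subtrahend)
    {
        if (subtrahend.isFullRing())
            return Collections.emptySet();                                        // everything removed
        return Collections.singleton(new TokenRange(subtrahend.right, subtrahend.left)); // the wrapping complement
    }

    public static void main(String[] args)
    {
        System.out.println(subtractFromFullRing(new TokenRange(10, 100))); // [(100,10]]
        System.out.println(subtractFromFullRing(new TokenRange(0, 0)));    // []
    }
}
{code}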