[jira] [Updated] (CASSANDRA-12202) LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-12202: Status: Open (was: Patch Available) yeah, I'll have a look > LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0 > --- > > Key: CASSANDRA-12202 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12202 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson > Fix For: 2.1.x, 2.2.x, 3.0.x > > > We actually fixed this for 3.7+ in CASSANDRA-11657, need to backport that fix > to 2.1+ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7384) Collect metrics on queries by consistency level
[ https://issues.apache.org/jira/browse/CASSANDRA-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1537#comment-1537 ] Robert Stupp commented on CASSANDRA-7384: - Also kicked off CI tests: ||cassandra-3.0|[branch|https://github.com/apache/cassandra/compare/cassandra-3.0...snazy:7384-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-7384-3.0-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-7384-3.0-dtest/lastSuccessfulBuild/] ||trunk|[branch|https://github.com/apache/cassandra/compare/trunk...snazy:7384-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-7384-trunk-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-7384-trunk-dtest/lastSuccessfulBuild/] > Collect metrics on queries by consistency level > --- > > Key: CASSANDRA-7384 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7384 > Project: Cassandra > Issue Type: Improvement >Reporter: Vishy Kasar >Assignee: sankalp kohli >Priority: Minor > Fix For: 3.x > > Attachments: CASSANDRA-7384_3.0_v2.txt > > > We had cases where cassandra client users thought that they were doing > queries at one consistency level but turned out to be not correct. It will be > good to collect metrics on number of queries done at various consistency > level on the server. See the equivalent JIRA on java driver: > https://datastax-oss.atlassian.net/browse/JAVA-354 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-7384) Collect metrics on queries by consistency level
[ https://issues.apache.org/jira/browse/CASSANDRA-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378877#comment-15378877 ] Robert Stupp edited comment on CASSANDRA-7384 at 7/15/16 5:19 AM: -- Generally LGTM - just one comment: * we use {{.}} as a separator throughout the metrics. It would be nicer to use a {{.}} instead of {{\-}} for the metric names (in the constructor initializing the metric maps - e.g. {{new ClientRequestMetrics("Read-" + level.name())}}) If you're fine, I can fix that on commit. was (Author: snazy): Generally LGTM - just one comment: * we use {{.}} as a separator throughout the metrics. It would be nicer to use a {{.}} instead of {{-}} for the metric names (in the constructor initializing the metric maps - e.g. {{new ClientRequestMetrics("Read-" + level.name())}}) If you're fine, I can fix that on commit. > Collect metrics on queries by consistency level > --- > > Key: CASSANDRA-7384 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7384 > Project: Cassandra > Issue Type: Improvement >Reporter: Vishy Kasar >Assignee: sankalp kohli >Priority: Minor > Fix For: 3.x > > Attachments: CASSANDRA-7384_3.0_v2.txt > > > We had cases where cassandra client users thought that they were doing > queries at one consistency level but turned out to be not correct. It will be > good to collect metrics on number of queries done at various consistency > level on the server. See the equivalent JIRA on java driver: > https://datastax-oss.atlassian.net/browse/JAVA-354 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7384) Collect metrics on queries by consistency level
[ https://issues.apache.org/jira/browse/CASSANDRA-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-7384: Status: Open (was: Patch Available) Generally LGTM - just one comment: * we use {{.}} as a separator throughout the metrics. It would be nicer to use a {{.}} instead of {{-}} for the metric names (in the constructor initializing the metric maps - e.g. {{new ClientRequestMetrics("Read-" + level.name())}}) If you're fine, I can fix that on commit. > Collect metrics on queries by consistency level > --- > > Key: CASSANDRA-7384 > URL: https://issues.apache.org/jira/browse/CASSANDRA-7384 > Project: Cassandra > Issue Type: Improvement >Reporter: Vishy Kasar >Assignee: sankalp kohli >Priority: Minor > Fix For: 3.x > > Attachments: CASSANDRA-7384_3.0_v2.txt > > > We had cases where cassandra client users thought that they were doing > queries at one consistency level but turned out to be not correct. It will be > good to collect metrics on number of queries done at various consistency > level on the server. See the equivalent JIRA on java driver: > https://datastax-oss.atlassian.net/browse/JAVA-354 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
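The separator change under discussion is a one-character difference in how the metric scope string is built before it reaches the {{ClientRequestMetrics}} constructor. A minimal sketch of the dot-separated convention; {{metricScope}} is a hypothetical helper introduced here for illustration (the real code passes the string straight to the constructor):

```java
// Sketch of the dot-separated metric naming discussed above.
// metricScope is a hypothetical helper, not Cassandra's actual API.
public class MetricNameSketch {
    public enum ConsistencyLevel { ONE, QUORUM, ALL }

    // Join the request type and consistency level with ".", the separator
    // used throughout Cassandra's metrics, instead of "-".
    public static String metricScope(String request, Enum<?> level) {
        return request + "." + level.name();
    }

    public static void main(String[] args) {
        // Produces "Read.QUORUM" rather than "Read-QUORUM"
        System.out.println(metricScope("Read", ConsistencyLevel.QUORUM));
    }
}
```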
[jira] [Created] (CASSANDRA-12211) Avoid giving every command the same latency number
Robert Stupp created CASSANDRA-12211: Summary: Avoid giving every command the same latency number Key: CASSANDRA-12211 URL: https://issues.apache.org/jira/browse/CASSANDRA-12211 Project: Cassandra Issue Type: Improvement Reporter: Robert Stupp Priority: Minor While reviewing CASSANDRA-7384, I found a _TODO avoid giving every command the same latency number. Can fix this in CASSANDRA-5329_ [here|https://github.com/apache/cassandra/blob/70059726f08a98ea21af91ce3855bf62f6f4b652/src/java/org/apache/cassandra/service/StorageProxy.java#L1631] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11031) MultiTenant : support “ALLOW FILTERING" for Partition Key
[ https://issues.apache.org/jira/browse/CASSANDRA-11031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378862#comment-15378862 ] ZhaoYang commented on CASSANDRA-11031: -- [~ifesdjeen] thanks for reviewing. Is it necessary to support filtering with the IN condition, or just EQ/GT/LT? I will add more unit tests. > MultiTenant : support “ALLOW FILTERING" for Partition Key > - > > Key: CASSANDRA-11031 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11031 > Project: Cassandra > Issue Type: New Feature > Components: CQL >Reporter: ZhaoYang >Assignee: ZhaoYang >Priority: Minor > Fix For: 3.x > > Attachments: CASSANDRA-11031-3.7.patch > > > Currently, ALLOW FILTERING only works for secondary index columns or > clustering columns. And it's slow, because Cassandra will read all data from > SSTables on disk into memory to filter. > But we can support ALLOW FILTERING on the partition key: as far as I know, > partition keys are held in memory, so we can easily filter them, and then read the > required data from SSTables. > This will be similar to "Select * from table", which scans through the entire cluster. > CREATE TABLE multi_tenant_table ( > tenant_id text, > pk2 text, > c1 text, > c2 text, > v1 text, > v2 text, > PRIMARY KEY ((tenant_id,pk2),c1,c2) > ) ; > Select * from multi_tenant_table where tenant_id = "datastax" allow filtering; -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378857#comment-15378857 ] Stefania commented on CASSANDRA-11465: -- Test results are very disappointing: I reproduced [the issue|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-cqlsh-dtest/lastCompletedBuild/testReport/cql_tracing_test/TestCqlTracing/tracing_simple_test/] despite the patch. Looking into it. BTW, the other 3 failing cqlsh tests are expected; there is a companion dtest PR that will be committed with CASSANDRA-11850 and will fix those failures. > dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test > -- > > Key: CASSANDRA-11465 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11465 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Stefania > Labels: dtest > > Failing on the following assert, on trunk only: > {{self.assertEqual(len(errs[0]), 1)}} > Is not failing consistently. > example failure: > http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test > Failed on CassCI build trunk_dtest #1087 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-11374) LEAK DETECTED during repair
[ https://issues.apache.org/jira/browse/CASSANDRA-11374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-11374. Resolution: Cannot Reproduce Assignee: (was: Marcus Eriksson) Please reopen if you can reproduce. > LEAK DETECTED during repair > --- > > Key: CASSANDRA-11374 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11374 > Project: Cassandra > Issue Type: Bug >Reporter: Jean-Francois Gosselin > Attachments: Leak_Logs_1.zip, Leak_Logs_2.zip > > > When running a range repair we are seeing the following LEAK DETECTED errors: > {noformat} > ERROR [Reference-Reaper:1] 2016-03-17 06:58:52,261 Ref.java:179 - LEAK > DETECTED: a reference > (org.apache.cassandra.utils.concurrent.Ref$State@5ee90b43) to class > org.apache.cassandra.utils.concurrent.WrappedSharedCloseable$1@367168611:[[OffHeapBitSet]] > was not released before the reference was garbage collected > ERROR [Reference-Reaper:1] 2016-03-17 06:58:52,262 Ref.java:179 - LEAK > DETECTED: a reference > (org.apache.cassandra.utils.concurrent.Ref$State@4ea9d4a7) to class > org.apache.cassandra.io.util.SafeMemory$MemoryTidy@1875396681:Memory@[7f34b905fd10..7f34b9060b7a) > was not released before the reference was garbage collected > ERROR [Reference-Reaper:1] 2016-03-17 06:58:52,262 Ref.java:179 - LEAK > DETECTED: a reference > (org.apache.cassandra.utils.concurrent.Ref$State@27a6b614) to class > org.apache.cassandra.io.util.SafeMemory$MemoryTidy@838594402:Memory@[7f34bae11ce0..7f34bae11d84) > was not released before the reference was garbage collected > ERROR [Reference-Reaper:1] 2016-03-17 06:58:52,263 Ref.java:179 - LEAK > DETECTED: a reference > (org.apache.cassandra.utils.concurrent.Ref$State@64e7b566) to class > org.apache.cassandra.io.util.SafeMemory$MemoryTidy@674656075:Memory@[7f342deab4e0..7f342deb7ce0) > was not released before the reference was garbage collected > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10520) Compressed writer and reader should support non-compressed data.
[ https://issues.apache.org/jira/browse/CASSANDRA-10520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-10520: - Status: Open (was: Patch Available) Just set status to "open" as we're still a few days away from 4.0. Feel free to set to PA as soon as 4.0 is in sight. > Compressed writer and reader should support non-compressed data. > > > Key: CASSANDRA-10520 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10520 > Project: Cassandra > Issue Type: Improvement > Components: Local Write-Read Paths >Reporter: Branimir Lambov >Assignee: Branimir Lambov > Labels: messaging-service-bump-required > Fix For: 4.x > > > Compressing uncompressible data, as done, for instance, to write SSTables > during stress-tests, results in chunks larger than 64k which are a problem > for the buffer pooling mechanisms employed by the > {{CompressedRandomAccessReader}}. This results in non-negligible performance > issues due to excessive memory allocation. > To solve this problem and avoid decompression delays in the cases where it > does not provide benefits, I think we should allow compressed files to store > uncompressed chunks as alternative to compressed data. Such a chunk could be > written after compression returns a buffer larger than, for example, 90% of > the input, and would not result in additional delays in writing. On reads it > could be recognized by size (using a single global threshold constant in the > compression metadata) and data could be directly transferred into the > decompressed buffer, skipping the decompression step and ensuring a 64k > buffer for compressed data always suffices. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
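The write-side decision Branimir proposes can be sketched as follows. The 90% threshold, the class and method names, and the use of {{java.util.zip.Deflater}} are illustrative assumptions for this sketch, not Cassandra's actual compressed-writer code:

```java
import java.nio.ByteBuffer;
import java.util.zip.Deflater;

// Illustrative sketch of the proposal: store a chunk uncompressed when
// compression saves less than ~10%, so a reader never needs a buffer
// larger than the raw chunk size. Names and threshold are assumptions.
public class ChunkWriteSketch {
    public static final double MAX_COMPRESSED_RATIO = 0.9;

    public static ByteBuffer chooseChunk(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] out = new byte[input.length * 2 + 64]; // room for expansion
        int compressedLen = deflater.deflate(out);
        boolean finished = deflater.finished();
        deflater.end();
        // If compression barely helps (or hurts), keep the raw bytes; the
        // reader can recognize this case from the chunk size alone.
        if (!finished || compressedLen >= input.length * MAX_COMPRESSED_RATIO)
            return ByteBuffer.wrap(input);
        return ByteBuffer.wrap(out, 0, compressedLen);
    }
}
```

Highly compressible input (e.g. all zeros) comes out as a smaller compressed chunk, while incompressible input is passed through unchanged.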
[jira] [Commented] (CASSANDRA-12181) Include table name in "Cannot get comparator" exception
[ https://issues.apache.org/jira/browse/CASSANDRA-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378818#comment-15378818 ] Robert Stupp commented on CASSANDRA-12181: -- Would it be ok to just catch {{RuntimeException}} and wrap that one instead of a new {{InvalidCustomTypeException}}? So like this: {code} catch (RuntimeException e) { throw new RuntimeException(e.getMessage() + " This might be due to a mismatch between the schema and the data read for ksName: " + keyspace.getName() + ", cfName: " + name, e); } {code} > Include table name in "Cannot get comparator" exception > --- > > Key: CASSANDRA-12181 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12181 > Project: Cassandra > Issue Type: Improvement >Reporter: sankalp kohli >Assignee: sankalp kohli >Priority: Trivial > Attachments: CASSANDRA-12181_3.0.txt > > > Having the table name will help in debugging the following exception. > ERROR [MutationStage:xx] CassandraDaemon.java (line 199) Exception in thread > Thread[MutationStage:3788,5,main] > clusterName=itms8shared20 > java.lang.RuntimeException: Cannot get comparator 2 in > org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type). > > This might be due to a mismatch between the schema and the data read -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12181) Include table name in "Cannot get comparator" exception
[ https://issues.apache.org/jira/browse/CASSANDRA-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-12181: - Status: Open (was: Patch Available) > Include table name in "Cannot get comparator" exception > --- > > Key: CASSANDRA-12181 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12181 > Project: Cassandra > Issue Type: Improvement >Reporter: sankalp kohli >Assignee: sankalp kohli >Priority: Trivial > Attachments: CASSANDRA-12181_3.0.txt > > > Having table name will help in debugging the following exception. > ERROR [MutationStage:xx] CassandraDaemon.java (line 199) Exception in thread > Thread[MutationStage:3788,5,main] > clusterName=itms8shared20 > java.lang.RuntimeException: Cannot get comparator 2 in > org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type). > > This might be due to a mismatch between the schema and the data read -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator
[ https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378784#comment-15378784 ] Stefania commented on CASSANDRA-9318: - I don't have much to add regarding the merits of one approach vs. the other, other than to say that I agree with [~slebresne] that we should implement the strategy API so that both strategies can be supported, and this will make it more likely that the API is fit for even more strategies. I would even go one step further and suggest that the second strategy should be relatively easy to implement if the framework is in place and if we can work out a reasonable threshold. Therefore we could consider implementing and testing both, either as part of a follow up ticket or this one. Another thing I would like to point out is that, once we make read and write requests fully non-blocking, either via CASSANDRA-10993 or CASSANDRA-10528, we will probably have to rethink this. > Bound the number of in-flight requests at the coordinator > - > > Key: CASSANDRA-9318 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9318 > Project: Cassandra > Issue Type: Improvement > Components: Local Write-Read Paths, Streaming and Messaging >Reporter: Ariel Weisberg >Assignee: Sergio Bossa > Attachments: 9318-3.0-nits-trailing-spaces.patch, backpressure.png, > limit.btm, no_backpressure.png > > > It's possible to somewhat bound the amount of load accepted into the cluster > by bounding the number of in-flight requests and request bytes. > An implementation might do something like track the number of outstanding > bytes and requests and if it reaches a high watermark disable read on client > connections until it goes back below some low watermark. > Need to make sure that disabling read on the client connection won't > introduce other issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
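The high/low-watermark scheme from the ticket description can be sketched as a simple counter of outstanding bytes. All names and thresholds here are assumptions for illustration, not Cassandra's implementation:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the ticket's idea: track outstanding request
// bytes and stop reading from client connections once a high watermark
// is crossed, resuming only after dropping below a low watermark.
public class InFlightLimiter {
    private final long highWatermark;
    private final long lowWatermark;
    private final AtomicLong inFlightBytes = new AtomicLong();
    private volatile boolean paused;

    public InFlightLimiter(long high, long low) {
        this.highWatermark = high;
        this.lowWatermark = low;
    }

    /** Called when a request is accepted; returns false if client reads should pause. */
    public boolean onRequestStart(long bytes) {
        if (inFlightBytes.addAndGet(bytes) >= highWatermark)
            paused = true;   // stop reading from client channels
        return !paused;
    }

    /** Called when a response has been flushed back to the client. */
    public void onRequestDone(long bytes) {
        if (inFlightBytes.addAndGet(-bytes) <= lowWatermark)
            paused = false;  // resume reading
    }

    public boolean isPaused() { return paused; }
}
```

The gap between the two watermarks provides hysteresis, so reads are not toggled on and off by every individual request.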
[jira] [Commented] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378760#comment-15378760 ] Stefania commented on CASSANDRA-11465: -- Since Paulo is back on Monday and it is already Friday, I prefer to leave the test code as it is. I've prepared a patch based on 11850, and launched the tests: ||3.9||trunk|| |[patch|https://github.com/stef1927/cassandra/commits/11465-cqlsh-3.9]|[patch|https://github.com/stef1927/cassandra/commits/11465-cqlsh]| |[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-cqlsh-3.9-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-cqlsh-dtest/]| |[cqlsh dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-cqlsh-3.9-cqlsh-tests/]|[cqlsh dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-cqlsh-cqlsh-tests/]| If the tests are OK, could we arrange a multiplexed job? I think 20x of {{cql_tracing_test.py:TestCqlTracing}} should be sufficient. bq. FWIW, I spent a little time trying to make tracing more reliable for tests in CASSANDRA-11928 by doing synchronous CL.ALL writes when a system flag was present. Unfortunately, this appeared to cause some kind of deadlock, and it didn't seem worth it to investigate further. However, if this is a problem across many tests, we may want to spend more time looking into that. Let's see if we have more luck with the driver reading at CL.ALL. > dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test > -- > > Key: CASSANDRA-11465 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11465 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Stefania > Labels: dtest > > Failing on the following assert, on trunk only: > {{self.assertEqual(len(errs[0]), 1)}} > Is not failing consistently. 
> example failure: > http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test > Failed on CassCI build trunk_dtest #1087 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11850) cannot use cql since upgrading python to 2.7.11+
[ https://issues.apache.org/jira/browse/CASSANDRA-11850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378744#comment-15378744 ] Stefania commented on CASSANDRA-11850: -- [~pauloricardomg], welcome back! Here is a recap to help you catch up more quickly: * The 2.1 patch is just a driver upgrade to a custom version that only fixes the problem with python 2.7.12 (which is trivial in itself) * The 2.2 patch merges cleanly to 3.0 and there is a trivial conflict to 3.9. copyutil.py also contains the fix for CASSANDRA-11979. * The 3.9 patch contains an additional commit for CDC changes * The 3.9 patch merges cleanly to trunk * There are also dtest changes [here|https://github.com/stef1927/cassandra-dtest/tree/11850], of which the most notable changes are for CDC in 3.9+ and in cqlsh_tests.py {{test_refresh_schema_on_timeout_error}}, where I removed not only the operation timed out exception but also the warning on schema mismatch, since it is not true that DOWN nodes will report a schema mismatch, see {{_get_schema_mismatches}} in _cassandra/cluster.py_. In view of this we should perhaps also change the warning message in cqlsh {{refresh_schema_metadata_best_effort}} and close CASSANDRA-11999 once this is committed. 
||2.1||2.2||3.0||3.9||trunk|| |[patch|https://github.com/stef1927/cassandra/commits/11850-cqlsh-2.1]|[patch|https://github.com/stef1927/cassandra/commits/11850-cqlsh-2.2]|[patch|https://github.com/stef1927/cassandra/commits/11850-cqlsh-3.0]|[patch|https://github.com/stef1927/cassandra/commits/11850-cqlsh-3.9]|[patch|https://github.com/stef1927/cassandra/commits/11850-cqlsh]| |[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11850-cqlsh-2.1-cqlsh-tests/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11850-cqlsh-2.2-cqlsh-tests/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11850-cqlsh-3.0-cqlsh-tests/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11850-cqlsh-3.9-cqlsh-tests/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11850-cqlsh-cqlsh-tests/]| > cannot use cql since upgrading python to 2.7.11+ > > > Key: CASSANDRA-11850 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11850 > Project: Cassandra > Issue Type: Bug > Components: CQL > Environment: Development >Reporter: Andrew Madison >Assignee: Stefania > Labels: cqlsh > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > > OS: Debian GNU/Linux stretch/sid > Kernel: 4.5.0-2-amd64 #1 SMP Debian 4.5.4-1 (2016-05-16) x86_64 GNU/Linux > Python version: 2.7.11+ (default, May 9 2016, 15:54:33) > [GCC 5.3.1 20160429] > cqlsh --version: cqlsh 5.0.1 > cassandra -v: 3.5 (also occurs with 3.0.6) > Issue: > when running cqlsh, it returns the following error: > cqlsh -u dbarpt_usr01 > Password: * > Connection error: ('Unable to connect to any servers', {'odbasandbox1': > TypeError('ref() does not take keyword arguments',)}) > I cleared PYTHONPATH: > python -c "import json; print dir(json); print json.__version__" > ['JSONDecoder', 'JSONEncoder', '__all__', '__author__', '__builtins__', > '__doc__', '__file__', '__name__', '__package__', '__path__', '__version__', > '_default_decoder', '_default_encoder', 'decoder', 
'dump', 'dumps', > 'encoder', 'load', 'loads', 'scanner'] > 2.0.9 > Java based clients can connect to Cassandra with no issue. Just CQLSH and > Python clients cannot. > nodetool status also works. > Thank you for your help. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10884) test_refresh_schema_on_timeout_error dtest flapping on CassCI
[ https://issues.apache.org/jira/browse/CASSANDRA-10884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378735#comment-15378735 ] Stefania commented on CASSANDRA-10884: -- Repeat the tests after CASSANDRA-11850 is committed, then you can close this. > test_refresh_schema_on_timeout_error dtest flapping on CassCI > - > > Key: CASSANDRA-10884 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10884 > Project: Cassandra > Issue Type: Sub-task >Reporter: Jim Witschey >Assignee: DS Test Eng > Labels: dtest > Fix For: 3.0.x > > > These tests create keyspaces and tables through cqlsh, then runs {{DESCRIBE}} > to confirm they were successfully created. These tests flap under the novnode > dtest runs: > http://cassci.datastax.com/job/cassandra-2.1_novnode_dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error/history/ > http://cassci.datastax.com/job/cassandra-2.2_novnode_dtest/lastCompletedBuild/testReport/cqlsh_tests.cqlsh_tests/TestCqlsh/test_refresh_schema_on_timeout_error/history/ > I have not reproduced this locally on Linux. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-12210) Add metrics to track size of reads and writes by customer requests
[ https://issues.apache.org/jira/browse/CASSANDRA-12210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp resolved CASSANDRA-12210. -- Resolution: Duplicate > Add metrics to track size of reads and writes by customer requests > -- > > Key: CASSANDRA-12210 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12210 > Project: Cassandra > Issue Type: Improvement >Reporter: Nachiket Patil >Priority: Minor > > We have metrics to track number of customer requests but not the size of > them. These metrics help monitoring and recognizing usage patterns. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-12210) Add metrics to track size of reads and writes by customer requests
Nachiket Patil created CASSANDRA-12210: -- Summary: Add metrics to track size of reads and writes by customer requests Key: CASSANDRA-12210 URL: https://issues.apache.org/jira/browse/CASSANDRA-12210 Project: Cassandra Issue Type: Improvement Reporter: Nachiket Patil Priority: Minor We have metrics to track number of customer requests but not the size of them. These metrics help monitoring and recognizing usage patterns. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12209) Make Level compaction default
[ https://issues.apache.org/jira/browse/CASSANDRA-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378589#comment-15378589 ] Brandon Williams commented on CASSANDRA-12209: -- Not sure how I feel about this: STCS is still a better universal fit when the workload is unknown, while LCS can be extremely bad in many cases. > Make Level compaction default > - > > Key: CASSANDRA-12209 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12209 > Project: Cassandra > Issue Type: Wish >Reporter: sankalp kohli >Priority: Minor > > Level Compaction has come a long way since it was added. I think it is time > to make it default. We can debate in which version to do this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12209) Make Level compaction default
[ https://issues.apache.org/jira/browse/CASSANDRA-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378585#comment-15378585 ] sankalp kohli commented on CASSANDRA-12209: --- cc [~brandon.williams] and [~krummas] > Make Level compaction default > - > > Key: CASSANDRA-12209 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12209 > Project: Cassandra > Issue Type: Wish >Reporter: sankalp kohli >Priority: Minor > > Level Compaction has come a long way since it was added. I think it is time > to make it default. We can debate in which version to do this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-12209) Make Level compaction default
sankalp kohli created CASSANDRA-12209: - Summary: Make Level compaction default Key: CASSANDRA-12209 URL: https://issues.apache.org/jira/browse/CASSANDRA-12209 Project: Cassandra Issue Type: Wish Reporter: sankalp kohli Priority: Minor Level Compaction has come a long way since it was added. I think it is time to make it default. We can debate in which version to do this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12208) Estimated droppable tombstones given by sstablemetadata counts tombstones that aren't actually "droppable"
[ https://issues.apache.org/jira/browse/CASSANDRA-12208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-12208: - Assignee: Marcus Eriksson > Estimated droppable tombstones given by sstablemetadata counts tombstones > that aren't actually "droppable" > -- > > Key: CASSANDRA-12208 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12208 > Project: Cassandra > Issue Type: Bug >Reporter: Thanh >Assignee: Marcus Eriksson >Priority: Minor > > => "Estimated droppable tombstones" given by *sstablemetadata* counts > tombstones that aren't actually "droppable" > To be clear, the "Estimated droppable tombstones" calculation counts > tombstones that have not yet passed gc_grace_seconds as droppable tombstones, > which is unexpected, since such tombstones aren't droppable. > To observe the problem: > Create a table using the default gc_grace_seconds (the default gc_grace_seconds > is 86400 seconds, i.e. 1 day). > Populate the table with a couple of records. > Do a delete. > Do a "nodetool flush" to flush the memtable to disk. > Do an "sstablemetadata " to get the metadata of the sstable you just > created by doing the flush, and observe that the Estimated droppable > tombstones is greater than 0.0 (the actual value depends on the total number of > inserts/updates/deletes done before triggering the flush) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-12208) Estimated droppable tombstones given by sstablemetadata counts tombstones that aren't actually "droppable"
Thanh created CASSANDRA-12208: - Summary: Estimated droppable tombstones given by sstablemetadata counts tombstones that aren't actually "droppable" Key: CASSANDRA-12208 URL: https://issues.apache.org/jira/browse/CASSANDRA-12208 Project: Cassandra Issue Type: Bug Reporter: Thanh Priority: Minor => "Estimated droppable tombstones" given by *sstablemetadata* counts tombstones that aren't actually "droppable" To be clear, the "Estimated droppable tombstones" calculation counts tombstones that have not yet passed gc_grace_seconds as droppable tombstones, which is unexpected, since such tombstones aren't droppable. To observe the problem: Create a table using the default gc_grace_seconds (the default gc_grace_seconds is 86400 seconds, i.e. 1 day). Populate the table with a couple of records. Do a delete. Do a "nodetool flush" to flush the memtable to disk. Do an "sstablemetadata " to get the metadata of the sstable you just created by doing the flush, and observe that the Estimated droppable tombstones is greater than 0.0 (the actual value depends on the total number of inserts/updates/deletes done before triggering the flush) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
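The mismatch Thanh describes comes down to the gc_grace_seconds check: a tombstone only becomes genuinely droppable once its local deletion time is more than gc_grace_seconds in the past, while the estimate counts it immediately after the flush. A simplified sketch of that check (names simplified from Cassandra's internals; not the actual estimate code):

```java
// Simplified sketch: a tombstone is only actually droppable once
// gc_grace_seconds have elapsed since its local deletion time.
// Names are simplified from Cassandra's internals for illustration.
public class DroppableTombstoneCheck {
    public static boolean isDroppable(int localDeletionTimeSec, int gcGraceSec, int nowSec) {
        return localDeletionTimeSec + gcGraceSec < nowSec;
    }

    public static void main(String[] args) {
        int now = 1_000_000;
        int gcGrace = 86_400; // default: one day
        // Deleted just now: counted by the estimate, but not droppable yet.
        System.out.println(isDroppable(now - 10, gcGrace, now));
        // Deleted two days ago: past gc_grace, genuinely droppable.
        System.out.println(isDroppable(now - 2 * 86_400, gcGrace, now));
    }
}
```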
[jira] [Commented] (CASSANDRA-12107) Fix range scans for table with live static rows
[ https://issues.apache.org/jira/browse/CASSANDRA-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378542#comment-15378542 ] Sharvanath Pathak commented on CASSANDRA-12107: --- [~blerer] Thanks > Fix range scans for table with live static rows > --- > > Key: CASSANDRA-12107 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12107 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sharvanath Pathak > Fix For: 3.0.9, 3.9 > > Attachments: 12107-3.0.txt, repro > > > We were seeing some weird behaviour with limit based scan queries. In > particular, we see the following: > {noformat} > $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM > files WHERE token(uuid) >= token('6b470c3e43ee06d1') limit 2" > Consistency level set to LOCAL_QUORUM. > uuid | system.token(uuid) > --+-- > 6b470c3e43ee06d1 | -9218823070349964862 > 484b091ca97803cd | -8954822859271125729 > (2 rows) > $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM > files WHERE token(uuid) > token('6b470c3e43ee06d1') limit 1" > Consistency level set to LOCAL_QUORUM. > uuid | system.token(uuid) > --+-- > c348aaec2f1e4b85 | -9218781105444826588 > {noformat} > In the table uuid is partition key, and it has a clustering key as well. > So the uuid "c348aaec2f1e4b85" should be the second one in the limit query. > After some investigation, it seems to me like the issue is in the way > DataLimits handles static rows. Here is a patch for trunk > (https://github.com/sharvanath/cassandra/commit/9a460d40e55bd7e3604d987ed4df5c8c2e03ffdc) > which seems to fix it for me. Please take a look, seems like a pretty > critical issue to me. > I have forked the dtests for it as well. However, since trunk has some > failures already, I'm not fully sure how to infer the results. 
> http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-dtest/ > http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-testall/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
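The inconsistency between the two cqlsh queries above can be stated as a paging invariant. A minimal Python sketch using the tokens from the report (the `token_scan` helper is hypothetical, standing in for a correct range scan):

```python
# (token, uuid) pairs from the bug report, sorted by token
rows = sorted([
    (-9218823070349964862, "6b470c3e43ee06d1"),
    (-9218781105444826588, "c348aaec2f1e4b85"),
    (-8954822859271125729, "484b091ca97803cd"),
])

def token_scan(rows, lo, inclusive, limit):
    """Hypothetical correct range scan: rows with token >= lo (or > lo)."""
    hits = [u for t, u in rows if (t >= lo if inclusive else t > lo)]
    return hits[:limit]

t = -9218823070349964862  # token('6b470c3e43ee06d1')
page = token_scan(rows, t, inclusive=True, limit=2)
nxt = token_scan(rows, t, inclusive=False, limit=1)

# Invariant: the 2nd row of the inclusive LIMIT 2 scan must equal the single
# row of the exclusive LIMIT 1 scan. The cqlsh output in the report violates
# this: the LIMIT 2 query skipped c348aaec2f1e4b85.
assert page == ["6b470c3e43ee06d1", "c348aaec2f1e4b85"]
assert nxt[0] == page[1]
```

This matches the reporter's diagnosis: the live static row is being counted against the limit, causing the first query to drop a live partition.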
[jira] [Comment Edited] (CASSANDRA-12107) Fix range scans for table with live static rows
[ https://issues.apache.org/jira/browse/CASSANDRA-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378542#comment-15378542 ] Sharvanath Pathak edited comment on CASSANDRA-12107 at 7/14/16 10:52 PM: - Thanks [~blerer] was (Author: sharvanath): [~blerer] Thanks > Fix range scans for table with live static rows > --- > > Key: CASSANDRA-12107 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12107 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sharvanath Pathak > Fix For: 3.0.9, 3.9 > > Attachments: 12107-3.0.txt, repro > > > We were seeing some weird behaviour with limit based scan queries. In > particular, we see the following: > {noformat} > $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM > files WHERE token(uuid) >= token('6b470c3e43ee06d1') limit 2" > Consistency level set to LOCAL_QUORUM. > uuid | system.token(uuid) > --+-- > 6b470c3e43ee06d1 | -9218823070349964862 > 484b091ca97803cd | -8954822859271125729 > (2 rows) > $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM > files WHERE token(uuid) > token('6b470c3e43ee06d1') limit 1" > Consistency level set to LOCAL_QUORUM. > uuid | system.token(uuid) > --+-- > c348aaec2f1e4b85 | -9218781105444826588 > {noformat} > In the table uuid is partition key, and it has a clustering key as well. > So the uuid "c348aaec2f1e4b85" should be the second one in the limit query. > After some investigation, it seems to me like the issue is in the way > DataLimits handles static rows. Here is a patch for trunk > (https://github.com/sharvanath/cassandra/commit/9a460d40e55bd7e3604d987ed4df5c8c2e03ffdc) > which seems to fix it for me. Please take a look, seems like a pretty > critical issue to me. > I have forked the dtests for it as well. However, since trunk has some > failures already, I'm not fully sure how to infer the results. 
> http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-dtest/ > http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-testall/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12178) Add prefixes to the name of snapshots created before a truncate or drop
[ https://issues.apache.org/jira/browse/CASSANDRA-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-12178: Resolution: Fixed Fix Version/s: (was: 3.x) 3.10 Status: Resolved (was: Patch Available) The test results look good, so +1, committed as {{3c00a0674a4e8b71ae25439dc2a0dece2f460d21}} to trunk. Thanks again! > Add prefixes to the name of snapshots created before a truncate or drop > --- > > Key: CASSANDRA-12178 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12178 > Project: Cassandra > Issue Type: Improvement >Reporter: Geoffrey Yu >Assignee: Geoffrey Yu >Priority: Minor > Fix For: 3.10 > > Attachments: 12178-3.0.txt, 12178-trunk.txt > > > It would be useful to be able to identify snapshots that are taken because a > table was truncated or dropped. We can do this by prepending a prefix to > snapshot names for snapshots that are created before a truncate/drop. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
cassandra git commit: Prepend "dropped"/"truncated" to pre-drop/truncate snapshot names
Repository: cassandra Updated Branches: refs/heads/trunk 26976160e -> 3c00a0674 Prepend "dropped"/"truncated" to pre-drop/truncate snapshot names Patch by Geoffrey Yu; reviewed by Tyler Hobbs for CASSANDRA-12178 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3c00a067 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3c00a067 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3c00a067 Branch: refs/heads/trunk Commit: 3c00a0674a4e8b71ae25439dc2a0dece2f460d21 Parents: 2697616 Author: Geoffrey Yu Authored: Thu Jul 14 17:39:27 2016 -0500 Committer: Tyler Hobbs Committed: Thu Jul 14 17:39:27 2016 -0500 -- CHANGES.txt | 2 ++ NEWS.txt| 3 +++ src/java/org/apache/cassandra/config/Schema.java| 4 ++-- src/java/org/apache/cassandra/db/ColumnFamilyStore.java | 5 - src/java/org/apache/cassandra/db/Keyspace.java | 5 + 5 files changed, 16 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c00a067/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 0d1557b..ec4be3e 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,6 @@ 3.10 + * Prepend snapshot name with "truncated" or "dropped" when a snapshot + is taken before truncating or dropping a table (CASSANDRA-12178) * Optimize RestrictionSet (CASSANDRA-12153) * cqlsh does not automatically downgrade CQL version (CASSANDRA-12150) * Omit (de)serialization of state variable in UDAs (CASSANDRA-9613) http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c00a067/NEWS.txt -- diff --git a/NEWS.txt b/NEWS.txt index fd6f005..56fb8cf 100644 --- a/NEWS.txt +++ b/NEWS.txt @@ -29,6 +29,9 @@ New features testing and, as a consequence, it is not guaranteed to work in all cases. See CASSANDRA-12150 for more details. + - Snapshots that are automatically taken before a table is dropped or truncated + will have a "dropped" or "truncated" prefix on their snapshot tag name. 
+ Upgrading - - Nothing specific to 3.10 but please see previous versions upgrading section, http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c00a067/src/java/org/apache/cassandra/config/Schema.java -- diff --git a/src/java/org/apache/cassandra/config/Schema.java b/src/java/org/apache/cassandra/config/Schema.java index 47d8198..dd42779 100644 --- a/src/java/org/apache/cassandra/config/Schema.java +++ b/src/java/org/apache/cassandra/config/Schema.java @@ -611,7 +611,7 @@ public class Schema public void dropKeyspace(String ksName) { KeyspaceMetadata ksm = Schema.instance.getKSMetaData(ksName); -String snapshotName = Keyspace.getTimestampedSnapshotName(ksName); +String snapshotName = Keyspace.getTimestampedSnapshotNameWithPrefix(ksName, ColumnFamilyStore.SNAPSHOT_DROP_PREFIX); CompactionManager.instance.interruptCompactionFor(ksm.tablesAndViews(), true); @@ -690,7 +690,7 @@ public class Schema CompactionManager.instance.interruptCompactionFor(Collections.singleton(cfm), true); if (DatabaseDescriptor.isAutoSnapshot()) -cfs.snapshot(Keyspace.getTimestampedSnapshotName(cfs.name)); + cfs.snapshot(Keyspace.getTimestampedSnapshotNameWithPrefix(cfs.name, ColumnFamilyStore.SNAPSHOT_DROP_PREFIX)); Keyspace.open(ksName).dropCf(cfm.cfId); MigrationManager.instance.notifyDropColumnFamily(cfm); http://git-wip-us.apache.org/repos/asf/cassandra/blob/3c00a067/src/java/org/apache/cassandra/db/ColumnFamilyStore.java -- diff --git a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java index 0a3ad52..938ea32 100644 --- a/src/java/org/apache/cassandra/db/ColumnFamilyStore.java +++ b/src/java/org/apache/cassandra/db/ColumnFamilyStore.java @@ -186,6 +186,9 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean private static final String SAMPLING_RESULTS_NAME = "SAMPLING_RESULTS"; private static final CompositeType SAMPLING_RESULT; +public static final String SNAPSHOT_TRUNCATE_PREFIX = "truncated"; +public 
static final String SNAPSHOT_DROP_PREFIX = "dropped"; + static { try @@ -2158,7 +2161,7 @@ public class ColumnFamilyStore implements ColumnFamilyStoreMBean data.notifyTruncated(truncatedAt); if (DatabaseDescriptor.isAutoSnapshot()) -snap
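For illustration, a rough Python sketch of the prefixed naming scheme the commit introduces. The prefix constants come from the patch; the exact `<prefix>-<millis>-<name>` layout and separators are an assumption for illustration, not copied from `getTimestampedSnapshotNameWithPrefix`:

```python
import time

# Prefix constants added to ColumnFamilyStore by the committed patch
SNAPSHOT_TRUNCATE_PREFIX = "truncated"
SNAPSHOT_DROP_PREFIX = "dropped"

def timestamped_snapshot_name(name, prefix=None, now_ms=None):
    """Sketch: an epoch-millis timestamp plus the table/keyspace name,
    optionally prefixed so drop/truncate snapshots are identifiable.
    Format and separators are assumptions, not Cassandra's exact output."""
    now_ms = int(time.time() * 1000) if now_ms is None else now_ms
    base = f"{now_ms}-{name}" if name else str(now_ms)
    return f"{prefix}-{base}" if prefix else base

print(timestamped_snapshot_name("users", SNAPSHOT_DROP_PREFIX, now_ms=1468536000000))
# dropped-1468536000000-users
```

The point of the change is visible in the output: an operator listing snapshots can now tell a pre-drop snapshot apart from a manually requested one by its prefix.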
[jira] [Commented] (CASSANDRA-12202) LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378511#comment-15378511 ] Yuki Morishita commented on CASSANDRA-12202: The patches are a backport of CASSANDRA-11657, so they are basically fine. However, LongLeveledCompactionStrategyTest failed in the 2.2 run above. Any idea? > LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0 > --- > > Key: CASSANDRA-12202 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12202 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson > Fix For: 2.1.x, 2.2.x, 3.0.x > > > We actually fixed this for 3.7+ in CASSANDRA-11657, need to backport that fix > to 2.1+ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables
[ https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378508#comment-15378508 ] Jason Brown edited comment on CASSANDRA-12127 at 7/14/16 10:29 PM: --- I've reviewed all four branches (haven't tried it out locally to see if the original problem is resolved), and they seem pretty reasonable. However I want to echo [~thobbs]'s concerns wrt the changes in {{ReversedType}}. we should carefully consider and test that one was (Author: jasobrown): I've reviewed all four branches (haven't tried it out locally to see if the original problem is resolved), but I want to echo [~thobbs]'s concerns wrt the changes in {{ReversedType}} > Queries with empty ByteBuffer values in clustering column restrictions fail > for non-composite compact tables > > > Key: CASSANDRA-12127 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12127 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 12127.txt > > > For the following table: > {code} > CREATE TABLE myTable (pk int, > c blob, > value int, > PRIMARY KEY (pk, c)) WITH COMPACT STORAGE; > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1); > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2); > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > Will result in the following Exception: > {code} > java.lang.ClassCastException: > org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast > to org.apache.cassandra.db.composites.CellName > at > org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188) > at > org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125) > at > 
org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206) > at > org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214) > at > org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292) > at > org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259) > at > org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299) > [...] > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}} > Will return 2 rows instead of 0. > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} > {code} > java.lang.AssertionError > at > org.apache.cassandra.db.composites.SimpleDenseCellNameType.create(SimpleDenseCellNameType.java:60) > at > org.apache.cassandra.cql3.statements.SelectStatement.addSelectedColumns(SelectStatement.java:853) > at > org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:846) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:583) > at > org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:383) > at > org.apache.cassandra.cql3.statements.SelectStatement.getPageableCommand(SelectStatement.java:253) > [...] > {code} > I checked 2.0 and {{SELECT * FROM myTable WHERE pk = 1 AND c > > textAsBlob('');}} works properly but {{SELECT * FROM myTable WHERE pk = 1 AND > c < textAsBlob('');}} returns the same wrong results as in 2.1. > The {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} is > rejected with a clear error message: {{Invalid empty value for clustering > column of COMPACT TABLE}}. 
> As it is not possible to insert an empty ByteBuffer value within the > clustering column of non-composite compact tables, those queries do not > have a lot of meaning. {{SELECT * FROM myTable WHERE pk = 1 AND c < > textAsBlob('');}} and {{SELECT * FROM myTable WHERE pk = 1 AND c = > textAsBlob('');}} will return nothing > and {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} will > return the entire partition (pk = 1). > In my opinion those queries should probably all be rejected as it seems that > the fact that {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > was accepted in {{2.0}} was due to a bug. > I am of course open to discussion. -- This message was sent by Atlassian
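The semantics the reporter expects follow from plain byte-wise ordering, where the empty value sorts before every non-empty one. A small Python sketch of that expectation (Python `bytes` comparison stands in for the blob comparator):

```python
# Clustering values from the example table: textAsBlob('1') and textAsBlob('2')
rows = {b"1": 1, b"2": 2}
empty = b""  # textAsBlob('')

gt = sorted(c for c in rows if c > empty)
lt = sorted(c for c in rows if c < empty)
eq = sorted(c for c in rows if c == empty)

print(gt)  # [b'1', b'2']: c > '' selects the entire partition
print(lt)  # []: nothing sorts before the empty value
print(eq)  # []: the empty value cannot be inserted, so nothing matches
```

This is exactly the behaviour the description argues for: `c > textAsBlob('')` returns the whole partition while the `<` and `=` forms return nothing, which is why rejecting all three is also a defensible option.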
[jira] [Commented] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables
[ https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378508#comment-15378508 ] Jason Brown commented on CASSANDRA-12127: - I've reviewed all four branches (haven't tried it out locally to see if the original problem is resolved), but I want to echo [~thobbs]'s concerns wrt the changes in {{ReversedType}} > Queries with empty ByteBuffer values in clustering column restrictions fail > for non-composite compact tables > > > Key: CASSANDRA-12127 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12127 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 12127.txt > > > For the following table: > {code} > CREATE TABLE myTable (pk int, > c blob, > value int, > PRIMARY KEY (pk, c)) WITH COMPACT STORAGE; > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1); > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2); > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > Will result in the following Exception: > {code} > java.lang.ClassCastException: > org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast > to org.apache.cassandra.db.composites.CellName > at > org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188) > at > org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125) > at > org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206) > at > org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214) > at > 
org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292) > at > org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259) > at > org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299) > [...] > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}} > Will return 2 rows instead of 0. > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} > {code} > java.lang.AssertionError > at > org.apache.cassandra.db.composites.SimpleDenseCellNameType.create(SimpleDenseCellNameType.java:60) > at > org.apache.cassandra.cql3.statements.SelectStatement.addSelectedColumns(SelectStatement.java:853) > at > org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:846) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:583) > at > org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:383) > at > org.apache.cassandra.cql3.statements.SelectStatement.getPageableCommand(SelectStatement.java:253) > [...] > {code} > I checked 2.0 and {{SELECT * FROM myTable WHERE pk = 1 AND c > > textAsBlob('');}} works properly but {{SELECT * FROM myTable WHERE pk = 1 AND > c < textAsBlob('');}} return the same wrong results than in 2.1. > The {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} is > rejected if a clear error message: {{Invalid empty value for clustering > column of COMPACT TABLE}}. > As it is not possible to insert an empty ByteBuffer value within the > clustering column of a non-composite compact tables those queries do not > have a lot of meaning. 
{{SELECT * FROM myTable WHERE pk = 1 AND c < > textAsBlob('');}} and {{SELECT * FROM myTable WHERE pk = 1 AND c = > textAsBlob('');}} will return nothing > and {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} will > return the entire partition (pk = 1). > In my opinion those queries should probably all be rejected as it seems that > the fact that {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > was accepted in {{2.0}} was due to a bug. > I am of course open to discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-12207) CommitLogStressTest.testFixedSize times out
Joshua McKenzie created CASSANDRA-12207: --- Summary: CommitLogStressTest.testFixedSize times out Key: CASSANDRA-12207 URL: https://issues.apache.org/jira/browse/CASSANDRA-12207 Project: Cassandra Issue Type: Test Environment: cassandra-3.9 CI Reporter: Joshua McKenzie Assignee: Joshua McKenzie [CI report|http://cassci.datastax.com/job/cassandra-3.9_testall/23/testReport/org.apache.cassandra.db.commitlog/CommitLogStressTest/testFixedSize/] Been failing for awhile - times out. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-12206) CommitLogTest.replaySimple failure
Joshua McKenzie created CASSANDRA-12206: --- Summary: CommitLogTest.replaySimple failure Key: CASSANDRA-12206 URL: https://issues.apache.org/jira/browse/CASSANDRA-12206 Project: Cassandra Issue Type: Test Environment: cassandra-3.9 CI Reporter: Joshua McKenzie Assignee: Joshua McKenzie Fails on both regular and compression runs. {noformat} Error Message expected:<2> but was:<8> Stacktrace junit.framework.AssertionFailedError: expected:<2> but was:<8> at org.apache.cassandra.db.commitlog.CommitLogTest.replaySimple(CommitLogTest.java:618) {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables
[ https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378429#comment-15378429 ] Tyler Hobbs commented on CASSANDRA-12127: - I have not looked through the complete patches, but one thing that worries me is changing the behavior of {{ReversedType::compareCustom()}}. The new behavior is certainly the correct behavior, but I'm not sure how this change might interact with existing, incorrectly ordered data on disk. It would be good to run some tests to determine what happens and how we can mitigate any problems before merging this. > Queries with empty ByteBuffer values in clustering column restrictions fail > for non-composite compact tables > > > Key: CASSANDRA-12127 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12127 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 12127.txt > > > For the following table: > {code} > CREATE TABLE myTable (pk int, > c blob, > value int, > PRIMARY KEY (pk, c)) WITH COMPACT STORAGE; > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1); > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2); > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > Will result in the following Exception: > {code} > java.lang.ClassCastException: > org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast > to org.apache.cassandra.db.composites.CellName > at > org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188) > at > org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125) > at > org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254) > at > 
org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206) > at > org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214) > at > org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292) > at > org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259) > at > org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299) > [...] > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}} > Will return 2 rows instead of 0. > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} > {code} > java.lang.AssertionError > at > org.apache.cassandra.db.composites.SimpleDenseCellNameType.create(SimpleDenseCellNameType.java:60) > at > org.apache.cassandra.cql3.statements.SelectStatement.addSelectedColumns(SelectStatement.java:853) > at > org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:846) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:583) > at > org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:383) > at > org.apache.cassandra.cql3.statements.SelectStatement.getPageableCommand(SelectStatement.java:253) > [...] > {code} > I checked 2.0 and {{SELECT * FROM myTable WHERE pk = 1 AND c > > textAsBlob('');}} works properly but {{SELECT * FROM myTable WHERE pk = 1 AND > c < textAsBlob('');}} return the same wrong results than in 2.1. > The {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} is > rejected if a clear error message: {{Invalid empty value for clustering > column of COMPACT TABLE}}. > As it is not possible to insert an empty ByteBuffer value within the > clustering column of a non-composite compact tables those queries do not > have a lot of meaning. 
{{SELECT * FROM myTable WHERE pk = 1 AND c < > textAsBlob('');}} and {{SELECT * FROM myTable WHERE pk = 1 AND c = > textAsBlob('');}} will return nothing > and {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} will > return the entire partition (pk = 1). > In my opinion those queries should probably all be rejected as it seems that > the fact that {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > was accepted in {{2.0}} was due to a bug. > I am of course open to discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
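Regarding the {{ReversedType::compareCustom()}} concern in the comment above: a reversed comparator should simply delegate to the base comparator with its arguments swapped. A minimal Python sketch of that intended semantics (byte-wise base comparison assumed; this is not Cassandra's code):

```python
import functools

def base_compare(a: bytes, b: bytes) -> int:
    """Byte-wise three-way comparison, stand-in for the wrapped base type."""
    return (a > b) - (a < b)

def reversed_compare(a: bytes, b: bytes) -> int:
    # A ReversedType-style comparator: swap the arguments and delegate
    return base_compare(b, a)

data = [b"", b"1", b"2"]
print(sorted(data, key=functools.cmp_to_key(base_compare)))      # [b'', b'1', b'2']
print(sorted(data, key=functools.cmp_to_key(reversed_compare)))  # [b'2', b'1', b'']
```

The worry expressed in the comment is about the second ordering: it is the ordering already baked into sstables on disk, so changing how the comparator handles the empty value could make existing data appear misordered relative to newly written data.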
[jira] [Commented] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378426#comment-15378426 ] Joshua McKenzie commented on CASSANDRA-11465: - I'm also fine w/us leaving it failing until Paulo gets back and reviews CASSANDRA-11850. This should be done well in time for 3.9 so shouldn't block release. > dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test > -- > > Key: CASSANDRA-11465 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11465 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Stefania > Labels: dtest > > Failing on the following assert, on trunk only: > {{self.assertEqual(len(errs[0]), 1)}} > Is not failing consistently. > example failure: > http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test > Failed on CassCI build trunk_dtest #1087 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12178) Add prefixes to the name of snapshots created before a truncate or drop
[ https://issues.apache.org/jira/browse/CASSANDRA-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378423#comment-15378423 ] Geoffrey Yu commented on CASSANDRA-12178: - Okay that makes sense. Thanks for the quick review! > Add prefixes to the name of snapshots created before a truncate or drop > --- > > Key: CASSANDRA-12178 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12178 > Project: Cassandra > Issue Type: Improvement >Reporter: Geoffrey Yu >Assignee: Geoffrey Yu >Priority: Minor > Fix For: 3.x > > Attachments: 12178-3.0.txt, 12178-trunk.txt > > > It would be useful to be able to identify snapshots that are taken because a > table was truncated or dropped. We can do this by prepending a prefix to > snapshot names for snapshots that are created before a truncate/drop. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12178) Add prefixes to the name of snapshots created before a truncate or drop
[ https://issues.apache.org/jira/browse/CASSANDRA-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-12178: Fix Version/s: (was: 3.0.x) 3.x > Add prefixes to the name of snapshots created before a truncate or drop > --- > > Key: CASSANDRA-12178 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12178 > Project: Cassandra > Issue Type: Improvement >Reporter: Geoffrey Yu >Assignee: Geoffrey Yu >Priority: Minor > Fix For: 3.x > > Attachments: 12178-3.0.txt, 12178-trunk.txt > > > It would be useful to be able to identify snapshots that are taken because a > table was truncated or dropped. We can do this by prepending a prefix to > snapshot names for snapshots that are created before a truncate/drop. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12178) Add prefixes to the name of snapshots created before a truncate or drop
[ https://issues.apache.org/jira/browse/CASSANDRA-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378391#comment-15378391 ] Tyler Hobbs commented on CASSANDRA-12178: - The patch looks good to me, thanks! I've started CI test runs: ||branch||testall||dtest|| |[CASSANDRA-12178-trunk|https://github.com/thobbs/cassandra/tree/CASSANDRA-12178-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-12178-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-12178-trunk-dtest]| If the test results are good, I'll commit the patch. However, because this is an enhancement, not a bug fix, and it does change Cassandra's behavior, this should only be committed to trunk and not 3.0.x. > Add prefixes to the name of snapshots created before a truncate or drop > --- > > Key: CASSANDRA-12178 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12178 > Project: Cassandra > Issue Type: Improvement >Reporter: Geoffrey Yu >Assignee: Geoffrey Yu >Priority: Minor > Fix For: 3.0.x > > Attachments: 12178-3.0.txt, 12178-trunk.txt > > > It would be useful to be able to identify snapshots that are taken because a > table was truncated or dropped. We can do this by prepending a prefix to > snapshot names for snapshots that are created before a truncate/drop. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables
[ https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378357#comment-15378357 ] Jason Brown edited comment on CASSANDRA-12127 at 7/14/16 8:54 PM: -- [~blerer] In the 2.1 version, can you help me understand on [this line|https://github.com/apache/cassandra/compare/trunk...blerer:12127-2.1#diff-75ebe654dcf6c8c474f787abaf47bb68R923] why you are only checking the first element of the {{components}} list? UPDATE: nevermind, figured it out myself was (Author: jasobrown): [~blerer] In the 2.1 version, can you help me understand on [this line|https://github.com/apache/cassandra/compare/trunk...blerer:12127-2.1#diff-75ebe654dcf6c8c474f787abaf47bb68R923] why you are only checking the first element of the {{components}} list? > Queries with empty ByteBuffer values in clustering column restrictions fail > for non-composite compact tables > > > Key: CASSANDRA-12127 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12127 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 12127.txt > > > For the following table: > {code} > CREATE TABLE myTable (pk int, > c blob, > value int, > PRIMARY KEY (pk, c)) WITH COMPACT STORAGE; > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1); > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2); > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > Will result in the following Exception: > {code} > java.lang.ClassCastException: > org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast > to org.apache.cassandra.db.composites.CellName > at > org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188) > at > org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125) > at 
> org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206) > at > org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214) > at > org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292) > at > org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259) > at > org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299) > [...] > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}} > Will return 2 rows instead of 0. > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} > {code} > java.lang.AssertionError > at > org.apache.cassandra.db.composites.SimpleDenseCellNameType.create(SimpleDenseCellNameType.java:60) > at > org.apache.cassandra.cql3.statements.SelectStatement.addSelectedColumns(SelectStatement.java:853) > at > org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:846) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:583) > at > org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:383) > at > org.apache.cassandra.cql3.statements.SelectStatement.getPageableCommand(SelectStatement.java:253) > [...] > {code} > I checked 2.0 and {{SELECT * FROM myTable WHERE pk = 1 AND c > > textAsBlob('');}} works properly but {{SELECT * FROM myTable WHERE pk = 1 AND > c < textAsBlob('');}} return the same wrong results than in 2.1. > The {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} is > rejected if a clear error message: {{Invalid empty value for clustering > column of COMPACT TABLE}}. 
> As it is not possible to insert an empty ByteBuffer value within the > clustering column of a non-composite compact table, those queries do not > have much meaning. {{SELECT * FROM myTable WHERE pk = 1 AND c < > textAsBlob('');}} and {{SELECT * FROM myTable WHERE pk = 1 AND c = > textAsBlob('');}} will return nothing > and {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} will > return the entire partition (pk = 1). > In my opinion those queries should probably all be rejected, as it seems that > the fact that {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > was accepted in {{2.0}} was due to a bug. > I am of course open to discussion.
[jira] [Commented] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables
[ https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378357#comment-15378357 ] Jason Brown commented on CASSANDRA-12127: - [~blerer] In the 2.1 version, can you help me understand on [this line|https://github.com/apache/cassandra/compare/trunk...blerer:12127-2.1#diff-75ebe654dcf6c8c474f787abaf47bb68R923] why you are only checking the first element of the {{components}} list? > Queries with empty ByteBuffer values in clustering column restrictions fail > for non-composite compact tables > > > Key: CASSANDRA-12127 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12127 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 12127.txt > > > For the following table: > {code} > CREATE TABLE myTable (pk int, > c blob, > value int, > PRIMARY KEY (pk, c)) WITH COMPACT STORAGE; > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1); > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2); > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > Will result in the following Exception: > {code} > java.lang.ClassCastException: > org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast > to org.apache.cassandra.db.composites.CellName > at > org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188) > at > org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125) > at > org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206) > at > org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214) > at > 
org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292) > at > org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259) > at > org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299) > [...] > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}} > will return 2 rows instead of 0. > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} > {code} > java.lang.AssertionError > at > org.apache.cassandra.db.composites.SimpleDenseCellNameType.create(SimpleDenseCellNameType.java:60) > at > org.apache.cassandra.cql3.statements.SelectStatement.addSelectedColumns(SelectStatement.java:853) > at > org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:846) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:583) > at > org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:383) > at > org.apache.cassandra.cql3.statements.SelectStatement.getPageableCommand(SelectStatement.java:253) > [...] > {code} > I checked 2.0 and {{SELECT * FROM myTable WHERE pk = 1 AND c > > textAsBlob('');}} works properly but {{SELECT * FROM myTable WHERE pk = 1 AND > c < textAsBlob('');}} returns the same wrong results as in 2.1. > The {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} is > rejected with a clear error message: {{Invalid empty value for clustering > column of COMPACT TABLE}}. > As it is not possible to insert an empty ByteBuffer value within the > clustering column of a non-composite compact table, those queries do not > have much meaning. 
{{SELECT * FROM myTable WHERE pk = 1 AND c < > textAsBlob('');}} and {{SELECT * FROM myTable WHERE pk = 1 AND c = > textAsBlob('');}} will return nothing > and {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} will > return the entire partition (pk = 1). > In my opinion those queries should probably all be rejected as it seems that > the fact that {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > was accepted in {{2.0}} was due to a bug. > I am of course open to discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
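The expected behaviour described in the ticket follows from byte-wise comparator semantics: an empty value sorts before every non-empty value. A minimal sketch of that ordering (a Python stand-in for illustration, not Cassandra code):

```python
# Illustrative Python stand-in (not Cassandra code): Cassandra's byte-wise
# comparators order an empty value before every non-empty value, which is
# why the three queries above should behave the way the reporter expects.
rows = [b'1', b'2']   # the clustering values inserted in the example table
empty = b''           # textAsBlob('')

# c > textAsBlob(''): everything sorts above the empty value,
# so the whole partition should match.
assert [c for c in rows if c > empty] == [b'1', b'2']

# c < textAsBlob(''): nothing sorts below the empty value.
assert [c for c in rows if c < empty] == []

# c = textAsBlob(''): no stored value can equal the empty value, since
# empty values cannot be inserted into this clustering column.
assert [c for c in rows if c == empty] == []
```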
[jira] [Commented] (CASSANDRA-12177) sstabledump fails if sstable path includes dot
[ https://issues.apache.org/jira/browse/CASSANDRA-12177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378280#comment-15378280 ] Chris Lohfink commented on CASSANDRA-12177: --- It would backport pretty easily, I'm pretty sure; the change is pretty isolated. > sstabledump fails if sstable path includes dot > -- > > Key: CASSANDRA-12177 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12177 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Keith Wansbrough > > If there is a dot in the file path passed to sstabledump, it fails with an > error {{partitioner org.apache.cassandra.dht.Murmur3Partitioner does not > match system partitioner org.apache.cassandra.dht.LocalPartitioner.}} > I can work around this by renaming the directory containing the file, but it > seems like a bug. I expected the directory name to be irrelevant. > Example (assumes you have a keyspace test containing a table called sport, > but should repro with any keyspace/table): > {code} > $ cp -a /var/lib/cassandra/data/test/sport-ebe76350474e11e6879fc5e30fbb0e96 > testdir > $ sstabledump testdir/mb-1-big-Data.db > [ > { > "partition" : { > "key" : [ "2" ], > "position" : 0 > }, > "rows" : [ > { > "type" : "row", > "position" : 18, > "liveness_info" : { "tstamp" : "2016-07-11T10:15:22.766107Z" }, > "cells" : [ > { "name" : "score", "value" : "Golf" }, > { "name" : "sport_type", "value" : "5" } > ] > } > ] > } > ] > $ cp -a /var/lib/cassandra/data/test/sport-ebe76350474e11e6879fc5e30fbb0e96 > test.dir > $ sstabledump test.dir/mb-1-big-Data.db > ERROR 15:02:52 Cannot open /home/centos/test.dir/mb-1-big; partitioner > org.apache.cassandra.dht.Murmur3Partitioner does not match system partitioner > org.apache.cassandra.dht.LocalPartitioner. Note that the default partitioner > starting with Cassandra 1.2 is Murmur3Partitioner, so you will need to edit > that to match your old partitioner if upgrading. 
> {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
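One plausible sketch of this class of bug (hypothetical Python, not the actual Java in the Cassandra tools): deriving sstable metadata by splitting the full path on dots instead of only the filename, so a dot anywhere in an ancestor directory corrupts the parse. The helper names below are illustrative, not real Cassandra functions:

```python
import os

# Hypothetical illustration of the bug class: parsing metadata out of the
# whole path with a dot-based split, so "test.dir" contaminates the result.
def parse_name_buggy(path):
    # Treats everything before the first '.' in the full path as the name.
    return path.split('.')[0]

def parse_name_fixed(path):
    # Operates on the filename only; directory names are irrelevant,
    # which is what the reporter expected.
    return os.path.basename(path).split('.')[0]

# With no dot in the directory, both variants see the same filename stem.
assert parse_name_fixed('testdir/mb-1-big-Data.db') == 'mb-1-big-Data'

# With a dot in the directory, the buggy split stops at "test" while the
# filename-based parse is unaffected.
assert parse_name_buggy('test.dir/mb-1-big-Data.db') == 'test'
assert parse_name_fixed('test.dir/mb-1-big-Data.db') == 'mb-1-big-Data'
```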
[jira] [Commented] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378244#comment-15378244 ] Alex Petrov commented on CASSANDRA-12193: - It might have been caused by re-sorting with {{ThriftResultsMerger}} in the patch for [CASSANDRA-12123]. Removing it resolves this failure (but obviously reintroduces the other issue). > dtest failure in > upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test > -- > > Key: CASSANDRA-12193 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12193 > Project: Cassandra > Issue Type: Bug >Reporter: Sean McCarthy >Assignee: Alex Petrov > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node3.log > > > example failure: > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test > Failed on CassCI build upgrade_tests-all #59 > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line > 146, in noncomposite_static_cf_test > [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]) > File "/home/automaton/cassandra-dtest/assertions.py", line 162, in > assert_all > assert list_res == expected, "Expected {} from {}, but got > {}".format(expected, query, list_res) > "Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', > 'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', > 'Baggins']] from SELECT * FROM users, but got > [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], > [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], > [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], > [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], > 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], > [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']] > {code} > Related failure: > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/ > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/ > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/ > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_head_trunk/noncomposite_static_cf_test/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
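A hedged sketch of how a re-sorted merge source could produce the tripled rows seen in the failure above (hypothetical Python; the real machinery is Cassandra's internal merge iterators): a k-way merge de-duplicates only adjacent equal rows, so it relies on every replica stream using the same ordering. If one source is re-sorted differently, equal rows no longer meet adjacently and each survives once per replica:

```python
import heapq

# Hypothetical model: merge rows from several replica streams and drop
# duplicates by comparing each row with its immediate predecessor, which
# is only sound when all sources share one ordering.
def merge_dedup(sources):
    out, prev = [], object()   # sentinel never equals a row
    for row in heapq.merge(*sources):
        if row != prev:
            out.append(row)
        prev = row
    return out

# Three replicas holding identical, identically ordered data: duplicates
# line up adjacently and are collapsed as intended.
replicas = [['baggins', 'gamgee']] * 3
assert merge_dedup(replicas) == ['baggins', 'gamgee']

# If one replica's stream arrives in a different order, equal rows no
# longer meet adjacently and duplicates leak through.
skewed = [['baggins', 'gamgee'],
          ['gamgee', 'baggins'],   # mis-ordered source
          ['baggins', 'gamgee']]
assert merge_dedup(skewed) != ['baggins', 'gamgee']
```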
[jira] [Updated] (CASSANDRA-12107) Fix range scans for table with live static rows
[ https://issues.apache.org/jira/browse/CASSANDRA-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-12107: --- Resolution: Fixed Status: Resolved (was: Patch Available) Committed into 3.0 at 84426d183ae095107bb264b92d828f231d0a9826 and merged into 3.9 and trunk. > Fix range scans for table with live static rows > --- > > Key: CASSANDRA-12107 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12107 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sharvanath Pathak > Fix For: 3.0.9, 3.9 > > Attachments: 12107-3.0.txt, repro > > > We were seeing some weird behaviour with limit based scan queries. In > particular, we see the following: > {noformat} > $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM > files WHERE token(uuid) >= token('6b470c3e43ee06d1') limit 2" > Consistency level set to LOCAL_QUORUM. > uuid | system.token(uuid) > --+-- > 6b470c3e43ee06d1 | -9218823070349964862 > 484b091ca97803cd | -8954822859271125729 > (2 rows) > $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM > files WHERE token(uuid) > token('6b470c3e43ee06d1') limit 1" > Consistency level set to LOCAL_QUORUM. > uuid | system.token(uuid) > --+-- > c348aaec2f1e4b85 | -9218781105444826588 > {noformat} > In the table uuid is partition key, and it has a clustering key as well. > So the uuid "c348aaec2f1e4b85" should be the second one in the limit query. > After some investigation, it seems to me like the issue is in the way > DataLimits handles static rows. Here is a patch for trunk > (https://github.com/sharvanath/cassandra/commit/9a460d40e55bd7e3604d987ed4df5c8c2e03ffdc) > which seems to fix it for me. Please take a look; this seems like a pretty > critical issue to me. > I have forked the dtests for it as well. However, since trunk has some > failures already, I'm not fully sure how to interpret the results. 
> http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-dtest/ > http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-testall/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[3/3] cassandra git commit: Merge branch cassandra-3.9 into trunk
Merge branch cassandra-3.9 into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/26976160 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/26976160 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/26976160 Branch: refs/heads/trunk Commit: 26976160ebbd6a15092ed467e3f24d76b80ee43b Parents: 35fbd7b 2764e85 Author: Benjamin Lerer Authored: Thu Jul 14 21:45:43 2016 +0200 Committer: Benjamin Lerer Committed: Thu Jul 14 21:46:15 2016 +0200 -- CHANGES.txt | 1 + .../apache/cassandra/db/filter/DataLimits.java | 3 +- .../validation/operations/SelectLimitTest.java | 32 +++- 3 files changed, 33 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/26976160/CHANGES.txt --
[2/3] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.9
Merge branch cassandra-3.0 into cassandra-3.9 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2764e85a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2764e85a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2764e85a Branch: refs/heads/trunk Commit: 2764e85a557c140f44ffc08c09e4b06a61e1ef4e Parents: 90afc58 84426d1 Author: Benjamin Lerer Authored: Thu Jul 14 21:44:34 2016 +0200 Committer: Benjamin Lerer Committed: Thu Jul 14 21:44:34 2016 +0200 -- CHANGES.txt | 1 + .../apache/cassandra/db/filter/DataLimits.java | 3 +- .../validation/operations/SelectLimitTest.java | 32 +++- 3 files changed, 33 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2764e85a/CHANGES.txt -- diff --cc CHANGES.txt index ba8e299,59f0a5f..4c46695 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,9 -1,5 +1,10 @@@ -3.0.9 +3.9 + * Partial revert of CASSANDRA-11971, cannot recycle buffer in SP.sendMessagesToNonlocalDC (CASSANDRA-11950) + * Fix hdr logging for single operation workloads (CASSANDRA-12145) + * Fix SASI PREFIX search in CONTAINS mode with partial terms (CASSANDRA-12073) + * Increase size of flushExecutor thread pool (CASSANDRA-12071) +Merged from 3.0: + * Fix paging logic for deleted partitions with static columns (CASSANDRA-12107) * Wait until the message is being send to decide which serializer must be used (CASSANDRA-11393) * Fix migration of static thrift column names with non-text comparators (CASSANDRA-12147) * Fix upgrading sparse tables that are incorrectly marked as dense (CASSANDRA-11315) http://git-wip-us.apache.org/repos/asf/cassandra/blob/2764e85a/src/java/org/apache/cassandra/db/filter/DataLimits.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2764e85a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java -- diff --cc test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java index 
528d9f6,aeb3d56..21c48dd --- a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java +++ b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java @@@ -26,7 -26,7 +26,6 @@@ import org.junit.Test import org.apache.cassandra.config.DatabaseDescriptor; import org.apache.cassandra.cql3.CQLTester; import org.apache.cassandra.dht.ByteOrderedPartitioner; --import org.apache.cassandra.exceptions.InvalidRequestException; public class SelectLimitTest extends CQLTester { @@@ -135,114 -135,33 +134,145 @@@ } @Test +public void testPerPartitionLimit() throws Throwable +{ +perPartitionLimitTest(false); +} + +@Test +public void testPerPartitionLimitWithCompactStorage() throws Throwable +{ +perPartitionLimitTest(true); +} + +private void perPartitionLimitTest(boolean withCompactStorage) throws Throwable +{ +String query = "CREATE TABLE %s (a int, b int, c int, PRIMARY KEY (a, b))"; + +if (withCompactStorage) +createTable(query + " WITH COMPACT STORAGE"); +else +createTable(query); + +for (int i = 0; i < 5; i++) +{ +for (int j = 0; j < 5; j++) +{ +execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", i, j, j); +} +} + +assertInvalidMessage("LIMIT must be strictly positive", + "SELECT * FROM %s PER PARTITION LIMIT ?", 0); +assertInvalidMessage("LIMIT must be strictly positive", + "SELECT * FROM %s PER PARTITION LIMIT ?", -1); + +assertRowsIgnoringOrder(execute("SELECT * FROM %s PER PARTITION LIMIT ?", 2), +row(0, 0, 0), +row(0, 1, 1), +row(1, 0, 0), +row(1, 1, 1), +row(2, 0, 0), +row(2, 1, 1), +row(3, 0, 0), +row(3, 1, 1), +row(4, 0, 0), +row(4, 1, 1)); + +// Combined Per Partition and "global" limit +assertRowCount(execute("SELECT * FROM %s PER PARTITION LIMIT ? LIMIT ?", 2, 6), + 6); + +// odd amount of results +assertRowCount(execute("SELECT * FROM %s PER PARTITION LIMIT ? LIMIT ?", 2, 5),
[1/3] cassandra git commit: Fix paging logic for deleted partitions with static columns
Repository: cassandra Updated Branches: refs/heads/trunk 35fbd7bc5 -> 26976160e Fix paging logic for deleted partitions with static columns patch by Sharvanath Pathak; reviewed by Benjamin Lerer for CASSANDRA-12107 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/84426d18 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/84426d18 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/84426d18 Branch: refs/heads/trunk Commit: 84426d183ae095107bb264b92d828f231d0a9826 Parents: fbd287a Author: Sharvanath Pathak Authored: Thu Jul 14 21:38:14 2016 +0200 Committer: Benjamin Lerer Committed: Thu Jul 14 21:38:14 2016 +0200 -- CHANGES.txt | 1 + .../apache/cassandra/db/filter/DataLimits.java | 3 +- .../validation/operations/SelectLimitTest.java | 31 3 files changed, 33 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/84426d18/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 3829046..59f0a5f 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.0.9 + * Fix paging logic for deleted partitions with static columns (CASSANDRA-12107) * Wait until the message is being send to decide which serializer must be used (CASSANDRA-11393) * Fix migration of static thrift column names with non-text comparators (CASSANDRA-12147) * Fix upgrading sparse tables that are incorrectly marked as dense (CASSANDRA-11315) http://git-wip-us.apache.org/repos/asf/cassandra/blob/84426d18/src/java/org/apache/cassandra/db/filter/DataLimits.java -- diff --git a/src/java/org/apache/cassandra/db/filter/DataLimits.java b/src/java/org/apache/cassandra/db/filter/DataLimits.java index f6fdcdd..94f43dc 100644 --- a/src/java/org/apache/cassandra/db/filter/DataLimits.java +++ b/src/java/org/apache/cassandra/db/filter/DataLimits.java @@ -360,8 +360,7 @@ public abstract class DataLimits public void applyToPartition(DecoratedKey partitionKey, Row staticRow) { 
rowInCurrentPartition = 0; -if (!staticRow.isEmpty() && (assumeLiveData || staticRow.hasLiveData(nowInSec))) -hasLiveStaticRow = true; +hasLiveStaticRow = !staticRow.isEmpty() && (assumeLiveData || staticRow.hasLiveData(nowInSec)); } @Override http://git-wip-us.apache.org/repos/asf/cassandra/blob/84426d18/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java index a21ef3c..aeb3d56 100644 --- a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java +++ b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java @@ -133,4 +133,35 @@ public class SelectLimitTest extends CQLTester row(2, 2), row(2, 3)); } + +@Test +public void testLimitWithDeletedRowsAndStaticColumns() throws Throwable +{ +createTable("CREATE TABLE %s (pk int, c int, v int, s int static, PRIMARY KEY (pk, c))"); + +execute("INSERT INTO %s (pk, c, v, s) VALUES (1, -1, 1, 1)"); +execute("INSERT INTO %s (pk, c, v, s) VALUES (2, -1, 1, 1)"); +execute("INSERT INTO %s (pk, c, v, s) VALUES (3, -1, 1, 1)"); +execute("INSERT INTO %s (pk, c, v, s) VALUES (4, -1, 1, 1)"); +execute("INSERT INTO %s (pk, c, v, s) VALUES (5, -1, 1, 1)"); + +assertRows(execute("SELECT * FROM %s"), + row(1, -1, 1, 1), + row(2, -1, 1, 1), + row(3, -1, 1, 1), + row(4, -1, 1, 1), + row(5, -1, 1, 1)); + +execute("DELETE FROM %s WHERE pk = 2"); + +assertRows(execute("SELECT * FROM %s"), + row(1, -1, 1, 1), + row(3, -1, 1, 1), + row(4, -1, 1, 1), + row(5, -1, 1, 1)); + +assertRows(execute("SELECT * FROM %s LIMIT 2"), + row(1, -1, 1, 1), + row(3, -1, 1, 1)); +} }
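The one-line change above replaces a sticky flag with a per-partition assignment: the old code only ever set {{hasLiveStaticRow}} to true, so a live static row seen in one partition leaked into the row accounting of every later partition. A minimal Python sketch (an illustrative model of the counting logic, not the Java counter itself) of why that over-counts once a fully deleted partition follows one with a live static row:

```python
# Model of the DataLimits counting bug: a partition whose only live content
# is its static row counts as one row toward LIMIT. The old code never
# cleared the has_live_static_row flag between partitions.
def count_rows(partitions, fix_applied):
    # partitions: list of (static_row_is_live, number_of_live_rows) pairs.
    counted = 0
    has_live_static_row = False
    for static_live, live_rows in partitions:
        if fix_applied:
            has_live_static_row = static_live   # reset on every partition
        elif static_live:
            has_live_static_row = True          # sticky: never cleared
        if live_rows == 0 and has_live_static_row:
            counted += 1                        # static row stands in as a row
        counted += live_rows
    return counted

# Partition 1 has a live static row and one live row; partition 2 was
# fully deleted (no live rows, no live static row).
partitions = [(True, 1), (False, 0)]
assert count_rows(partitions, fix_applied=True) == 1
assert count_rows(partitions, fix_applied=False) == 2  # phantom row counted
```

This mirrors the symptom in the unit test added by the commit: with the sticky flag, deleted partitions with static columns wrongly consumed the LIMIT.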
[1/2] cassandra git commit: Fix paging logic for deleted partitions with static columns
Repository: cassandra Updated Branches: refs/heads/cassandra-3.9 90afc58d3 -> 2764e85a5 Fix paging logic for deleted partitions with static columns patch by Sharvanath Pathak; reviewed by Benjamin Lerer for CASSANDRA-12107 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/84426d18 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/84426d18 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/84426d18 Branch: refs/heads/cassandra-3.9 Commit: 84426d183ae095107bb264b92d828f231d0a9826 Parents: fbd287a Author: Sharvanath Pathak Authored: Thu Jul 14 21:38:14 2016 +0200 Committer: Benjamin Lerer Committed: Thu Jul 14 21:38:14 2016 +0200 -- CHANGES.txt | 1 + .../apache/cassandra/db/filter/DataLimits.java | 3 +- .../validation/operations/SelectLimitTest.java | 31 3 files changed, 33 insertions(+), 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/84426d18/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 3829046..59f0a5f 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.0.9 + * Fix paging logic for deleted partitions with static columns (CASSANDRA-12107) * Wait until the message is being send to decide which serializer must be used (CASSANDRA-11393) * Fix migration of static thrift column names with non-text comparators (CASSANDRA-12147) * Fix upgrading sparse tables that are incorrectly marked as dense (CASSANDRA-11315) http://git-wip-us.apache.org/repos/asf/cassandra/blob/84426d18/src/java/org/apache/cassandra/db/filter/DataLimits.java -- diff --git a/src/java/org/apache/cassandra/db/filter/DataLimits.java b/src/java/org/apache/cassandra/db/filter/DataLimits.java index f6fdcdd..94f43dc 100644 --- a/src/java/org/apache/cassandra/db/filter/DataLimits.java +++ b/src/java/org/apache/cassandra/db/filter/DataLimits.java @@ -360,8 +360,7 @@ public abstract class DataLimits public void applyToPartition(DecoratedKey partitionKey, Row staticRow) { 
rowInCurrentPartition = 0; -if (!staticRow.isEmpty() && (assumeLiveData || staticRow.hasLiveData(nowInSec))) -hasLiveStaticRow = true; +hasLiveStaticRow = !staticRow.isEmpty() && (assumeLiveData || staticRow.hasLiveData(nowInSec)); } @Override http://git-wip-us.apache.org/repos/asf/cassandra/blob/84426d18/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java index a21ef3c..aeb3d56 100644 --- a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java +++ b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java @@ -133,4 +133,35 @@ public class SelectLimitTest extends CQLTester row(2, 2), row(2, 3)); } + +@Test +public void testLimitWithDeletedRowsAndStaticColumns() throws Throwable +{ +createTable("CREATE TABLE %s (pk int, c int, v int, s int static, PRIMARY KEY (pk, c))"); + +execute("INSERT INTO %s (pk, c, v, s) VALUES (1, -1, 1, 1)"); +execute("INSERT INTO %s (pk, c, v, s) VALUES (2, -1, 1, 1)"); +execute("INSERT INTO %s (pk, c, v, s) VALUES (3, -1, 1, 1)"); +execute("INSERT INTO %s (pk, c, v, s) VALUES (4, -1, 1, 1)"); +execute("INSERT INTO %s (pk, c, v, s) VALUES (5, -1, 1, 1)"); + +assertRows(execute("SELECT * FROM %s"), + row(1, -1, 1, 1), + row(2, -1, 1, 1), + row(3, -1, 1, 1), + row(4, -1, 1, 1), + row(5, -1, 1, 1)); + +execute("DELETE FROM %s WHERE pk = 2"); + +assertRows(execute("SELECT * FROM %s"), + row(1, -1, 1, 1), + row(3, -1, 1, 1), + row(4, -1, 1, 1), + row(5, -1, 1, 1)); + +assertRows(execute("SELECT * FROM %s LIMIT 2"), + row(1, -1, 1, 1), + row(3, -1, 1, 1)); +} }
[2/2] cassandra git commit: Merge branch cassandra-3.0 into cassandra-3.9
Merge branch cassandra-3.0 into cassandra-3.9 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/2764e85a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/2764e85a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/2764e85a Branch: refs/heads/cassandra-3.9 Commit: 2764e85a557c140f44ffc08c09e4b06a61e1ef4e Parents: 90afc58 84426d1 Author: Benjamin Lerer Authored: Thu Jul 14 21:44:34 2016 +0200 Committer: Benjamin Lerer Committed: Thu Jul 14 21:44:34 2016 +0200 -- CHANGES.txt | 1 + .../apache/cassandra/db/filter/DataLimits.java | 3 +- .../validation/operations/SelectLimitTest.java | 32 +++- 3 files changed, 33 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2764e85a/CHANGES.txt -- diff --cc CHANGES.txt index ba8e299,59f0a5f..4c46695 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,9 -1,5 +1,10 @@@ -3.0.9 +3.9 + * Partial revert of CASSANDRA-11971, cannot recycle buffer in SP.sendMessagesToNonlocalDC (CASSANDRA-11950) + * Fix hdr logging for single operation workloads (CASSANDRA-12145) + * Fix SASI PREFIX search in CONTAINS mode with partial terms (CASSANDRA-12073) + * Increase size of flushExecutor thread pool (CASSANDRA-12071) +Merged from 3.0: + * Fix paging logic for deleted partitions with static columns (CASSANDRA-12107) * Wait until the message is being send to decide which serializer must be used (CASSANDRA-11393) * Fix migration of static thrift column names with non-text comparators (CASSANDRA-12147) * Fix upgrading sparse tables that are incorrectly marked as dense (CASSANDRA-11315) http://git-wip-us.apache.org/repos/asf/cassandra/blob/2764e85a/src/java/org/apache/cassandra/db/filter/DataLimits.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/2764e85a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java -- diff --cc test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java 
index 528d9f6,aeb3d56..21c48dd --- a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java +++ b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java @@@ -26,7 -26,7 +26,6 @@@ import org.junit.Test import org.apache.cassandra.config.DatabaseDescriptor; import org.apache.cassandra.cql3.CQLTester; import org.apache.cassandra.dht.ByteOrderedPartitioner; --import org.apache.cassandra.exceptions.InvalidRequestException; public class SelectLimitTest extends CQLTester { @@@ -135,114 -135,33 +134,145 @@@ } @Test +public void testPerPartitionLimit() throws Throwable +{ +perPartitionLimitTest(false); +} + +@Test +public void testPerPartitionLimitWithCompactStorage() throws Throwable +{ +perPartitionLimitTest(true); +} + +private void perPartitionLimitTest(boolean withCompactStorage) throws Throwable +{ +String query = "CREATE TABLE %s (a int, b int, c int, PRIMARY KEY (a, b))"; + +if (withCompactStorage) +createTable(query + " WITH COMPACT STORAGE"); +else +createTable(query); + +for (int i = 0; i < 5; i++) +{ +for (int j = 0; j < 5; j++) +{ +execute("INSERT INTO %s (a, b, c) VALUES (?, ?, ?)", i, j, j); +} +} + +assertInvalidMessage("LIMIT must be strictly positive", + "SELECT * FROM %s PER PARTITION LIMIT ?", 0); +assertInvalidMessage("LIMIT must be strictly positive", + "SELECT * FROM %s PER PARTITION LIMIT ?", -1); + +assertRowsIgnoringOrder(execute("SELECT * FROM %s PER PARTITION LIMIT ?", 2), +row(0, 0, 0), +row(0, 1, 1), +row(1, 0, 0), +row(1, 1, 1), +row(2, 0, 0), +row(2, 1, 1), +row(3, 0, 0), +row(3, 1, 1), +row(4, 0, 0), +row(4, 1, 1)); + +// Combined Per Partition and "global" limit +assertRowCount(execute("SELECT * FROM %s PER PARTITION LIMIT ? LIMIT ?", 2, 6), + 6); + +// odd amount of results +assertRowCount(execute("SELECT * FROM %s PER PARTITION LIMIT ? LIMIT ?"
[jira] [Updated] (CASSANDRA-12096) dtest failure in consistency_test.TestAccuracy.test_simple_strategy_each_quorum_users
[ https://issues.apache.org/jira/browse/CASSANDRA-12096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-12096: Resolution: Fixed Status: Resolved (was: Patch Available) > dtest failure in > consistency_test.TestAccuracy.test_simple_strategy_each_quorum_users > - > > Key: CASSANDRA-12096 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12096 > Project: Cassandra > Issue Type: Test >Reporter: Sean McCarthy >Assignee: Philip Thompson > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, > node4.log, node4_debug.log, node4_gc.log, node5.log, node5_debug.log, > node5_gc.log > > > example failure: > http://cassci.datastax.com/job/trunk_novnode_dtest/407/testReport/consistency_test/TestAccuracy/test_simple_strategy_each_quorum_users > Failed on CassCI build trunk_novnode_dtest #407 > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/tools.py", line 288, in wrapped > f(obj) > File "/home/automaton/cassandra-dtest/consistency_test.py", line 591, in > test_simple_strategy_each_quorum_users > > self._run_test_function_in_parallel(TestAccuracy.Validation.validate_users, > [self.nodes], [self.rf], combinations) > File "/home/automaton/cassandra-dtest/consistency_test.py", line 543, in > _run_test_function_in_parallel > assert False, err.message > 'Error from server: code=2200 [Invalid query] message="unconfigured table > users" > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
cassandra git commit: Fix paging logic for deleted partitions with static columns
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.0 fbd287ad2 -> 84426d183


Fix paging logic for deleted partitions with static columns

patch by Sharvanath Pathak; reviewed by Benjamin Lerer for CASSANDRA-12107


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/84426d18
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/84426d18
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/84426d18

Branch: refs/heads/cassandra-3.0
Commit: 84426d183ae095107bb264b92d828f231d0a9826
Parents: fbd287a
Author: Sharvanath Pathak
Authored: Thu Jul 14 21:38:14 2016 +0200
Committer: Benjamin Lerer
Committed: Thu Jul 14 21:38:14 2016 +0200

----------------------------------------------------------------------
 CHANGES.txt                                     |  1 +
 .../apache/cassandra/db/filter/DataLimits.java  |  3 +-
 .../validation/operations/SelectLimitTest.java  | 31
 3 files changed, 33 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cassandra/blob/84426d18/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 3829046..59f0a5f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0.9
+ * Fix paging logic for deleted partitions with static columns (CASSANDRA-12107)
 * Wait until the message is being send to decide which serializer must be used (CASSANDRA-11393)
 * Fix migration of static thrift column names with non-text comparators (CASSANDRA-12147)
 * Fix upgrading sparse tables that are incorrectly marked as dense (CASSANDRA-11315)


http://git-wip-us.apache.org/repos/asf/cassandra/blob/84426d18/src/java/org/apache/cassandra/db/filter/DataLimits.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/db/filter/DataLimits.java b/src/java/org/apache/cassandra/db/filter/DataLimits.java
index f6fdcdd..94f43dc 100644
--- a/src/java/org/apache/cassandra/db/filter/DataLimits.java
+++ b/src/java/org/apache/cassandra/db/filter/DataLimits.java
@@ -360,8 +360,7 @@ public abstract class DataLimits
         public void applyToPartition(DecoratedKey partitionKey, Row staticRow)
         {
             rowInCurrentPartition = 0;
-            if (!staticRow.isEmpty() && (assumeLiveData || staticRow.hasLiveData(nowInSec)))
-                hasLiveStaticRow = true;
+            hasLiveStaticRow = !staticRow.isEmpty() && (assumeLiveData || staticRow.hasLiveData(nowInSec));
         }

         @Override


http://git-wip-us.apache.org/repos/asf/cassandra/blob/84426d18/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
----------------------------------------------------------------------
diff --git a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
index a21ef3c..aeb3d56 100644
--- a/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
+++ b/test/unit/org/apache/cassandra/cql3/validation/operations/SelectLimitTest.java
@@ -133,4 +133,35 @@ public class SelectLimitTest extends CQLTester
                    row(2, 2),
                    row(2, 3));
     }
+
+    @Test
+    public void testLimitWithDeletedRowsAndStaticColumns() throws Throwable
+    {
+        createTable("CREATE TABLE %s (pk int, c int, v int, s int static, PRIMARY KEY (pk, c))");
+
+        execute("INSERT INTO %s (pk, c, v, s) VALUES (1, -1, 1, 1)");
+        execute("INSERT INTO %s (pk, c, v, s) VALUES (2, -1, 1, 1)");
+        execute("INSERT INTO %s (pk, c, v, s) VALUES (3, -1, 1, 1)");
+        execute("INSERT INTO %s (pk, c, v, s) VALUES (4, -1, 1, 1)");
+        execute("INSERT INTO %s (pk, c, v, s) VALUES (5, -1, 1, 1)");
+
+        assertRows(execute("SELECT * FROM %s"),
+                   row(1, -1, 1, 1),
+                   row(2, -1, 1, 1),
+                   row(3, -1, 1, 1),
+                   row(4, -1, 1, 1),
+                   row(5, -1, 1, 1));
+
+        execute("DELETE FROM %s WHERE pk = 2");
+
+        assertRows(execute("SELECT * FROM %s"),
+                   row(1, -1, 1, 1),
+                   row(3, -1, 1, 1),
+                   row(4, -1, 1, 1),
+                   row(5, -1, 1, 1));
+
+        assertRows(execute("SELECT * FROM %s LIMIT 2"),
+                   row(1, -1, 1, 1),
+                   row(3, -1, 1, 1));
+    }
 }
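The one-line change in the commit above is subtle, so here is a toy, self-contained illustration of why it matters. All names below are hypothetical; this is a simplified model, not the real Cassandra `DataLimits` class.

```java
// Toy model of a per-partition limit counter, NOT the real DataLimits class.
// The bug: a boolean that is only ever set to true carries state from a
// partition with a live static row into the next, fully deleted partition,
// which then wrongly counts against LIMIT during paging.
public class StaticRowFlagDemo
{
    static boolean buggyHasLiveStaticRow = false;

    // Pre-patch shape: the flag is set but never cleared between partitions.
    static boolean buggyApplyToPartition(boolean partitionHasLiveStaticRow)
    {
        if (partitionHasLiveStaticRow)
            buggyHasLiveStaticRow = true;
        return buggyHasLiveStaticRow;
    }

    // Patched shape: unconditional assignment resets the flag for every partition.
    static boolean fixedApplyToPartition(boolean partitionHasLiveStaticRow)
    {
        return partitionHasLiveStaticRow;
    }

    public static void main(String[] args)
    {
        buggyApplyToPartition(true);                  // partition 1: live static row
        boolean buggy = buggyApplyToPartition(false); // partition 2: fully deleted
        boolean fixed = fixedApplyToPartition(false); // partition 2: fully deleted
        System.out.println("buggy=" + buggy + " fixed=" + fixed); // buggy=true fixed=false
    }
}
```

With the buggy variant the deleted partition still looks live to the counter, which matches the off-by-one LIMIT results reported in CASSANDRA-12107.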
[jira] [Updated] (CASSANDRA-12107) Fix range scans for table with live static rows
[ https://issues.apache.org/jira/browse/CASSANDRA-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-12107: --- Attachment: 12107-3.0.txt Thanks for the scenario to reproduce the problem and the patch. The patch looks good to me. I have just added a unit test to it. ||utests||dtests|| |[3.0|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12107-3.0-testall/]|[3.0|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12107-3.0-dtest/]| |[3.9|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12107-3.9-testall/]|[3.9|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-12107-3.9-dtest/]| The failing tests do not look related to the change. > Fix range scans for table with live static rows > --- > > Key: CASSANDRA-12107 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12107 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sharvanath Pathak > Fix For: 3.0.9, 3.9 > > Attachments: 12107-3.0.txt, repro > > > We were seeing some weird behaviour with limit based scan queries. In > particular, we see the following: > {noformat} > $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM > files WHERE token(uuid) >= token('6b470c3e43ee06d1') limit 2" > Consistency level set to LOCAL_QUORUM. > uuid | system.token(uuid) > --+-- > 6b470c3e43ee06d1 | -9218823070349964862 > 484b091ca97803cd | -8954822859271125729 > (2 rows) > $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM > files WHERE token(uuid) > token('6b470c3e43ee06d1') limit 1" > Consistency level set to LOCAL_QUORUM. > uuid | system.token(uuid) > --+-- > c348aaec2f1e4b85 | -9218781105444826588 > {noformat} > In the table uuid is partition key, and it has a clustering key as well. > So the uuid "c348aaec2f1e4b85" should be the second one in the limit query. > After some investigation, it seems to me like the issue is in the way > DataLimits handles static rows. 
Here is a patch for trunk > (https://github.com/sharvanath/cassandra/commit/9a460d40e55bd7e3604d987ed4df5c8c2e03ffdc) > which seems to fix it for me. Please take a look, seems like a pretty > critical issue to me. > I have forked the dtests for it as well. However, since trunk has some > failures already, I'm not fully sure how to infer the results. > http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-dtest/ > http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-testall/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11519) Add support for IBM POWER
[ https://issues.apache.org/jira/browse/CASSANDRA-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378174#comment-15378174 ] Rei Odaira commented on CASSANDRA-11519: I have updated the patches to enable the unaligned Unsafe operations only for POWER8 and later processors. They use the SIGAR library to get the CPU model. > Add support for IBM POWER > - > > Key: CASSANDRA-11519 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11519 > Project: Cassandra > Issue Type: Improvement > Components: Core > Environment: POWER architecture >Reporter: Rei Odaira >Assignee: Rei Odaira >Priority: Minor > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 11519-2.2.txt, 11519-3.0.txt > > > Add support for the IBM POWER architecture (ppc, ppc64, and ppc64le) in > org.apache.cassandra.utils.FastByteOperations, > org.apache.cassandra.utils.memory.MemoryUtil, and > org.apache.cassandra.io.util.Memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
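As a rough sketch of the kind of gate described in the comment above: the actual patch consults the SIGAR library for the CPU model, which is not reproduced here. The arch strings, regex, and class name below are illustrative assumptions, not the patch itself.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative-only gate for unaligned Unsafe memory access.
public class UnalignedAccessCheck
{
    static final Pattern POWER_MODEL = Pattern.compile("POWER(\\d+)");

    static boolean allowUnaligned(String osArch, String cpuModel)
    {
        // x86 handles unaligned loads/stores in hardware.
        if (osArch.equals("i386") || osArch.equals("x86")
            || osArch.equals("amd64") || osArch.equals("x86_64"))
            return true;
        // On POWER (ppc/ppc64/ppc64le), only POWER8 and later handle
        // unaligned operations efficiently, so check the model number.
        if (osArch.startsWith("ppc"))
        {
            if (cpuModel == null)
                return false;
            Matcher m = POWER_MODEL.matcher(cpuModel.toUpperCase());
            return m.find() && Integer.parseInt(m.group(1)) >= 8;
        }
        return false;
    }

    public static void main(String[] args)
    {
        System.out.println(allowUnaligned("amd64", null));       // true
        System.out.println(allowUnaligned("ppc64le", "POWER8")); // true
        System.out.println(allowUnaligned("ppc64", "POWER7"));   // false
    }
}
```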
[jira] [Commented] (CASSANDRA-10810) Make rebuild operations resumable
[ https://issues.apache.org/jira/browse/CASSANDRA-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378175#comment-15378175 ] Kaide Mu commented on CASSANDRA-10810: -- The rebuild operation is now resumable; I'll add a new dtest shortly. > Make rebuild operations resumable > - > > Key: CASSANDRA-10810 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10810 > Project: Cassandra > Issue Type: Wish > Components: Streaming and Messaging >Reporter: Jeremy Hanna >Assignee: Kaide Mu > Fix For: 3.x > > > Related to CASSANDRA-8942, now that we can resume bootstrap operations, this > could also be possible with rebuild operations, such as when you bootstrap > new nodes in a completely new datacenter in two steps. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9507) range metrics are not updated for timeout and unavailable in StorageProxy
[ https://issues.apache.org/jira/browse/CASSANDRA-9507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-9507: -- Reviewer: Alex Petrov (was: Benjamin Lerer) > range metrics are not updated for timeout and unavailable in StorageProxy > - > > Key: CASSANDRA-9507 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9507 > Project: Cassandra > Issue Type: Bug > Components: Observability >Reporter: sankalp kohli >Assignee: Nachiket Patil >Priority: Minor > Attachments: CASANDRA-9507 trunk.diff, CASSANDRA-9507 v2.1.diff, > CASSANDRA-9507 v2.2.diff, CASSANDRA-9507 v3.0.diff > > > Looking at the code, it looks like range metrics are not updated for timeouts > and unavailable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11519) Add support for IBM POWER
[ https://issues.apache.org/jira/browse/CASSANDRA-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rei Odaira updated CASSANDRA-11519: --- Attachment: (was: 11519-3.0.txt) > Add support for IBM POWER > - > > Key: CASSANDRA-11519 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11519 > Project: Cassandra > Issue Type: Improvement > Components: Core > Environment: POWER architecture >Reporter: Rei Odaira >Assignee: Rei Odaira >Priority: Minor > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 11519-2.2.txt, 11519-3.0.txt > > > Add support for the IBM POWER architecture (ppc, ppc64, and ppc64le) in > org.apache.cassandra.utils.FastByteOperations, > org.apache.cassandra.utils.memory.MemoryUtil, and > org.apache.cassandra.io.util.Memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11519) Add support for IBM POWER
[ https://issues.apache.org/jira/browse/CASSANDRA-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rei Odaira updated CASSANDRA-11519: --- Attachment: 11519-3.0.txt 11519-2.2.txt > Add support for IBM POWER > - > > Key: CASSANDRA-11519 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11519 > Project: Cassandra > Issue Type: Improvement > Components: Core > Environment: POWER architecture >Reporter: Rei Odaira >Assignee: Rei Odaira >Priority: Minor > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 11519-2.2.txt, 11519-3.0.txt > > > Add support for the IBM POWER architecture (ppc, ppc64, and ppc64le) in > org.apache.cassandra.utils.FastByteOperations, > org.apache.cassandra.utils.memory.MemoryUtil, and > org.apache.cassandra.io.util.Memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11519) Add support for IBM POWER
[ https://issues.apache.org/jira/browse/CASSANDRA-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rei Odaira updated CASSANDRA-11519: --- Attachment: (was: 11519-2.1.txt) > Add support for IBM POWER > - > > Key: CASSANDRA-11519 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11519 > Project: Cassandra > Issue Type: Improvement > Components: Core > Environment: POWER architecture >Reporter: Rei Odaira >Assignee: Rei Odaira >Priority: Minor > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 11519-3.0.txt > > > Add support for the IBM POWER architecture (ppc, ppc64, and ppc64le) in > org.apache.cassandra.utils.FastByteOperations, > org.apache.cassandra.utils.memory.MemoryUtil, and > org.apache.cassandra.io.util.Memory. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-10810) Make rebuild operations resumable
[ https://issues.apache.org/jira/browse/CASSANDRA-10810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kaide Mu reassigned CASSANDRA-10810: Assignee: Kaide Mu > Make rebuild operations resumable > - > > Key: CASSANDRA-10810 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10810 > Project: Cassandra > Issue Type: Wish > Components: Streaming and Messaging >Reporter: Jeremy Hanna >Assignee: Kaide Mu > Fix For: 3.x > > > Related to CASSANDRA-8942, now that we can resume bootstrap operations, this > could also be possible with rebuild operations, such as when you bootstrap > new nodes in a completely new datacenter in two steps. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15378125#comment-15378125 ] Jim Witschey commented on CASSANDRA-11465: -- bq. we may want to spend more time looking into [doing synchronous CL.ALL writes] This is worth looking into, but with this caveat: in the dtest environment, timeouts mean {{CL.ALL}} calls can actually make tests flakier. I don't have a concrete example of this, but that's my memory. > dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test > -- > > Key: CASSANDRA-11465 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11465 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Stefania > Labels: dtest > > Failing on the following assert, on trunk only: > {{self.assertEqual(len(errs[0]), 1)}} > Is not failing consistently. > example failure: > http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test > Failed on CassCI build trunk_dtest #1087 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377622#comment-15377622 ] Tyler Hobbs commented on CASSANDRA-11465: - FWIW, I spent a little time trying to make tracing more reliable for tests in CASSANDRA-11928 by doing synchronous CL.ALL writes when a system flag was present. Unfortunately, this appeared to cause some kind of deadlock, and it didn't seem worth it to investigate further. However, if this is a problem across many tests, we may want to spend more time looking into that. > dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test > -- > > Key: CASSANDRA-11465 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11465 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Stefania > Labels: dtest > > Failing on the following assert, on trunk only: > {{self.assertEqual(len(errs[0]), 1)}} > Is not failing consistently. > example failure: > http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test > Failed on CassCI build trunk_dtest #1087 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11687) dtest failure in rebuild_test.TestRebuild.simple_rebuild_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377341#comment-15377341 ] Russ Hatch commented on CASSANDRA-11687: might help to know the actual value when we fail with this message: https://github.com/riptano/cassandra-dtest/pull/1097 > dtest failure in rebuild_test.TestRebuild.simple_rebuild_test > - > > Key: CASSANDRA-11687 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11687 > Project: Cassandra > Issue Type: Test >Reporter: Russ Hatch >Assignee: DS Test Eng > Labels: dtest > > single failure on most recent run (3.0 no-vnode) > {noformat} > concurrent rebuild should not be allowed, but one rebuild command should have > succeeded. > {noformat} > http://cassci.datastax.com/job/cassandra-3.0_novnode_dtest/217/testReport/rebuild_test/TestRebuild/simple_rebuild_test > Failed on CassCI build cassandra-3.0_novnode_dtest #217 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12172) Fail to bootstrap new node.
[ https://issues.apache.org/jira/browse/CASSANDRA-12172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377330#comment-15377330 ] Dikang Gu commented on CASSANDRA-12172: --- and some errors like this in the log:
{code}
2016-07-14_04:10:47.10885 WARN 04:10:47 [SharedPool-Worker-6]: Uncaught exception on thread Thread[SharedPool-Worker-6,5,main]: {}
2016-07-14_04:10:47.10887 java.lang.NullPointerException: null
2016-07-14_04:10:47.10887 at org.apache.cassandra.service.StorageService.isRpcReady(StorageService.java:1842) ~[apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
2016-07-14_04:10:47.10887 at org.apache.cassandra.service.StorageService.notifyUp(StorageService.java:1800) ~[apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
2016-07-14_04:10:47.10888 at org.apache.cassandra.service.StorageService.onAlive(StorageService.java:2379) ~[apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
2016-07-14_04:10:47.10888 at org.apache.cassandra.gms.Gossiper.realMarkAlive(Gossiper.java:982) ~[apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
2016-07-14_04:10:47.10888 at org.apache.cassandra.gms.Gossiper$3.response(Gossiper.java:962) ~[apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
2016-07-14_04:10:47.10888 at org.apache.cassandra.net.ResponseVerbHandler.doVerb(ResponseVerbHandler.java:53) ~[apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
2016-07-14_04:10:47.10889 at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:67) ~[apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
2016-07-14_04:10:47.10889 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_45]
2016-07-14_04:10:47.10889 at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) ~[apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
2016-07-14_04:10:47.10889 at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136) [apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
2016-07-14_04:10:47.10890 at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
2016-07-14_04:10:47.10890 at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
{code}
> Fail to bootstrap new node.
> ---
>
> Key: CASSANDRA-12172
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12172
> Project: Cassandra
> Issue Type: Bug
> Reporter: Dikang Gu
>
> When I try to bootstrap a new node in the cluster, it sometimes fails because of the following exceptions.
> {code}
> 2016-07-12_05:14:55.58509 INFO 05:14:55 [main]: JOINING: Starting to bootstrap...
> 2016-07-12_05:14:56.07491 INFO 05:14:56 [GossipTasks:1]: InetAddress /2401:db00:2011:50c7:face:0:9:0 is now DOWN
> 2016-07-12_05:14:56.32219 Exception (java.lang.RuntimeException) encountered during startup: A node required to move the data consistently is down (/2401:db00:2011:50c7:face:0:9:0). If you wish to move the data from a potentially inconsistent replica, restart the node with -Dcassandra.consistent.rangemovement=false
> 2016-07-12_05:14:56.32582 ERROR 05:14:56 [main]: Exception encountered during startup
> 2016-07-12_05:14:56.32583 java.lang.RuntimeException: A node required to move the data consistently is down (/2401:db00:2011:50c7:face:0:9:0). If you wish to move the data from a potentially inconsistent replica, restart the node with -Dcassandra.consistent.rangemovement=false
> 2016-07-12_05:14:56.32584 at org.apache.cassandra.dht.RangeStreamer.getAllRangesWithStrictSourcesFor(RangeStreamer.java:264) ~[apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
> 2016-07-12_05:14:56.32584 at org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:147) ~[apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
> 2016-07-12_05:14:56.32584 at org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:82) ~[apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
> 2016-07-12_05:14:56.32584 at org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1230) ~[apache-cassandra-2.2.5+git20160315.c29948b.jar:2.2.5+git20160315.c29948b]
> 2016-07-12_05:14:56.32584 at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:924) ~[apache-cassandra-2.2.5+git201
[jira] [Updated] (CASSANDRA-12204) sstable2json should let the user know that failure might have occurred due to lack of disk space
[ https://issues.apache.org/jira/browse/CASSANDRA-12204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thanh updated CASSANDRA-12204: -- Description: $ sstable2json testks_0101-datatable-ka-61613-Data.db > ~/json61613 java.io.IOException: Error writing output stream at org.apache.cassandra.tools.SSTableExport.checkStream(SSTableExport.java:82) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:344) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:369) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:382) at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:467) The above error doesn't give the user any clue as to what happened/why it errored. It turns out, the above, can result from running out of disk space, which can happen if you're trying to write out the json of a very large sstable. sstable2json should let the user know that he/she might be out of disk space. was: $ sstable2json testks_0101-datatable-ka-61613-Data.db > ~/json61613 java.io.IOException: Error writing output stream at org.apache.cassandra.tools.SSTableExport.checkStream(SSTableExport.java:82) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:344) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:369) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:382) at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:467) The above error doesn't give the user any clue as to what happened/why it errored. It turns out, the above, can result from running out of disk space, which can happen if you're trying to write out the json of a very large sstable. sstable2jon should let the user know that he/she might be out of disk space. 
> sstable2json should let the user know that failure might have occurred due to > lack of disk space > > > Key: CASSANDRA-12204 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12204 > Project: Cassandra > Issue Type: Improvement >Reporter: Thanh >Priority: Minor > > $ sstable2json testks_0101-datatable-ka-61613-Data.db > ~/json61613 > java.io.IOException: Error writing output stream > at > org.apache.cassandra.tools.SSTableExport.checkStream(SSTableExport.java:82) > at > org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:344) > at > org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:369) > at > org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:382) > at > org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:467) > The above error doesn't give the user any clue as to what happened/why it > errored. > It turns out, the above, can result from running out of disk space, which can > happen if you're trying to write out the json of a very large sstable. > sstable2json should let the user know that he/she might be out of disk space. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
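A minimal sketch of the improvement CASSANDRA-12204 requests. The class and method names below are invented for illustration, and the real SSTableExport internals are not reproduced; this only shows the kind of disk-space hint the ticket asks for.

```java
import java.io.IOException;

// Hypothetical helper: when the export's output stream fails, surface a
// disk-space hint instead of a bare IOException message.
public class ExportErrorHint
{
    static String describeWriteFailure(IOException e)
    {
        return "Error writing output stream: " + e.getMessage()
             + " (hint: this often means the target filesystem ran out of disk space;"
             + " exporting a large sstable to JSON can need several times the"
             + " sstable's on-disk size)";
    }

    public static void main(String[] args)
    {
        System.out.println(describeWriteFailure(new IOException("No space left on device")));
    }
}
```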
[jira] [Created] (CASSANDRA-12205) nodetool tablestats sstable count missing.
Cameron MacMinn created CASSANDRA-12205: --- Summary: nodetool tablestats sstable count missing. Key: CASSANDRA-12205 URL: https://issues.apache.org/jira/browse/CASSANDRA-12205 Project: Cassandra Issue Type: Bug Components: Tools Environment: Cassandra 3.7 Reporter: Cameron MacMinn Attachments: bad.txt, good.txt As a user, I have used nodetool cfstats since v2.1. The most useful line is the one like 'SSTable count: 12'. As a user, I want v3.7 nodetool tablestats to continue showing the SSTable count. At the moment, the SSTable count is missing from the output. Examples attached. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-12204) sstable2json should let the user know that failure might have occurred due to lack of disk space
Thanh created CASSANDRA-12204: - Summary: sstable2json should let the user know that failure might have occurred due to lack of disk space Key: CASSANDRA-12204 URL: https://issues.apache.org/jira/browse/CASSANDRA-12204 Project: Cassandra Issue Type: Improvement Reporter: Thanh Priority: Minor
$ sstable2json testks_0101-datatable-ka-61613-Data.db > ~/json61613
java.io.IOException: Error writing output stream
    at org.apache.cassandra.tools.SSTableExport.checkStream(SSTableExport.java:82)
    at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:344)
    at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:369)
    at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:382)
    at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:467)
The above error doesn't give the user any clue as to what happened or why it errored. It turns out the above can result from running out of disk space, which can happen if you're trying to write out the json of a very large sstable. sstable2json should let the user know that he/she might be out of disk space. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12204) sstable2json should let the user know that failure might have occurred due to lack of disk space
[ https://issues.apache.org/jira/browse/CASSANDRA-12204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thanh updated CASSANDRA-12204: -- Description: $ sstable2json testks_0101-datatable-ka-61613-Data.db > ~/json61613 java.io.IOException: Error writing output stream at org.apache.cassandra.tools.SSTableExport.checkStream(SSTableExport.java:82) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:344) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:369) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:382) at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:467) The above error doesn't give the user any clue as to what happened/why it errored. It turns out, the above, can result from running out of disk space, which can happen if you're trying to write out the json of a very large sstable. sstable2jon should let the user know that he/she might be out of disk space. was: $ sstable2json testks_0101-datatable-ka-61613-Data.db > ~/json61613 java.io.IOException: Error writing output stream at org.apache.cassandra.tools.SSTableExport.checkStream(SSTableExport.java:82) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:344) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:369) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:382) at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:467) The above error doesn't give the user any clue as to what happened/why it errored. It turns out, the above, can result from running out of disk space, which can happen if you're trying to write out the json of a very large sstable. sstable2jon should let the user know they he/she might be out of disk space. 
> sstable2json should let the user know that failure might have occurred due to > lack of disk space > > > Key: CASSANDRA-12204 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12204 > Project: Cassandra > Issue Type: Improvement >Reporter: Thanh >Priority: Minor > > $ sstable2json testks_0101-datatable-ka-61613-Data.db > ~/json61613 > java.io.IOException: Error writing output stream > at > org.apache.cassandra.tools.SSTableExport.checkStream(SSTableExport.java:82) > at > org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:344) > at > org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:369) > at > org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:382) > at > org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:467) > The above error doesn't give the user any clue as to what happened/why it > errored. > It turns out, the above, can result from running out of disk space, which can > happen if you're trying to write out the json of a very large sstable. > sstable2jon should let the user know that he/she might be out of disk space. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11198) Materialized view inconsistency
[ https://issues.apache.org/jira/browse/CASSANDRA-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377293#comment-15377293 ] Benjamin Roth commented on CASSANDRA-11198: --- I have a case to reproduce inconsistencies reliably. I have the following keyspace: https://gist.github.com/brstgt/9e14373a0d9847cde28395d228fc0ce9 Table visits_in is populated with 99 x 99 records of 100 visitors visiting each other except oneself. So "SELECT count(*) FROM visits_out_mv WHERE user_id_visitor = $x" should always return 99 and "SELECT count(*) FROM visits_in WHERE user_id = $x" also should return 99. I have a little PHP script to drop + create MVs and check the state: https://gist.github.com/brstgt/c1ecc4f29e8be10cc1f7917829a75ea8 And this is the output: https://gist.github.com/brstgt/690732a194ba50c0ffaf652f051bce2c No matter how often I run that script, the output remains similar. Results:
- View visits_out_mv shows the correct result after some time (as expected)
- View visits_out_mv2 NEVER shows the correct result, no matter how long I wait
- Apparently there are never any views in build, although the view is obviously not ready (see SELECT count(*) FROM system.views_builds_in_progress)
The tests run on a c* cluster with 3 nodes on 3.0.6 and 3 nodes on 3.0.8. I observed this behaviour also in our production environment with millions of records and hours of waiting for correct results. Even repair on the base table does not help.
I hope this is the right ticket for that comment, otherwise let me know :) And let me know if I can help to provide more information > Materialized view inconsistency > --- > > Key: CASSANDRA-11198 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11198 > Project: Cassandra > Issue Type: Bug >Reporter: Gábor Auth >Assignee: Carl Yeksigian > Attachments: CASSANDRA-11198.trace > > > Here is a materialized view: > {code} > > DESCRIBE MATERIALIZED VIEW unit_by_transport ; > CREATE MATERIALIZED VIEW unit_by_transport AS > SELECT * > FROM unit > WHERE transportid IS NOT NULL AND type IS NOT NULL > PRIMARY KEY (transportid, id) > WITH CLUSTERING ORDER BY (id ASC) > AND bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > {code} > I cannot reproduce this but sometimes and somehow happened the same issue > (https://issues.apache.org/jira/browse/CASSANDRA-10910): > {code} > > SELECT transportid, id, type FROM unit_by_transport WHERE > > transportid=24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 and > > id=99c05a70-d686-11e5-a169-97287061d5d1; > transportid | id > | type > --+--+-- > 24f90d20-d61f-11e5-9d3c-8fc3ad6906e2 | 99c05a70-d686-11e5-a169-97287061d5d1 > | null > (1 rows) > > SELECT transportid, id, type FROM unit WHERE > > id=99c05a70-d686-11e5-a169-97287061d5d1; > transportid | id | type > -++-- > (0 rows) > {code} -- This message was sent by Atlassian JIRA 
(v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11698) dtest failure in materialized_views_test.TestMaterializedViews.clustering_column_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-11698: Resolution: Fixed Status: Resolved (was: Patch Available) > dtest failure in > materialized_views_test.TestMaterializedViews.clustering_column_test > - > > Key: CASSANDRA-11698 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11698 > Project: Cassandra > Issue Type: Bug >Reporter: Russ Hatch >Assignee: Carl Yeksigian > Labels: dtest > Attachments: node1.log, node1_debug.log, node2.log, node2_debug.log, > node3.log, node3_debug.log > > > recent failure, test has flapped before a while back. > {noformat} > Expecting 2 users, got 1 > {noformat} > http://cassci.datastax.com/job/cassandra-3.0_dtest/688/testReport/materialized_views_test/TestMaterializedViews/clustering_column_test > Failed on CassCI build cassandra-3.0_dtest #688 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12096) dtest failure in consistency_test.TestAccuracy.test_simple_strategy_each_quorum_users
[ https://issues.apache.org/jira/browse/CASSANDRA-12096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-12096: Reviewer: Jim Witschey > dtest failure in > consistency_test.TestAccuracy.test_simple_strategy_each_quorum_users > - > > Key: CASSANDRA-12096 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12096 > Project: Cassandra > Issue Type: Test >Reporter: Sean McCarthy >Assignee: Philip Thompson > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, > node4.log, node4_debug.log, node4_gc.log, node5.log, node5_debug.log, > node5_gc.log > > > example failure: > http://cassci.datastax.com/job/trunk_novnode_dtest/407/testReport/consistency_test/TestAccuracy/test_simple_strategy_each_quorum_users > Failed on CassCI build trunk_novnode_dtest #407 > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/tools.py", line 288, in wrapped > f(obj) > File "/home/automaton/cassandra-dtest/consistency_test.py", line 591, in > test_simple_strategy_each_quorum_users > > self._run_test_function_in_parallel(TestAccuracy.Validation.validate_users, > [self.nodes], [self.rf], combinations) > File "/home/automaton/cassandra-dtest/consistency_test.py", line 543, in > _run_test_function_in_parallel > assert False, err.message > 'Error from server: code=2200 [Invalid query] message="unconfigured table > users" > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12096) dtest failure in consistency_test.TestAccuracy.test_simple_strategy_each_quorum_users
[ https://issues.apache.org/jira/browse/CASSANDRA-12096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-12096: Status: Patch Available (was: Open) https://github.com/riptano/cassandra-dtest/pull/1096 > dtest failure in > consistency_test.TestAccuracy.test_simple_strategy_each_quorum_users > - > > Key: CASSANDRA-12096 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12096 > Project: Cassandra > Issue Type: Test >Reporter: Sean McCarthy >Assignee: Philip Thompson > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, > node4.log, node4_debug.log, node4_gc.log, node5.log, node5_debug.log, > node5_gc.log > > > example failure: > http://cassci.datastax.com/job/trunk_novnode_dtest/407/testReport/consistency_test/TestAccuracy/test_simple_strategy_each_quorum_users > Failed on CassCI build trunk_novnode_dtest #407 > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/tools.py", line 288, in wrapped > f(obj) > File "/home/automaton/cassandra-dtest/consistency_test.py", line 591, in > test_simple_strategy_each_quorum_users > > self._run_test_function_in_parallel(TestAccuracy.Validation.validate_users, > [self.nodes], [self.rf], combinations) > File "/home/automaton/cassandra-dtest/consistency_test.py", line 543, in > _run_test_function_in_parallel > assert False, err.message > 'Error from server: code=2200 [Invalid query] message="unconfigured table > users" > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-12096) dtest failure in consistency_test.TestAccuracy.test_simple_strategy_each_quorum_users
[ https://issues.apache.org/jira/browse/CASSANDRA-12096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson reassigned CASSANDRA-12096: --- Assignee: Philip Thompson (was: DS Test Eng) > dtest failure in > consistency_test.TestAccuracy.test_simple_strategy_each_quorum_users > - > > Key: CASSANDRA-12096 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12096 > Project: Cassandra > Issue Type: Test >Reporter: Sean McCarthy >Assignee: Philip Thompson > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, > node4.log, node4_debug.log, node4_gc.log, node5.log, node5_debug.log, > node5_gc.log > > > example failure: > http://cassci.datastax.com/job/trunk_novnode_dtest/407/testReport/consistency_test/TestAccuracy/test_simple_strategy_each_quorum_users > Failed on CassCI build trunk_novnode_dtest #407 > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/tools.py", line 288, in wrapped > f(obj) > File "/home/automaton/cassandra-dtest/consistency_test.py", line 591, in > test_simple_strategy_each_quorum_users > > self._run_test_function_in_parallel(TestAccuracy.Validation.validate_users, > [self.nodes], [self.rf], combinations) > File "/home/automaton/cassandra-dtest/consistency_test.py", line 543, in > _run_test_function_in_parallel > assert False, err.message > 'Error from server: code=2200 [Invalid query] message="unconfigured table > users" > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377102#comment-15377102 ] Philip Thompson commented on CASSANDRA-11465: - So, I would argue we aren't actually introducing "known pending another change", right? That seems to be the state that every failing test is in, where the cause of the failure is a C* limitation? As long as we do this in a way that doesn't accidentally lead to reverting the new coverage and forgetting to restore it, I don't feel too strongly. > dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test > -- > > Key: CASSANDRA-11465 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11465 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Stefania > Labels: dtest > > Failing on the following assert, on trunk only: > {{self.assertEqual(len(errs[0]), 1)}} > Is not failing consistently. > example failure: > http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test > Failed on CassCI build trunk_dtest #1087 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9054) Break DatabaseDescriptor up into multiple classes.
[ https://issues.apache.org/jira/browse/CASSANDRA-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Blake Eggleston updated CASSANDRA-9054: --- Reviewer: Blake Eggleston (was: Aleksey Yeschenko) > Break DatabaseDescriptor up into multiple classes. > -- > > Key: CASSANDRA-9054 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9054 > Project: Cassandra > Issue Type: Improvement >Reporter: Jeremiah Jordan >Assignee: Robert Stupp > Fix For: 3.x > > > Right now to get at Config stuff you go through DatabaseDescriptor. But when > you instantiate DatabaseDescriptor it actually opens system tables and such, > which triggers commit log replays, and other things if the right flags aren't > set ahead of time. This makes getting at config stuff from tools annoying, > as you have to be very careful about instantiation orders. > It would be nice if we could break DatabaseDescriptor up into multiple > classes, so that getting at config stuff from tools wasn't such a pain. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator
[ https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377074#comment-15377074 ] Ariel Weisberg commented on CASSANDRA-9318: --- I incorrectly thought request threads block until all responses are received. They only block until enough responses to satisfy the consistency level are received. So running out of request threads is not always going to be an issue, unless you have enough slow nodes involved in the request. So rate limiting will work in that it can artificially increase your CL (not really but still) to reduce throughput and avoid timeouts. This will also have the effect of preventing the coordinator from using more memory because request threads can't meet their CL and move on to process new requests. Instead they will block in a rate limiter. My measurements showed that you can provision enough memory to let the timeout kick in so I am not sure that is a useful behavior. Sure it eliminates timeouts, but if that is the end goal maybe we need a consistency level that is something like CL.ALL but tolerates unavailable nodes. That would have the same effect without rate limiting. It's still a partial solution because you can't write at full speed to ranges that don't contain slow nodes. > Bound the number of in-flight requests at the coordinator > - > > Key: CASSANDRA-9318 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9318 > Project: Cassandra > Issue Type: Improvement > Components: Local Write-Read Paths, Streaming and Messaging >Reporter: Ariel Weisberg >Assignee: Sergio Bossa > Attachments: 9318-3.0-nits-trailing-spaces.patch, backpressure.png, > limit.btm, no_backpressure.png > > > It's possible to somewhat bound the amount of load accepted into the cluster > by bounding the number of in-flight requests and request bytes. 
> An implementation might do something like track the number of outstanding > bytes and requests and if it reaches a high watermark disable read on client > connections until it goes back below some low watermark. > Need to make sure that disabling read on the client connection won't > introduce other issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
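The high/low watermark scheme the ticket describes can be sketched roughly as follows. This is a hypothetical illustration of the idea, not Cassandra's implementation; the class and method names are invented:

```python
class InFlightLimiter:
    """Track outstanding request bytes; toggle client reads at watermarks."""

    def __init__(self, high_watermark, low_watermark):
        assert low_watermark < high_watermark
        self.high = high_watermark
        self.low = low_watermark
        self.outstanding = 0
        self.reads_enabled = True

    def on_request(self, size):
        # Called when a request is admitted from a client connection.
        self.outstanding += size
        if self.reads_enabled and self.outstanding >= self.high:
            # Stop reading from client sockets until we drain below low.
            self.reads_enabled = False

    def on_response(self, size):
        # Called when enough replicas have responded and the request completes.
        self.outstanding -= size
        if not self.reads_enabled and self.outstanding <= self.low:
            self.reads_enabled = True
```

The gap between the two watermarks gives hysteresis, so the coordinator does not rapidly toggle reads on and off around a single threshold.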
[jira] [Updated] (CASSANDRA-12181) Include table name in "Cannot get comparator" exception
[ https://issues.apache.org/jira/browse/CASSANDRA-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-12181: Reviewer: Robert Stupp > Include table name in "Cannot get comparator" exception > --- > > Key: CASSANDRA-12181 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12181 > Project: Cassandra > Issue Type: Improvement >Reporter: sankalp kohli >Assignee: sankalp kohli >Priority: Trivial > Attachments: CASSANDRA-12181_3.0.txt > > > Having table name will help in debugging the following exception. > ERROR [MutationStage:xx] CassandraDaemon.java (line 199) Exception in thread > Thread[MutationStage:3788,5,main] > clusterName=itms8shared20 > java.lang.RuntimeException: Cannot get comparator 2 in > org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type). > > This might be due to a mismatch between the schema and the data read -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12189) $$ escaped string literals are not handled correctly in cqlsh
[ https://issues.apache.org/jira/browse/CASSANDRA-12189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-12189: Reviewer: Alex Petrov > $$ escaped string literals are not handled correctly in cqlsh > - > > Key: CASSANDRA-12189 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12189 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Mike Adamson >Assignee: Mike Adamson > Fix For: 3.x > > > The syntax rules for pg ($$) escaped string literals in cqlsh do not match > the lexer rule for this type in Lexer.g. > The {{unclosedPgString}} rule is not correctly matching pg string literals in > multi-line statements so: > {noformat} > INSERT INTO test.test (id) values ( > ...$$ > {noformat} > fails with a syntax error at the forward slash. > Both {{pgStringLiteral}} and {{unclosedPgString}} fail with the following > string > {noformat} > $$a$b$$ > {noformat} > where this is allowed by the CQL lexer rule. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
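For reference, a pg-style `$$` literal of the kind discussed above can be recognized by treating a lone `$` as ordinary body content, so that `$$a$b$$` is accepted and only a `$$` pair terminates the string. This is a sketch of the lexing rule, not cqlsh's actual grammar:

```python
import re

# $$ ... $$ where the body may contain single $ characters and newlines,
# but not the $$ terminator itself. A negative lookahead lets a lone $
# through while stopping at the closing $$ pair.
PG_STRING = re.compile(r"\$\$(?:[^$]|\$(?!\$))*\$\$")

def is_pg_string(text):
    """Return True if the whole text is one pg-style escaped string."""
    return PG_STRING.fullmatch(text) is not None
```

Because `[^$]` matches newlines inside a character class, the same rule covers the multi-line INSERT case from the report.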
[3/4] cassandra git commit: Fix potential deadlock in CDC state tracking
Fix potential deadlock in CDC state tracking Patch by jmckenzie; reviewed by cyeksigian for CASSANDRA-12198 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/90afc58d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/90afc58d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/90afc58d Branch: refs/heads/trunk Commit: 90afc58d3df912c720aff63de0506019b8b9af48 Parents: e3f9b7a Author: Josh McKenzie Authored: Wed Jul 13 18:30:40 2016 -0400 Committer: Josh McKenzie Committed: Thu Jul 14 10:36:43 2016 -0400 -- src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java | 3 ++- .../cassandra/db/commitlog/CommitLogSegmentManagerCDC.java | 4 ++-- 2 files changed, 4 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/90afc58d/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java -- diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java index 2e97fd5..a1158be 100644 --- a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java +++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java @@ -61,6 +61,7 @@ public abstract class CommitLogSegment FORBIDDEN, CONTAINS } +Object cdcStateLock = new Object(); private final static AtomicInteger nextId = new AtomicInteger(1); private static long replayLimitId; @@ -614,7 +615,7 @@ public abstract class CommitLogSegment return; // Also synchronized in CDCSizeTracker.processNewSegment and .processDiscardedSegment -synchronized(this) +synchronized(cdcStateLock) { if (cdcState == CDCState.CONTAINS && newState != CDCState.CONTAINS) throw new IllegalArgumentException("Cannot transition from CONTAINS to any other state."); http://git-wip-us.apache.org/repos/asf/cassandra/blob/90afc58d/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java -- diff --git 
a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java index 5c6fd3f..04beb20 100644 --- a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java +++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java @@ -187,7 +187,7 @@ public class CommitLogSegmentManagerCDC extends AbstractCommitLogSegmentManager void processNewSegment(CommitLogSegment segment) { // See synchronization in CommitLogSegment.setCDCState -synchronized(segment) +synchronized(segment.cdcStateLock) { segment.setCDCState(defaultSegmentSize() + totalCDCSizeOnDisk() > allowableCDCBytes() ? CDCState.FORBIDDEN @@ -203,7 +203,7 @@ public class CommitLogSegmentManagerCDC extends AbstractCommitLogSegmentManager void processDiscardedSegment(CommitLogSegment segment) { // See synchronization in CommitLogSegment.setCDCState -synchronized(segment) +synchronized(segment.cdcStateLock) { // Add to flushed size before decrementing unflushed so we don't have a window of false generosity if (segment.getCDCState() == CDCState.CONTAINS)
[2/4] cassandra git commit: Fix potential deadlock in CDC state tracking
Fix potential deadlock in CDC state tracking Patch by jmckenzie; reviewed by cyeksigian for CASSANDRA-12198 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/90afc58d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/90afc58d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/90afc58d Branch: refs/heads/cassandra-3.9 Commit: 90afc58d3df912c720aff63de0506019b8b9af48 Parents: e3f9b7a Author: Josh McKenzie Authored: Wed Jul 13 18:30:40 2016 -0400 Committer: Josh McKenzie Committed: Thu Jul 14 10:36:43 2016 -0400 -- src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java | 3 ++- .../cassandra/db/commitlog/CommitLogSegmentManagerCDC.java | 4 ++-- 2 files changed, 4 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/90afc58d/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java -- diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java index 2e97fd5..a1158be 100644 --- a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java +++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java @@ -61,6 +61,7 @@ public abstract class CommitLogSegment FORBIDDEN, CONTAINS } +Object cdcStateLock = new Object(); private final static AtomicInteger nextId = new AtomicInteger(1); private static long replayLimitId; @@ -614,7 +615,7 @@ public abstract class CommitLogSegment return; // Also synchronized in CDCSizeTracker.processNewSegment and .processDiscardedSegment -synchronized(this) +synchronized(cdcStateLock) { if (cdcState == CDCState.CONTAINS && newState != CDCState.CONTAINS) throw new IllegalArgumentException("Cannot transition from CONTAINS to any other state."); http://git-wip-us.apache.org/repos/asf/cassandra/blob/90afc58d/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java -- diff --git 
a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java index 5c6fd3f..04beb20 100644 --- a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java +++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java @@ -187,7 +187,7 @@ public class CommitLogSegmentManagerCDC extends AbstractCommitLogSegmentManager void processNewSegment(CommitLogSegment segment) { // See synchronization in CommitLogSegment.setCDCState -synchronized(segment) +synchronized(segment.cdcStateLock) { segment.setCDCState(defaultSegmentSize() + totalCDCSizeOnDisk() > allowableCDCBytes() ? CDCState.FORBIDDEN @@ -203,7 +203,7 @@ public class CommitLogSegmentManagerCDC extends AbstractCommitLogSegmentManager void processDiscardedSegment(CommitLogSegment segment) { // See synchronization in CommitLogSegment.setCDCState -synchronized(segment) +synchronized(segment.cdcStateLock) { // Add to flushed size before decrementing unflushed so we don't have a window of false generosity if (segment.getCDCState() == CDCState.CONTAINS)
[jira] [Updated] (CASSANDRA-12202) LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-12202: Reviewer: Yuki Morishita > LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0 > --- > > Key: CASSANDRA-12202 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12202 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson > Fix For: 2.1.x, 2.2.x, 3.0.x > > > We actually fixed this for 3.7+ in CASSANDRA-11657, need to backport that fix > to 2.1+ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12180) Should be able to override compaction space check
[ https://issues.apache.org/jira/browse/CASSANDRA-12180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-12180: Reviewer: Marcus Eriksson > Should be able to override compaction space check > - > > Key: CASSANDRA-12180 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12180 > Project: Cassandra > Issue Type: Improvement >Reporter: sankalp kohli >Assignee: sankalp kohli >Priority: Trivial > Attachments: CASSANDRA-12180_3.0.txt > > > If there's not enough space for a compaction it won't do it and print the > exception below. Sometimes we know compaction will free up lot of space since > an ETL job could have inserted a lot of deletes. This override helps in this > case. > ERROR [CompactionExecutor:17] CassandraDaemon.java (line 258) Exception in > thread Thread > [CompactionExecutor:17,1,main] > java.lang.RuntimeException: Not enough space for compaction, estimated > sstables = 1552, expected > write size = 260540558535 > at org.apache.cassandra.db.compaction.CompactionTask.checkAvailableDiskSpace > (CompactionTask.java:306) > at > org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask. > java:106) > at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > at > org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask. > java:60) > at > org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask. 
> java:59) > at > org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run > (CompactionManager.java:198) > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
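The override the ticket asks for amounts to letting an operator bypass the estimated-space check when the compaction is known to purge data (e.g. after an ETL job wrote a large batch of deletes), since the estimate cannot account for that. A hypothetical sketch, with invented names — not the actual patch:

```python
class NotEnoughSpaceError(RuntimeError):
    pass

def check_available_disk_space(expected_write_size, free_bytes,
                               override_space_check=False):
    """Refuse compaction unless there is room, or the operator overrides.

    The estimate assumes compaction output is roughly input-sized; when the
    inputs are mostly tombstones the real output is far smaller, so the
    override lets the compaction proceed anyway.
    """
    if override_space_check:
        return True
    if expected_write_size > free_bytes:
        raise NotEnoughSpaceError(
            "Not enough space for compaction, expected write size = %d"
            % expected_write_size)
    return True
```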
[jira] [Updated] (CASSANDRA-12198) Deadlock in CDC during segment flush
[ https://issues.apache.org/jira/browse/CASSANDRA-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-12198: Status: Ready to Commit (was: Patch Available) > Deadlock in CDC during segment flush > > > Key: CASSANDRA-12198 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12198 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Blocker > Fix For: 3.8 > > > In the patch for CASSANDRA-8844, we added a {{synchronized(this)}} block > inside CommitLogSegment.setCDCState. This introduces the possibility of > deadlock in the following scenario: > # A {{CommitLogSegment.sync()}} call is made (synchronized method) > # A {{CommitLogSegment.allocate}} call from a cdc-enabled write is in flight > and acquires a reference to the Group on appendOrder (the OpOrder in the > Segment) > # {{CommitLogSegment.sync}} hits {{waitForModifications}} which calls > {{appendOrder.awaitNewBarrier}} > # The in-flight write, if changing the state of the segment from > CDCState.PERMITTED to CDCState.CONTAINS, enters {{setCDCState}} and blocks on > synchronized(this) > And neither of them ever comes back. This came up while doing some further > work on CASSANDRA-12148. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
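The scenario above is a classic hold-and-wait: {{sync()}} holds the segment monitor while waiting for in-flight writes to drain, and an in-flight write waits for that same monitor inside {{setCDCState}}. The committed fix gives the CDC state its own lock so the two paths no longer contend. A minimal illustrative sketch of the fixed shape (not the actual Java code):

```python
import threading

class Segment:
    def __init__(self):
        self._monitor = threading.Lock()         # held by sync()
        self._cdc_state_lock = threading.Lock()  # dedicated lock (the fix)
        self.cdc_state = "PERMITTED"

    def sync(self):
        with self._monitor:
            # Waits for in-flight writes to drain (elided). Safe because
            # in-flight writes take _cdc_state_lock, never _monitor.
            pass

    def set_cdc_state(self, new_state):
        # Taking self._monitor here instead would deadlock against sync(),
        # which holds it while waiting for this very write to finish.
        with self._cdc_state_lock:
            if self.cdc_state == "CONTAINS" and new_state != "CONTAINS":
                raise ValueError(
                    "Cannot transition from CONTAINS to any other state.")
            self.cdc_state = new_state
```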
[jira] [Updated] (CASSANDRA-12198) Deadlock in CDC during segment flush
[ https://issues.apache.org/jira/browse/CASSANDRA-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-12198: Resolution: Fixed Status: Resolved (was: Ready to Commit) > Deadlock in CDC during segment flush > > > Key: CASSANDRA-12198 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12198 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Blocker > Fix For: 3.8 > > > In the patch for CASSANDRA-8844, we added a {{synchronized(this)}} block > inside CommitLogSegment.setCDCState. This introduces the possibility of > deadlock in the following scenario: > # A {{CommitLogSegment.sync()}} call is made (synchronized method) > # A {{CommitLogSegment.allocate}} call from a cdc-enabled write is in flight > and acquires a reference to the Group on appendOrder (the OpOrder in the > Segment) > # {{CommitLogSegment.sync}} hits {{waitForModifications}} which calls > {{appendOrder.awaitNewBarrier}} > # The in-flight write, if changing the state of the segment from > CDCState.PERMITTED to CDCState.CONTAINS, enters {{setCDCState}} and blocks on > synchronized(this) > And neither of them ever comes back. This came up while doing some further > work on CASSANDRA-12148. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12198) Deadlock in CDC during segment flush
[ https://issues.apache.org/jira/browse/CASSANDRA-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377031#comment-15377031 ] Joshua McKenzie commented on CASSANDRA-12198: - CI looks good - failures aren't related to this change. Committed to 3.8 branch, also to 3.9 then merged to trunk. > Deadlock in CDC during segment flush > > > Key: CASSANDRA-12198 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12198 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Blocker > Fix For: 3.8 > > > In the patch for CASSANDRA-8844, we added a {{synchronized(this)}} block > inside CommitLogSegment.setCDCState. This introduces the possibility of > deadlock in the following scenario: > # A {{CommitLogSegment.sync()}} call is made (synchronized method) > # A {{CommitLogSegment.allocate}} call from a cdc-enabled write is in flight > and acquires a reference to the Group on appendOrder (the OpOrder in the > Segment) > # {{CommitLogSegment.sync}} hits {{waitForModifications}} which calls > {{appendOrder.awaitNewBarrier}} > # The in-flight write, if changing the state of the segment from > CDCState.PERMITTED to CDCState.CONTAINS, enters {{setCDCState}} and blocks on > synchronized(this) > And neither of them ever comes back. This came up while doing some further > work on CASSANDRA-12148. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[4/4] cassandra git commit: Merge branch 'cassandra-3.9' into trunk
Merge branch 'cassandra-3.9' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/35fbd7bc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/35fbd7bc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/35fbd7bc Branch: refs/heads/trunk Commit: 35fbd7bc5cdf81fc72b55a7c782a91ed509ad076 Parents: 6d8a6bd 90afc58 Author: Josh McKenzie Authored: Thu Jul 14 10:36:57 2016 -0400 Committer: Josh McKenzie Committed: Thu Jul 14 10:36:57 2016 -0400 -- src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java | 3 ++- .../cassandra/db/commitlog/CommitLogSegmentManagerCDC.java | 4 ++-- 2 files changed, 4 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/35fbd7bc/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java --
[1/4] cassandra git commit: Fix potential deadlock in CDC state tracking
Repository: cassandra Updated Branches: refs/heads/cassandra-3.8 5578d3c7b -> 9ae286b2e refs/heads/cassandra-3.9 e3f9b7a3b -> 90afc58d3 refs/heads/trunk 6d8a6bdca -> 35fbd7bc5 Fix potential deadlock in CDC state tracking Patch by jmckenzie; reviewed by cyeksigian for CASSANDRA-12198 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9ae286b2 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9ae286b2 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9ae286b2 Branch: refs/heads/cassandra-3.8 Commit: 9ae286b2eea2d10ff7736b1f0e66700176d0849b Parents: 5578d3c Author: Josh McKenzie Authored: Wed Jul 13 18:30:40 2016 -0400 Committer: Josh McKenzie Committed: Thu Jul 14 10:35:59 2016 -0400 -- src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java | 3 ++- .../cassandra/db/commitlog/CommitLogSegmentManagerCDC.java | 4 ++-- 2 files changed, 4 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/9ae286b2/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java -- diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java index 2e97fd5..a1158be 100644 --- a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java +++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegment.java @@ -61,6 +61,7 @@ public abstract class CommitLogSegment FORBIDDEN, CONTAINS } +Object cdcStateLock = new Object(); private final static AtomicInteger nextId = new AtomicInteger(1); private static long replayLimitId; @@ -614,7 +615,7 @@ public abstract class CommitLogSegment return; // Also synchronized in CDCSizeTracker.processNewSegment and .processDiscardedSegment -synchronized(this) +synchronized(cdcStateLock) { if (cdcState == CDCState.CONTAINS && newState != CDCState.CONTAINS) throw new IllegalArgumentException("Cannot transition from CONTAINS to any other 
state."); http://git-wip-us.apache.org/repos/asf/cassandra/blob/9ae286b2/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java -- diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java index 5c6fd3f..04beb20 100644 --- a/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java +++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogSegmentManagerCDC.java @@ -187,7 +187,7 @@ public class CommitLogSegmentManagerCDC extends AbstractCommitLogSegmentManager void processNewSegment(CommitLogSegment segment) { // See synchronization in CommitLogSegment.setCDCState -synchronized(segment) +synchronized(segment.cdcStateLock) { segment.setCDCState(defaultSegmentSize() + totalCDCSizeOnDisk() > allowableCDCBytes() ? CDCState.FORBIDDEN @@ -203,7 +203,7 @@ public class CommitLogSegmentManagerCDC extends AbstractCommitLogSegmentManager void processDiscardedSegment(CommitLogSegment segment) { // See synchronization in CommitLogSegment.setCDCState -synchronized(segment) +synchronized(segment.cdcStateLock) { // Add to flushed size before decrementing unflushed so we don't have a window of false generosity if (segment.getCDCState() == CDCState.CONTAINS)
[jira] [Comment Edited] (CASSANDRA-12202) LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376990#comment-15376990 ] Marcus Eriksson edited comment on CASSANDRA-12202 at 7/14/16 2:18 PM: -- ||branch||testall|| |[marcuse/12202|https://github.com/krummas/cassandra/tree/marcuse/12202]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-testall]| |[marcuse/12202-2.2|https://github.com/krummas/cassandra/tree/marcuse/12202-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-2.2-testall]| |[marcuse/12202-3.0|https://github.com/krummas/cassandra/tree/marcuse/12202-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-3.0-testall]| was (Author: krummas): ||branch||testall||dtest|| |[marcuse/12202|https://github.com/krummas/cassandra/tree/marcuse/12202]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-dtest]| |[marcuse/12202-2.2|https://github.com/krummas/cassandra/tree/marcuse/12202-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-2.2-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-2.2-dtest]| |[marcuse/12202-3.0|https://github.com/krummas/cassandra/tree/marcuse/12202-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-3.0-dtest]| > LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0 > --- > > Key: CASSANDRA-12202 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12202 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson > Fix For: 2.1.x, 2.2.x, 3.0.x > > > We actually fixed this for 3.7+ in CASSANDRA-11657, need to backport that 
fix > to 2.1+ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12202) LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-12202: Status: Patch Available (was: Open) ||branch||testall||dtest|| |[marcuse/12202|https://github.com/krummas/cassandra/tree/marcuse/12202]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-dtest]| |[marcuse/12202-2.2|https://github.com/krummas/cassandra/tree/marcuse/12202-2.2]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-2.2-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-2.2-dtest]| |[marcuse/12202-3.0|https://github.com/krummas/cassandra/tree/marcuse/12202-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-3.0-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/krummas/job/krummas-marcuse-12202-3.0-dtest]| > LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0 > --- > > Key: CASSANDRA-12202 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12202 > Project: Cassandra > Issue Type: Bug >Reporter: Marcus Eriksson >Assignee: Marcus Eriksson > Fix For: 2.1.x, 2.2.x, 3.0.x > > > We actually fixed this for 3.7+ in CASSANDRA-11657, need to backport that fix > to 2.1+ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12203) AssertionError on compaction after upgrade (2.1.9 -> 3.7)
[ https://issues.apache.org/jira/browse/CASSANDRA-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman S. Borschel updated CASSANDRA-12203: -- Description: After upgrading a Cassandra cluster from 2.1.9 to 3.7, one column family (using SizeTieredCompaction) repeatedly and continuously failed compaction (and thus also repair) across the cluster, with all nodes producing the following errors in the logs: {noformat} 016-07-14T09:29:47.96855 |srv=cassandra|ERROR: Exception in thread Thread[CompactionExecutor:3,1,main] 2016-07-14T09:29:47.96858 |srv=cassandra|java.lang.AssertionError: null 2016-07-14T09:29:47.96859 |srv=cassandra| at org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$TombstoneTracker.openNew(UnfilteredDeserializer.java:650) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96860 |srv=cassandra| at org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:423) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96860 |srv=cassandra| at org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:298) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96860 |srv=cassandra| at org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:133) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96861 |srv=cassandra| at org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:57) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96861 |srv=cassandra| at org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:334) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96862 |srv=cassandra| at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48) ~[apache-cassandra-3.7.jar:3.7] 
2016-07-14T09:29:47.96862 |srv=cassandra| at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:70) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96863 |srv=cassandra| at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:109) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96863 |srv=cassandra| at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:100) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96864 |srv=cassandra| at org.apache.cassandra.utils.MergeIterator$Candidate.consume(MergeIterator.java:408) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96864 |srv=cassandra| at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:203) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96865 |srv=cassandra| at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96865 |srv=cassandra| at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96866 |srv=cassandra| at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96866 |srv=cassandra| at org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96867 |srv=cassandra| at org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96867 |srv=cassandra| at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96867 |srv=cassandra| at 
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96868 |srv=cassandra| at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:82) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96868 |srv=cassandra| at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96869 |srv=cassandra| at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96870 |srv=cassandra| at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_91] 2016-07-14T09:29:47.96870 |srv=cassandra| at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_91] 2016-07-14T09:29:47.9687
[jira] [Created] (CASSANDRA-12203) AssertionError on compaction after upgrade (2.1.9 -> 3.7)
Roman S. Borschel created CASSANDRA-12203: - Summary: AssertionError on compaction after upgrade (2.1.9 -> 3.7) Key: CASSANDRA-12203 URL: https://issues.apache.org/jira/browse/CASSANDRA-12203 Project: Cassandra Issue Type: Bug Components: Compaction Environment: Cassandra 3.7 (upgrade from 2.1.9) Java version "1.8.0_91" Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-83-generic x86_64) Reporter: Roman S. Borschel After upgrading a Cassandra cluster from 2.1.9 to 3.7, one column family (using SizeTieredCompaction) repeatedly and continuously fails compaction (and thus also repair) across the cluster, with all nodes producing the following errors in the logs: {noformat} 016-07-14T09:29:47.96855 |srv=cassandra|ERROR: Exception in thread Thread[CompactionExecutor:3,1,main] 2016-07-14T09:29:47.96858 |srv=cassandra|java.lang.AssertionError: null 2016-07-14T09:29:47.96859 |srv=cassandra| at org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$TombstoneTracker.openNew(UnfilteredDeserializer.java:650) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96860 |srv=cassandra| at org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:423) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96860 |srv=cassandra| at org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:298) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96860 |srv=cassandra| at org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:133) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96861 |srv=cassandra| at org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:57) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96861 |srv=cassandra| at org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:334) 
~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96862 |srv=cassandra| at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96862 |srv=cassandra| at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:70) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96863 |srv=cassandra| at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:109) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96863 |srv=cassandra| at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:100) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96864 |srv=cassandra| at org.apache.cassandra.utils.MergeIterator$Candidate.consume(MergeIterator.java:408) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96864 |srv=cassandra| at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:203) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96865 |srv=cassandra| at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96865 |srv=cassandra| at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96866 |srv=cassandra| at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96866 |srv=cassandra| at org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96867 |srv=cassandra| at org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226) ~[apache-cassandra-3.7.jar:3.7] 
2016-07-14T09:29:47.96867 |srv=cassandra| at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96867 |srv=cassandra| at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96868 |srv=cassandra| at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:82) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96868 |srv=cassandra| at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60) ~[apache-cassandra-3.7.jar:3.7] 2016-07-14T09:29:47.96869 |srv=cassandra| at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:2
[jira] [Commented] (CASSANDRA-9318) Bound the number of in-flight requests at the coordinator
[ https://issues.apache.org/jira/browse/CASSANDRA-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376955#comment-15376955 ] Ariel Weisberg commented on CASSANDRA-9318: --- There really isn't much memory to play with when deciding when to backpressure. There are 128 request threads and once those are all consumed by a slow node, which doesn't take long in a small cluster, things stall completely. If things were async you might be able to commit enough memory that requests time out before you need to stall. In other words you can shed via timeouts to nodes and no additional mechanisms are needed. Not reading from clients doesn't address the issue. You have still created a situation in which nodes that are performing well can't make progress because they can no longer read requests from clients because of one slow node. Not reading from clients is the current implementation. Hinting as it works now doesn't address the issue because the slow node may never actually catch up or become faster. Waiting for every request that is going to time out to time out and be hinted is going to restrict the coordinator's ability to coordinate. Hinting also doesn't work because there are only 128 concurrent requests that can be in the process of being hinted; see paragraph #1. If the coordinator wants to continue to make progress it has to read requests from clients and then quickly know if it should shed them. We could shed them silently, in which case the upstream client is going to time out and it's going to exhaust its memory or thread pool and we have silently and unfixably moved the problem upstream. I suppose clients can try to implement their own health metrics to duplicate the work we are doing at the coordinator, but it still can't force the coordinator to shed so the client can replace those requests that won't succeed with ones that will. 
Or we can signal that we aren't going to do that request at this time and the client can engage whatever mitigation strategy it wants to implement. There is a whole separate discussion about what the state of the art needs to be in client drivers to do something useful with this information and how to expose the mechanism and policy choices to applications. Rate limiting isn't really useful. You just end up with all the request threads stuck in the rate limiter and coordinators continue to not make progress. Rate limiting doesn't solve a load issue at the remote end because as I've demonstrated the remote end can buffer up enough requests until shedding kicks in due to the timeout and reduces memory utilization to something the heap can handle. If things were async what would rate limiting look like? Would it be disabling read for clients? How is the coordinator going to make progress then if it can't coordinate requests for healthy nodes? > Bound the number of in-flight requests at the coordinator > - > > Key: CASSANDRA-9318 > URL: https://issues.apache.org/jira/browse/CASSANDRA-9318 > Project: Cassandra > Issue Type: Improvement > Components: Local Write-Read Paths, Streaming and Messaging >Reporter: Ariel Weisberg >Assignee: Sergio Bossa > Attachments: 9318-3.0-nits-trailing-spaces.patch, backpressure.png, > limit.btm, no_backpressure.png > > > It's possible to somewhat bound the amount of load accepted into the cluster > by bounding the number of in-flight requests and request bytes. > An implementation might do something like track the number of outstanding > bytes and requests and if it reaches a high watermark disable read on client > connections until it goes back below some low watermark. > Need to make sure that disabling read on the client connection won't > introduce other issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
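The high/low watermark idea from the issue description above can be sketched roughly as follows. This is an illustrative sketch only, not code from any Cassandra branch; all names (`InflightLimiter`, `onRequest`, `onComplete`) are hypothetical. The two thresholds give hysteresis: reads stop at the high watermark and resume only once the outstanding byte count has drained below the low watermark, avoiding rapid pause/resume flapping.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of watermark-based coordinator backpressure:
// track outstanding request bytes; pause reading from client
// connections above the high watermark, resume below the low one.
class InflightLimiter {
    private final long highWatermark;
    private final long lowWatermark;
    private final AtomicLong outstandingBytes = new AtomicLong();
    private volatile boolean paused = false;

    InflightLimiter(long high, long low) {
        highWatermark = high;
        lowWatermark = low;
    }

    /** Called when a request is read off a client connection. */
    void onRequest(long bytes) {
        if (outstandingBytes.addAndGet(bytes) >= highWatermark)
            paused = true; // stop reading from clients
    }

    /** Called when a response is flushed (or the request is shed). */
    void onComplete(long bytes) {
        if (outstandingBytes.addAndGet(-bytes) <= lowWatermark)
            paused = false; // drained far enough: resume reading
    }

    boolean readPaused() {
        return paused;
    }
}
```

Note the hysteresis in action: after crossing the high watermark, completing work that leaves the count between the two watermarks keeps reads paused.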
[jira] [Comment Edited] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376884#comment-15376884 ] Alex Petrov edited comment on CASSANDRA-12193 at 7/14/16 1:29 PM: -- So far could narrow it down to {{COMPACT STORAGE}}, doesn't happen without it. The rows should be written in the following format: {code} [f47ac10b-58cc-4372-a567-0e02b2c3d479]@0 Row[info=[ts=-9223372036854775808] ]: STATIC | [age=33 ts=1468502633286827], [firstname=Samwise ts=1468502633286827], [lastname=Gamgee ts=1468502633286827] [550e8400-e29b-41d4-a716-44665544]@72 Row[info=[ts=-9223372036854775808] ]: STATIC | [age=32 ts=1468502622494928], [firstname=Frodo ts=1468502622494928], [lastname=Baggins ts=1468502622494928] {code} however they're written as: {code} [f47ac10b-58cc-4372-a567-0e02b2c3d479]@0 Row[info=[ts=-9223372036854775808] ]: STATIC | [age=33 ts=1468486730953039], [firstname=Samwise ts=1468486730953039], [lastname=Gamgee ts=1468486730953039] [f47ac10b-58cc-4372-a567-0e02b2c3d479]@0 Row[info=[ts=-9223372036854775808] ]: age | [value=0021 ts=1468486730953039] [f47ac10b-58cc-4372-a567-0e02b2c3d479]@78 Row[info=[ts=-9223372036854775808] ]: firstname | [value=53616d77697365 ts=1468486730953039] [f47ac10b-58cc-4372-a567-0e02b2c3d479]@103 Row[info=[ts=-9223372036854775808] ]: lastname | [value=47616d676565 ts=1468486730953039] [550e8400-e29b-41d4-a716-44665544]@127 Row[info=[ts=-9223372036854775808] ]: STATIC | [age=32 ts=1468486730937516], [firstname=Frodo ts=1468486730937516], [lastname=Baggins ts=1468486730937516] [550e8400-e29b-41d4-a716-44665544]@127 Row[info=[ts=-9223372036854775808] ]: age | [value=0020 ts=1468486730937516] [550e8400-e29b-41d4-a716-44665544]@200 Row[info=[ts=-9223372036854775808] ]: firstname | [value=46726f646f ts=1468486730937516] [550e8400-e29b-41d4-a716-44665544]@222 Row[info=[ts=-9223372036854775808] ]: lastname | [value=42616767696e73 ts=1468486730937516] {code} was (Author: ifesdjeen): So 
far could narrow it down to {{COMPACT STORAGE}}, doesn't happen without it. > dtest failure in > upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test > -- > > Key: CASSANDRA-12193 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12193 > Project: Cassandra > Issue Type: Bug >Reporter: Sean McCarthy >Assignee: Alex Petrov > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node3.log > > > example failure: > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test > Failed on CassCI build upgrade_tests-all #59 > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line > 146, in noncomposite_static_cf_test > [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]) > File "/home/automaton/cassandra-dtest/assertions.py", line 162, in > assert_all > assert list_res == expected, "Expected {} from {}, but got > {}".format(expected, query, list_res) > "Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', > 'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', > 'Baggins']] from SELECT * FROM users, but got > [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], > [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], > [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], > [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], > [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], > [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']] > {code} > Related failure: > 
http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/ > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/ > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/ > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_head_trunk/noncomposite_static_cf_test/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12193) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376884#comment-15376884 ] Alex Petrov commented on CASSANDRA-12193: - So far could narrow it down to {{COMPACT STORAGE}}, doesn't happen without it. > dtest failure in > upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x.noncomposite_static_cf_test > -- > > Key: CASSANDRA-12193 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12193 > Project: Cassandra > Issue Type: Bug >Reporter: Sean McCarthy >Assignee: Alex Petrov > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node3.log > > > example failure: > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_indev_3_0_x/noncomposite_static_cf_test > Failed on CassCI build upgrade_tests-all #59 > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line > 146, in noncomposite_static_cf_test > [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']]) > File "/home/automaton/cassandra-dtest/assertions.py", line 162, in > assert_all > assert list_res == expected, "Expected {} from {}, but got > {}".format(expected, query, list_res) > "Expected [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', > 'Gamgee'], [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', > 'Baggins']] from SELECT * FROM users, but got > [[UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], > [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], > [UUID('f47ac10b-58cc-4372-a567-0e02b2c3d479'), 33, 'Samwise', 'Gamgee'], > [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], > [UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins'], > 
[UUID('550e8400-e29b-41d4-a716-44665544'), 32, 'Frodo', 'Baggins']] > {code} > Related failure: > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_head_trunk/noncomposite_static_cf_test/ > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes2RF1_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/ > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_2_x_To_indev_3_0_x/noncomposite_static_cf_test/ > http://cassci.datastax.com/job/upgrade_tests-all/59/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_2_1_x_To_head_trunk/noncomposite_static_cf_test/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-12202) LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0
Marcus Eriksson created CASSANDRA-12202: --- Summary: LongLeveledCompactionStrategyTest flapping in 2.1, 2.2, 3.0 Key: CASSANDRA-12202 URL: https://issues.apache.org/jira/browse/CASSANDRA-12202 Project: Cassandra Issue Type: Bug Reporter: Marcus Eriksson Assignee: Marcus Eriksson Fix For: 2.1.x, 2.2.x, 3.0.x We actually fixed this for 3.7+ in CASSANDRA-11657, need to backport that fix to 2.1+ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12198) Deadlock in CDC during segment flush
[ https://issues.apache.org/jira/browse/CASSANDRA-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376846#comment-15376846 ] Carl Yeksigian commented on CASSANDRA-12198: Updated branch looks good; kicked off new CI tests. +1 once cassci is happy > Deadlock in CDC during segment flush > > > Key: CASSANDRA-12198 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12198 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Blocker > Fix For: 3.8 > > > In the patch for CASSANDRA-8844, we added a {{synchronized(this)}} block > inside CommitLogSegment.setCDCState. This introduces the possibility of > deadlock in the following scenario: > # A {{CommitLogSegment.sync()}} call is made (synchronized method) > # A {{CommitLogSegment.allocate}} call from a cdc-enabled write is in flight > and acquires a reference to the Group on appendOrder (the OpOrder in the > Segment) > # {{CommmitLogSegment.sync}} hits {{waitForModifications}} which calls > {{appendOrder.awaitNewBarrier}} > # The in-flight write, if changing the state of the segment from > CDCState.PERMITTED to CDCState.CONTAINS, enters {{setCDCState}} and blocks on > synchronized(this) > And neither of them ever come back. This came up while doing some further > work on CASSANDRA-12148. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
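The four-step scenario in the description above can be reduced to a generic two-party deadlock sketch, with plain locks standing in for the real primitives (all names here are illustrative: `segmentMonitor` for the segment's `synchronized` methods, a latch for the OpOrder barrier). Timed waits are used so the sketch terminates and reports the stall instead of hanging.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative reduction of the sync()/setCDCState deadlock: one thread
// holds the segment monitor and waits for an in-flight writer; the writer
// needs that same monitor to finish.
class DeadlockSketch {
    static boolean demo() {
        ReentrantLock segmentMonitor = new ReentrantLock(); // stands in for synchronized(this)
        CountDownLatch writerDone = new CountDownLatch(1);  // stands in for the OpOrder barrier
        final boolean[] writerGotLock = { false };

        Thread writer = new Thread(() -> {
            try {
                // Step 4: the in-flight write blocks entering setCDCState
                writerGotLock[0] = segmentMonitor.tryLock(200, TimeUnit.MILLISECONDS);
                if (writerGotLock[0]) {
                    segmentMonitor.unlock();
                    writerDone.countDown();
                }
            } catch (InterruptedException ignored) { }
        });

        try {
            segmentMonitor.lock(); // Step 1: sync() is a synchronized method
            writer.start();
            // Step 3: sync() waits for in-flight writes to complete
            boolean syncCompleted = writerDone.await(400, TimeUnit.MILLISECONDS);
            // Join before unlocking so the writer's tryLock deterministically
            // fails: the monitor is held for the writer's whole attempt.
            writer.join();
            segmentMonitor.unlock();
            // Deadlock reproduced: neither side made progress.
            return !syncCompleted && !writerGotLock[0];
        } catch (InterruptedException e) {
            return false;
        }
    }
}
```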
[jira] [Commented] (CASSANDRA-12198) Deadlock in CDC during segment flush
[ https://issues.apache.org/jira/browse/CASSANDRA-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376837#comment-15376837 ] Joshua McKenzie commented on CASSANDRA-12198: - Updated branch to synchronize on a discrete Object rather than the cdcState enum. Apparently synchronizing on an enum in java means synchronization on the underlying Singleton object for the value which is clearly not what we want here. > Deadlock in CDC during segment flush > > > Key: CASSANDRA-12198 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12198 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Blocker > Fix For: 3.8 > > > In the patch for CASSANDRA-8844, we added a {{synchronized(this)}} block > inside CommitLogSegment.setCDCState. This introduces the possibility of > deadlock in the following scenario: > # A {{CommitLogSegment.sync()}} call is made (synchronized method) > # A {{CommitLogSegment.allocate}} call from a cdc-enabled write is in flight > and acquires a reference to the Group on appendOrder (the OpOrder in the > Segment) > # {{CommmitLogSegment.sync}} hits {{waitForModifications}} which calls > {{appendOrder.awaitNewBarrier}} > # The in-flight write, if changing the state of the segment from > CDCState.PERMITTED to CDCState.CONTAINS, enters {{setCDCState}} and blocks on > synchronized(this) > And neither of them ever come back. This came up while doing some further > work on CASSANDRA-12148. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
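The enum pitfall mentioned in the comment above follows from enum constants being JVM-wide singletons: `synchronized (someEnumField)` locks the single constant object, so every instance currently holding that value shares one monitor. A minimal sketch (class and method names hypothetical, not the actual patch):

```java
// Enum constants are singletons, so synchronizing on one is a JVM-wide
// lock shared by every object holding that value -- not a per-instance lock.
enum CDCState { PERMITTED, CONTAINS }

class Segment {
    private final Object cdcStateLock = new Object(); // discrete per-instance lock (the fix)
    private CDCState state = CDCState.PERMITTED;

    CDCState state() { return state; }

    // Broken variant: all Segments in state PERMITTED synchronize on the
    // SAME CDCState.PERMITTED singleton monitor (and the lock target even
    // changes as the field is reassigned).
    void setStateEnumLock(CDCState next) {
        synchronized (state) { state = next; }
    }

    // Fixed variant: each Segment owns its own monitor.
    void setStateObjectLock(CDCState next) {
        synchronized (cdcStateLock) { state = next; }
    }
}
```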
[jira] [Updated] (CASSANDRA-12198) Deadlock in CDC during segment flush
[ https://issues.apache.org/jira/browse/CASSANDRA-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-12198: Reviewer: Carl Yeksigian (was: Branimir Lambov) > Deadlock in CDC during segment flush > > > Key: CASSANDRA-12198 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12198 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Blocker > Fix For: 3.8 > > > In the patch for CASSANDRA-8844, we added a {{synchronized(this)}} block > inside CommitLogSegment.setCDCState. This introduces the possibility of > deadlock in the following scenario: > # A {{CommitLogSegment.sync()}} call is made (synchronized method) > # A {{CommitLogSegment.allocate}} call from a cdc-enabled write is in flight > and acquires a reference to the Group on appendOrder (the OpOrder in the > Segment) > # {{CommmitLogSegment.sync}} hits {{waitForModifications}} which calls > {{appendOrder.awaitNewBarrier}} > # The in-flight write, if changing the state of the segment from > CDCState.PERMITTED to CDCState.CONTAINS, enters {{setCDCState}} and blocks on > synchronized(this) > And neither of them ever come back. This came up while doing some further > work on CASSANDRA-12148. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12198) Deadlock in CDC during segment flush
[ https://issues.apache.org/jira/browse/CASSANDRA-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376818#comment-15376818 ] Joshua McKenzie commented on CASSANDRA-12198: - [~carlyeks] to review since Branimir is out. > Deadlock in CDC during segment flush > > > Key: CASSANDRA-12198 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12198 > Project: Cassandra > Issue Type: Bug >Reporter: Joshua McKenzie >Assignee: Joshua McKenzie >Priority: Blocker > Fix For: 3.8 > > > In the patch for CASSANDRA-8844, we added a {{synchronized(this)}} block > inside CommitLogSegment.setCDCState. This introduces the possibility of > deadlock in the following scenario: > # A {{CommitLogSegment.sync()}} call is made (synchronized method) > # A {{CommitLogSegment.allocate}} call from a cdc-enabled write is in flight > and acquires a reference to the Group on appendOrder (the OpOrder in the > Segment) > # {{CommmitLogSegment.sync}} hits {{waitForModifications}} which calls > {{appendOrder.awaitNewBarrier}} > # The in-flight write, if changing the state of the segment from > CDCState.PERMITTED to CDCState.CONTAINS, enters {{setCDCState}} and blocks on > synchronized(this) > And neither of them ever come back. This came up while doing some further > work on CASSANDRA-12148. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15376814#comment-15376814 ] Joshua McKenzie commented on CASSANDRA-11465: - I'd say revert and then re-commit w/11850. That way we can continue forward with just "working | flaky" as our two test states rather than potentially introducing "known pending another change". Seem reasonable? > dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test > -- > > Key: CASSANDRA-11465 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11465 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Stefania > Labels: dtest > > Failing on the following assert, on trunk only: > {{self.assertEqual(len(errs[0]), 1)}} > Is not failing consistently. > example failure: > http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test > Failed on CassCI build trunk_dtest #1087 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-11713) Add ability to log thread dump when NTR pool is blocked
[ https://issues.apache.org/jira/browse/CASSANDRA-11713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-11713: Status: Patch Available (was: Open) > Add ability to log thread dump when NTR pool is blocked > --- > > Key: CASSANDRA-11713 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11713 > Project: Cassandra > Issue Type: Improvement > Components: Observability >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > Attachments: ThreadDumper.png > > > Thread dumps are very useful for troubleshooting Native-Transport-Requests > contention issues like CASSANDRA-11363 and CASSANDRA-11529. > While they could be generated externally with {{jstack}}, sometimes the > conditions are transient and it's hard to catch the exact moment when they > happen, so it could be useful to generate and log them upon user request when > certain internal condition happens. > I propose adding a {{logThreadDumpOnNextContention}} flag to {{SEPExecutor}} > that when enabled via JMX generates and logs a single thread dump on the > system log when the thread pool queue is full. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
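Generating a thread dump in-process, as the proposal above suggests doing when the NTR pool queue fills, can be done with the standard `ThreadMXBean` API. This sketch shows only the dump mechanics; the trigger wiring and the `logThreadDumpOnNextContention` flag from the ticket are not reproduced here, and the class name is illustrative.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Illustrative in-process thread dump, roughly what a jstack capture gives
// you, built from the platform ThreadMXBean.
class ThreadDumper {
    static String dump() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        StringBuilder sb = new StringBuilder();
        // lockedMonitors/lockedSynchronizers = true includes held locks,
        // which is what makes dumps useful for contention issues.
        for (ThreadInfo ti : mx.dumpAllThreads(true, true))
            sb.append(ti.toString());
        return sb.toString();
    }
}
```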
[jira] [Updated] (CASSANDRA-12107) Fix range scans for table with live static rows
[ https://issues.apache.org/jira/browse/CASSANDRA-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-12107: --- Fix Version/s: 3.9 > Fix range scans for table with live static rows > --- > > Key: CASSANDRA-12107 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12107 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sharvanath Pathak > Fix For: 3.0.9, 3.9 > > Attachments: repro > > > We were seeing some weird behaviour with limit-based scan queries. In > particular, we see the following: > {noformat} > $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM > files WHERE token(uuid) >= token('6b470c3e43ee06d1') limit 2" > Consistency level set to LOCAL_QUORUM. > uuid | system.token(uuid) > --+-- > 6b470c3e43ee06d1 | -9218823070349964862 > 484b091ca97803cd | -8954822859271125729 > (2 rows) > $ cqlsh -k sd -e "consistency local_quorum; SELECT uuid, token(uuid) FROM > files WHERE token(uuid) > token('6b470c3e43ee06d1') limit 1" > Consistency level set to LOCAL_QUORUM. > uuid | system.token(uuid) > --+-- > c348aaec2f1e4b85 | -9218781105444826588 > {noformat} > In this table, uuid is the partition key, and there is a clustering key as well. > So the uuid "c348aaec2f1e4b85" should be the second one in the limit query. > After some investigation, it seems to me like the issue is in the way > DataLimits handles static rows. Here is a patch for trunk > (https://github.com/sharvanath/cassandra/commit/9a460d40e55bd7e3604d987ed4df5c8c2e03ffdc) > which seems to fix it for me. Please take a look; this seems like a pretty > critical issue to me. > I have forked the dtests for it as well. However, since trunk has some > failures already, I'm not fully sure how to infer the results. 
> http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-dtest/ > http://cassci.datastax.com/view/Dev/view/sharvanath/job/sharvanath-fixScan-testall/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
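The suspected shape of the counting mistake can be shown with a toy counter: if a partition's live static row is (wrongly) charged against the LIMIT, the scan emits one fewer live clustering row than requested, consistent with rows going missing from the limit-based results above. This is only an illustration of that shape of bug under that assumption, not the actual DataLimits code; {{LimitCounter}} and {{scan}} are made-up names.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the suspected counting bug: pre-charging a partition's
// static row against LIMIT swallows one live clustering row.
// This is NOT Cassandra's DataLimits; names here are made up.
public final class LimitCounter
{
    static List<String> scan(List<String> liveRows, int limit, boolean chargeStaticRow)
    {
        List<String> out = new ArrayList<>();
        int counted = chargeStaticRow ? 1 : 0; // buggy variant charges the static row
        for (String row : liveRows)
        {
            if (counted >= limit)
                break; // limit hit before all requested live rows were emitted
            out.add(row);
            counted++;
        }
        return out;
    }
}
```

With the static row charged, a LIMIT 2 scan over three live rows returns a single row instead of two, mirroring the under-counting the reporter observed.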
[jira] [Updated] (CASSANDRA-12107) Fix range scans for table with live static rows
[ https://issues.apache.org/jira/browse/CASSANDRA-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-12107: --- Component/s: (was: Core) CQL > Fix range scans for table with live static rows > --- > > Key: CASSANDRA-12107 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12107 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Sharvanath Pathak > Fix For: 3.0.9, 3.9 > > Attachments: repro -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12107) Fix range scans for table with live static rows
[ https://issues.apache.org/jira/browse/CASSANDRA-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-12107: --- Labels: (was: patch-available) > Fix range scans for table with live static rows > --- > > Key: CASSANDRA-12107 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12107 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Sharvanath Pathak > Fix For: 3.0.9 > > Attachments: repro -- This message was sent by Atlassian JIRA (v6.3.4#6332)