[jira] [Commented] (CASSANDRA-8992) CommitLogTest hangs intermittently
[ https://issues.apache.org/jira/browse/CASSANDRA-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14368564#comment-14368564 ] Michael Shuler commented on CASSANDRA-8992: --- I tried starting at commit dd84a29, which was the trunk merge of CASSANDRA-7713, and looping over that commit; the test hung in the same manner on pass #8, and the end of the system.log looks about the same as the attached one. I think we have a test problem.
{noformat}
((dd84a29...))mshuler@hana:~/git/cassandra$ tail -f build/test/logs/system.log
INFO  [main] 2015-03-19 00:54:38,428 Enqueuing flush of local: 200 (0%) on-heap, 0 (0%) off-heap
DEBUG [main] 2015-03-19 00:54:38,429 scheduling flush in 360 ms
INFO  [StorageServiceShutdownHook] 2015-03-19 00:55:33,468 Announcing shutdown
INFO  [StorageServiceShutdownHook] 2015-03-19 00:55:35,469 Waiting for messaging service to quiesce
DEBUG [StorageServiceShutdownHook] 2015-03-19 00:55:35,470 Closing accept() thread
DEBUG [ACCEPT-/127.0.0.1] 2015-03-19 00:55:35,470 Asynchronous close seen by server thread
INFO  [ACCEPT-/127.0.0.1] 2015-03-19 00:55:35,471 MessagingService has terminated the accept() thread
DEBUG [BatchlogTasks:1] 2015-03-19 00:55:36,650 Started replayAllFailedBatches
DEBUG [ScheduledTasks:1] 2015-03-19 00:55:37,639 Disseminating load info ...
DEBUG [ScheduledTasks:1] 2015-03-19 00:56:37,640 Disseminating load info ...
DEBUG [ScheduledTasks:1] 2015-03-19 00:57:37,640 Disseminating load info ...
DEBUG [ScheduledTasks:1] 2015-03-19 00:58:37,641 Disseminating load info ...
{noformat}
CommitLogTest hangs intermittently -- Key: CASSANDRA-8992 URL: https://issues.apache.org/jira/browse/CASSANDRA-8992 Project: Cassandra Issue Type: Bug Components: Tests Environment: trunk HEAD (commit 1279009) Reporter: Michael Shuler Fix For: 3.0 Attachments: system.log CommitLogTest hangs about 20% of the time in trunk. I haven't seen this happen in 2.1 yet, but will have to loop over it to be sure. 
{noformat} 21:26:15 [junit] Testsuite: org.apache.cassandra.db.CommitLogTest 21:46:15 Build timed out (after 20 minutes). Marking the build as aborted. {noformat} I was able to repro locally, looping over the test and have attached the system.log from that repro. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7814) enable describe on indices
[ https://issues.apache.org/jira/browse/CASSANDRA-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14368595#comment-14368595 ] Stefania commented on CASSANDRA-7814: - [~jbellis]: \\ \\ * We can add {{DESCRIBE INDEX table_name idx_name}} which will output the index CQL. Is this all that's required? Omitting the table name would probably require a change in the python driver, as it currently attaches the index metadata to the tables. If we specify the table name, we should only need to change cqlsh. * Note that {{DESCRIBE TABLE table_name}} already outputs the indexes' CQL, at least in 3.0: \\ {code}
cqlsh:test> DESCRIBE TABLE users;

CREATE TABLE test.users (
    user_id text PRIMARY KEY,
    age int,
    first_name text,
    last_name text
) WITH bloom_filter_fp_chance = 0.01
    AND caching = '{keys:ALL, rows_per_partition:NONE}'
    AND comment = ''
    AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32'}
    AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99.0PERCENTILE';

CREATE INDEX age_idx ON test.users (age);
{code} * Are you happy with the fix in 3.0 only, or do you want it in 2.1 or 2.0 as well? enable describe on indices -- Key: CASSANDRA-7814 URL: https://issues.apache.org/jira/browse/CASSANDRA-7814 Project: Cassandra Issue Type: Improvement Components: Core Reporter: radha Assignee: Stefania Priority: Minor Fix For: 3.0 Describe index should be supported; right now, the only way is to export the schema and find what it really is before updating/dropping the index. 
verified in [cqlsh 3.1.8 | Cassandra 1.2.18.1 | CQL spec 3.0.0 | Thrift protocol 19.36.2] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (CASSANDRA-5791) A nodetool command to validate all sstables in a node
[ https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler reopened CASSANDRA-5791: --- trunk HEAD fails all these new VerifyTest unit tests - can we fix this up? {noformat} [junit] Testsuite: org.apache.cassandra.db.VerifyTest [junit] Tests run: 10, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 3.161 sec [junit] [junit] - Standard Output --- [junit] WARN 06:21:48 No host ID found, created 3921d9b6-df80-4a62-95cb-7b4ab506e29b (Note: This should happen exactly once per node). [junit] WARN 06:21:48 No host ID found, created 3921d9b6-df80-4a62-95cb-7b4ab506e29b (Note: This should happen exactly once per node). [junit] - --- [junit] Testcase: testVerifyCorrect(org.apache.cassandra.db.VerifyTest): FAILED [junit] Unexpected CorruptSSTableException [junit] junit.framework.AssertionFailedError: Unexpected CorruptSSTableException [junit] at org.apache.cassandra.db.VerifyTest.testVerifyCorrect(VerifyTest.java:123) [junit] [junit] [junit] Testcase: testVerifyCounterCorrect(org.apache.cassandra.db.VerifyTest): FAILED [junit] Unexpected CorruptSSTableException [junit] junit.framework.AssertionFailedError: Unexpected CorruptSSTableException [junit] at org.apache.cassandra.db.VerifyTest.testVerifyCounterCorrect(VerifyTest.java:145) [junit] [junit] [junit] Testcase: testExtendedVerifyCorrect(org.apache.cassandra.db.VerifyTest):FAILED [junit] Unexpected CorruptSSTableException [junit] junit.framework.AssertionFailedError: Unexpected CorruptSSTableException [junit] at org.apache.cassandra.db.VerifyTest.testExtendedVerifyCorrect(VerifyTest.java:167) [junit] [junit] [junit] Testcase: testExtendedVerifyCounterCorrect(org.apache.cassandra.db.VerifyTest): FAILED [junit] Unexpected CorruptSSTableException [junit] junit.framework.AssertionFailedError: Unexpected CorruptSSTableException [junit] at org.apache.cassandra.db.VerifyTest.testExtendedVerifyCounterCorrect(VerifyTest.java:189) [junit] [junit] [junit] 
Test org.apache.cassandra.db.VerifyTest FAILED {noformat} A nodetool command to validate all sstables in a node - Key: CASSANDRA-5791 URL: https://issues.apache.org/jira/browse/CASSANDRA-5791 Project: Cassandra Issue Type: New Feature Components: Core Reporter: sankalp kohli Assignee: Jeff Jirsa Priority: Minor Fix For: 3.0 Attachments: cassandra-5791-patch-3.diff, cassandra-5791.patch-2 Currently there is no nodetool command to validate all sstables on disk. The only way to do this is to run a repair and see if it succeeds. But we cannot repair the system keyspace. Also we can run upgradesstables, but that rewrites all the sstables. This command should check the hash of all sstables and return whether all data is readable or not. This should NOT care about consistency. The compressed sstables do not have a hash, so not sure how it will work there. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8845) sorted CQLSSTableWriter accept unsorted clustering keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14368609#comment-14368609 ] Pierre N. commented on CASSANDRA-8845: -- Yes, that's what I'm saying: if it's the desired behavior, just the javadoc should be updated and there is no bug. sorted CQLSSTableWriter accept unsorted clustering keys --- Key: CASSANDRA-8845 URL: https://issues.apache.org/jira/browse/CASSANDRA-8845 Project: Cassandra Issue Type: Bug Reporter: Pierre N. Fix For: 2.1.4 Attachments: TestSorted.java The javadoc says: {quote} The SSTable sorted order means that rows are added such that their partition key respect the partitioner order and for a given partition, that *the rows respect the clustering columns order*. public Builder sorted() {quote} It throws an exception when partition keys are in incorrect order; however, it doesn't throw when rows are inserted with clustering keys out of order. It buffers them and sorts them into the correct order. {code}
writer.addRow(1, 3);
writer.addRow(1, 1);
writer.addRow(1, 2);
{code} {code}
$ sstable2json sorted/ks/t1/ks-t1-ka-1-Data.db
[ {key: 1, cells: [[\u\u\u\u0001:,,1424524149557000], [\u\u\u\u0002:,,1424524149557000], [\u\u\u\u0003:,,142452414955]]} ]
{code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
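If the buffering behaviour were instead treated as a bug, sorted mode would need to reject out-of-order clustering keys the same way it already rejects out-of-order partition keys. A minimal, self-contained sketch of such a check (the `SortedOrderChecker` class is hypothetical, illustrating the contract only, not CQLSSTableWriter's actual code):

```java
// Hypothetical sketch: tracks the last (partition key, clustering key) pair
// seen and rejects any row whose clustering key goes backwards within the
// same partition, mirroring what a strict sorted() mode could enforce.
public class SortedOrderChecker {
    private Integer lastPartition;
    private Integer lastClustering;

    public void addRow(int partitionKey, int clusteringKey) {
        if (lastPartition != null && lastPartition == partitionKey
                && lastClustering != null && clusteringKey < lastClustering)
            throw new IllegalArgumentException("clustering key " + clusteringKey
                    + " added out of order after " + lastClustering);
        lastPartition = partitionKey;
        lastClustering = clusteringKey;
    }
}
```

With this contract, the sequence from the report (addRow(1, 3) then addRow(1, 1)) would fail fast instead of being silently buffered and re-sorted.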
[jira] [Comment Edited] (CASSANDRA-5791) A nodetool command to validate all sstables in a node
[ https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368584#comment-14368584 ] Michael Shuler edited comment on CASSANDRA-5791 at 3/19/15 6:30 AM: trunk HEAD fails 4 of these new VerifyTest unit tests - can we fix this up? {noformat} [junit] Testsuite: org.apache.cassandra.db.VerifyTest [junit] Tests run: 10, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 3.161 sec [junit] [junit] - Standard Output --- [junit] WARN 06:21:48 No host ID found, created 3921d9b6-df80-4a62-95cb-7b4ab506e29b (Note: This should happen exactly once per node). [junit] WARN 06:21:48 No host ID found, created 3921d9b6-df80-4a62-95cb-7b4ab506e29b (Note: This should happen exactly once per node). [junit] - --- [junit] Testcase: testVerifyCorrect(org.apache.cassandra.db.VerifyTest): FAILED [junit] Unexpected CorruptSSTableException [junit] junit.framework.AssertionFailedError: Unexpected CorruptSSTableException [junit] at org.apache.cassandra.db.VerifyTest.testVerifyCorrect(VerifyTest.java:123) [junit] [junit] [junit] Testcase: testVerifyCounterCorrect(org.apache.cassandra.db.VerifyTest): FAILED [junit] Unexpected CorruptSSTableException [junit] junit.framework.AssertionFailedError: Unexpected CorruptSSTableException [junit] at org.apache.cassandra.db.VerifyTest.testVerifyCounterCorrect(VerifyTest.java:145) [junit] [junit] [junit] Testcase: testExtendedVerifyCorrect(org.apache.cassandra.db.VerifyTest):FAILED [junit] Unexpected CorruptSSTableException [junit] junit.framework.AssertionFailedError: Unexpected CorruptSSTableException [junit] at org.apache.cassandra.db.VerifyTest.testExtendedVerifyCorrect(VerifyTest.java:167) [junit] [junit] [junit] Testcase: testExtendedVerifyCounterCorrect(org.apache.cassandra.db.VerifyTest): FAILED [junit] Unexpected CorruptSSTableException [junit] junit.framework.AssertionFailedError: Unexpected CorruptSSTableException [junit] at 
org.apache.cassandra.db.VerifyTest.testExtendedVerifyCounterCorrect(VerifyTest.java:189) [junit] [junit] [junit] Test org.apache.cassandra.db.VerifyTest FAILED {noformat} was (Author: mshuler): trunk HEAD fails all these new VerifyTest unit tests - can we fix this up? {noformat} [junit] Testsuite: org.apache.cassandra.db.VerifyTest [junit] Tests run: 10, Failures: 4, Errors: 0, Skipped: 0, Time elapsed: 3.161 sec [junit] [junit] - Standard Output --- [junit] WARN 06:21:48 No host ID found, created 3921d9b6-df80-4a62-95cb-7b4ab506e29b (Note: This should happen exactly once per node). [junit] WARN 06:21:48 No host ID found, created 3921d9b6-df80-4a62-95cb-7b4ab506e29b (Note: This should happen exactly once per node). [junit] - --- [junit] Testcase: testVerifyCorrect(org.apache.cassandra.db.VerifyTest): FAILED [junit] Unexpected CorruptSSTableException [junit] junit.framework.AssertionFailedError: Unexpected CorruptSSTableException [junit] at org.apache.cassandra.db.VerifyTest.testVerifyCorrect(VerifyTest.java:123) [junit] [junit] [junit] Testcase: testVerifyCounterCorrect(org.apache.cassandra.db.VerifyTest): FAILED [junit] Unexpected CorruptSSTableException [junit] junit.framework.AssertionFailedError: Unexpected CorruptSSTableException [junit] at org.apache.cassandra.db.VerifyTest.testVerifyCounterCorrect(VerifyTest.java:145) [junit] [junit] [junit] Testcase: testExtendedVerifyCorrect(org.apache.cassandra.db.VerifyTest):FAILED [junit] Unexpected CorruptSSTableException [junit] junit.framework.AssertionFailedError: Unexpected CorruptSSTableException [junit] at org.apache.cassandra.db.VerifyTest.testExtendedVerifyCorrect(VerifyTest.java:167) [junit] [junit] [junit] Testcase: testExtendedVerifyCounterCorrect(org.apache.cassandra.db.VerifyTest): FAILED [junit] Unexpected CorruptSSTableException [junit] junit.framework.AssertionFailedError: Unexpected CorruptSSTableException [junit] at 
org.apache.cassandra.db.VerifyTest.testExtendedVerifyCounterCorrect(VerifyTest.java:189) [junit] [junit] [junit] Test org.apache.cassandra.db.VerifyTest FAILED {noformat} A nodetool command to validate all sstables in a node - Key: CASSANDRA-5791 URL: https://issues.apache.org/jira/browse/CASSANDRA-5791 Project: Cassandra Issue Type: New Feature Components: Core Reporter: sankalp kohli
[jira] [Commented] (CASSANDRA-8993) EffectiveIndexInterval calculation is incorrect
[ https://issues.apache.org/jira/browse/CASSANDRA-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14370218#comment-14370218 ] Benedict commented on CASSANDRA-8993: - Huh, you're right, it's CQL rows we track. EffectiveIndexInterval calculation is incorrect --- Key: CASSANDRA-8993 URL: https://issues.apache.org/jira/browse/CASSANDRA-8993 Project: Cassandra Issue Type: Bug Components: Core Reporter: Benedict Assignee: Benedict Priority: Blocker Fix For: 2.1.4 Attachments: 8993.txt I'm not familiar enough with the calculation itself to understand why this is happening, but see discussion on CASSANDRA-8851 for the background. I've introduced a test case to look for this during downsampling, but it seems to pass just fine, so it may be an artefact of upgrading. The problem was, unfortunately, not manifesting directly because it would simply result in a failed lookup. This was only exposed when early opening used firstKeyBeyond, which does not use the effective interval, and provided the result to getPosition(). I propose a simple fix that ensures a bug here cannot break correctness. Perhaps [~thobbs] can follow up with an investigation as to how it actually went wrong? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8988) Optimise IntervalTree
[ https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-8988: Priority: Minor (was: Trivial) Optimise IntervalTree - Key: CASSANDRA-8988 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Benedict Priority: Minor Fix For: 3.0 Attachments: 8988.txt We perform a lot of unnecessary comparisons in IntervalTree.IntervalNode.searchInternal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8920) Optimise sequential overlap visitation
[ https://issues.apache.org/jira/browse/CASSANDRA-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-8920: Fix Version/s: (was: 2.1.4) 3.0 Optimise sequential overlap visitation -- Key: CASSANDRA-8920 URL: https://issues.apache.org/jira/browse/CASSANDRA-8920 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Benedict Priority: Minor Fix For: 3.0 Attachments: 8920.txt The IntervalTree only maps partition keys. Since a majority of users deploy a hashed partitioner, the work is mostly wasted, since they will be evenly distributed across the full token range owned by the node - and in some cases it is a significant amount of work. We can perform a corroboration against the file bounds if we get a BF match as a sanity check if we like, but performing an IntervalTree search is significantly more expensive (esp. once murmur hash calculation memoization goes mainstream). In LCS the keys are bounded, so it might appear that it would help, but in this scenario we only compact against like bounds, so again it is not helpful. With a ByteOrderedPartitioner it could potentially be of use, but this is sufficiently rare to not optimise for, IMO. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
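The cheap "corroboration against the file bounds" suggested above amounts to an O(1) range check per sstable after a Bloom filter hit, instead of an IntervalTree search. A self-contained illustration (the `SSTableBounds` type and raw long tokens are invented for this example and are not Cassandra's API):

```java
// Illustrative only: models an sstable by its min/max token bounds and shows
// the O(1) containment check that can stand in for an IntervalTree search
// when a hashed partitioner spreads keys evenly across the token range.
public class SSTableBounds {
    final long firstToken;
    final long lastToken;

    SSTableBounds(long firstToken, long lastToken) {
        this.firstToken = firstToken;
        this.lastToken = lastToken;
    }

    // Intended to run only after a Bloom filter match, as a sanity check
    // on the hit; a miss here means the BF match was a false positive.
    boolean mayContain(long token) {
        return token >= firstToken && token <= lastToken;
    }
}
```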
[jira] [Updated] (CASSANDRA-8988) Optimise IntervalTree
[ https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-8988: Fix Version/s: (was: 2.1.4) 3.0 Optimise IntervalTree - Key: CASSANDRA-8988 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Benedict Priority: Trivial Fix For: 3.0 Attachments: 8988.txt We perform a lot of unnecessary comparisons in IntervalTree.IntervalNode.searchInternal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree
[ https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14370394#comment-14370394 ] Benedict commented on CASSANDRA-8988: - There are a few possibilities for implementing the find method efficiently that I wasn't planning to explore just yet, but since you've asked me to justify myself I've gone ahead and uploaded what I think is probably about as efficient as we can manage (a very minor tweak in fact). To justify the rationale of using the switch statement: If I were expecting a compiler to optimise well, I would put money on the switch being faster than a field access (which any enum method would amount to after inlining). This is because, in principle, the location of the enum itself can be substituted in without having to suffer the indirection costs of field access, and there is only one conditional expression to evaluate when the switch statement has only two possible outcomes. A single branch on a value is typically very fast. Since an improved implementation above either approach was really trivial, though, let's go with that. Optimise IntervalTree - Key: CASSANDRA-8988 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Benedict Priority: Minor Fix For: 3.0 Attachments: 8988.txt We perform a lot of unnecessary comparisons in IntervalTree.IntervalNode.searchInternal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
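The switch-versus-enum-method trade-off described above can be made concrete: with only two constants, a switch compiles down to a single branch on the enum's ordinal, whereas a per-constant overridden method is a virtual dispatch the JIT must devirtualize and inline to match. A generic illustration of the two forms (this is not the AsymmetricOrdering code; the names are invented):

```java
public class EnumDispatch {
    enum Bound { LOWER, UPPER }

    // Switch form: a single conditional branch when there are two constants.
    static int adjustWithSwitch(Bound b, int cmp) {
        switch (b) {
            case LOWER: return cmp < 0 ? cmp : 0;
            default:    return cmp > 0 ? cmp : 0;
        }
    }

    // Enum-method form: dispatch through the constant's overridden method,
    // which amounts to a field/vtable access after inlining.
    enum BoundM {
        LOWER { int adjust(int cmp) { return cmp < 0 ? cmp : 0; } },
        UPPER { int adjust(int cmp) { return cmp > 0 ? cmp : 0; } };
        abstract int adjust(int cmp);
    }
}
```

Both forms compute the same result; the argument in the comment is purely about which one the compiler can turn into cheaper machine code.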
[jira] [Commented] (CASSANDRA-5174) expose nodetool scrub for 2Is
[ https://issues.apache.org/jira/browse/CASSANDRA-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370475#comment-14370475 ] Stefania commented on CASSANDRA-5174: - [~yukim] thank you for your review! Here is the latest commit: https://github.com/apache/cassandra/commit/b018430c17cf15915f9b2c2d4dd4e8689cf058b8 bq. I think we should log only when we are rebuilding index, and change log level to WARN, since default exception handler will log in ERROR anyway. Done; however, I wanted this message to be logged before or together with the Rebuilding index... message, so I passed the {{throwable}} as an argument to {{rebuildOnFailedScrub()}} - see if that's OK. bq. Why don't you handle IllegalArgumentException in its own catch block? Good point, done. bq. dtest also looks good except coding styles (docstring, string format) but I let QA team take a look when you do pull request. This is the pull request; I explicitly asked them to pick up style issues: https://github.com/riptano/cassandra-dtest/pull/200 I have one more question: since I know people can implement custom indexes, are the changes I made to {{SecondaryIndex}} going to cause issues, or is this expected in a major release? expose nodetool scrub for 2Is - Key: CASSANDRA-5174 URL: https://issues.apache.org/jira/browse/CASSANDRA-5174 Project: Cassandra Issue Type: Task Components: Core, Tools Reporter: Jason Brown Assignee: Stefania Priority: Minor Fix For: 3.0 Continuation of CASSANDRA-4464, where many other nodetool operations were added for 2Is. This ticket supports scrub for 2Is and is in its own ticket due to the riskiness of deleting data on a bad bug. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree
[ https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14370302#comment-14370302 ] Branimir Lambov commented on CASSANDRA-8988: I had the impression that the extra search can happen for many nodes on the tree, but on closer analysis I don't think that's really possible. I'm happy with your analysis, your version of the code should be more efficient in the worst case as well as on average. +1 to the patch. I'm curious why you aren't using enum methods instead of switches for selecti/result in AsymmetricOrdering. Optimise IntervalTree - Key: CASSANDRA-8988 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Benedict Priority: Minor Fix For: 3.0 Attachments: 8988.txt We perform a lot of unnecessary comparisons in IntervalTree.IntervalNode.searchInternal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7816) Duplicate DOWN/UP Events Pushed with Native Protocol
[ https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14370414#comment-14370414 ] Stefania commented on CASSANDRA-7816: - Thanks! Duplicate DOWN/UP Events Pushed with Native Protocol Key: CASSANDRA-7816 URL: https://issues.apache.org/jira/browse/CASSANDRA-7816 Project: Cassandra Issue Type: Bug Components: API Reporter: Michael Penick Assignee: Stefania Priority: Minor Fix For: 2.1.4, 2.0.14 Attachments: 7816-v2.0.txt, tcpdump_repeating_status_change.txt, trunk-7816.txt Added MOVED_NODE as a possible type of topology change and also specified that it is possible to receive the same event multiple times. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8085) Make PasswordAuthenticator number of hashing rounds configurable
[ https://issues.apache.org/jira/browse/CASSANDRA-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Tunnicliffe updated CASSANDRA-8085: --- Attachment: 8085-2.1.txt 8085-3.0.txt Added a system property to set the number of rounds, and some validation to ensure a supplied value falls within jbcrypt's accepted bounds. Make PasswordAuthenticator number of hashing rounds configurable Key: CASSANDRA-8085 URL: https://issues.apache.org/jira/browse/CASSANDRA-8085 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Tyler Hobbs Assignee: Sam Tunnicliffe Fix For: 2.1.4 Attachments: 8085-2.1.txt, 8085-3.0.txt Running 2^10 rounds of bcrypt can take a while. In environments (like PHP) where connections are not typically long-lived, authenticating can add substantial overhead. On IRC, one user saw the time to connect, authenticate, and execute a query jump from 5ms to 150ms with authentication enabled ([debug logs|http://pastebin.com/bSUufbr0]). CASSANDRA-7715 is a more complete fix for this, but in the meantime (and even after 7715), this is a good option. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
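The shape of the change described in the update comment - a system property plus bounds validation - can be sketched as below. The property name, default, and the 4..30 range are assumptions for illustration (jBCrypt rejects log-round counts outside roughly that range); the actual patch may differ:

```java
// Sketch of reading a configurable bcrypt work factor from a system
// property and validating it before use. Names and bounds are assumed
// for illustration, not taken from the 8085 patch itself.
public class RoundsConfig {
    static final String PROPERTY = "cassandra.auth_bcrypt_gensalt_log2_rounds"; // hypothetical name
    static final int DEFAULT_LOG_ROUNDS = 10; // 2^10 rounds, the historical default
    static final int MIN_LOG_ROUNDS = 4;      // assumed jbcrypt lower bound
    static final int MAX_LOG_ROUNDS = 30;     // assumed jbcrypt upper bound

    static int getLogRounds() {
        int rounds = Integer.getInteger(PROPERTY, DEFAULT_LOG_ROUNDS);
        if (rounds < MIN_LOG_ROUNDS || rounds > MAX_LOG_ROUNDS)
            throw new IllegalArgumentException(PROPERTY + " must be between "
                    + MIN_LOG_ROUNDS + " and " + MAX_LOG_ROUNDS + ", got " + rounds);
        return rounds;
    }
}
```

Failing fast at startup on an out-of-range value is preferable to letting jbcrypt throw later, mid-authentication.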
[jira] [Commented] (CASSANDRA-8357) ArrayOutOfBounds in cassandra-stress with inverted exponential distribution
[ https://issues.apache.org/jira/browse/CASSANDRA-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368673#comment-14368673 ] Jens Preußner commented on CASSANDRA-8357: -- I should still have the code and will try in 2.1.3, just give me some hours ^^ ArrayOutOfBounds in cassandra-stress with inverted exponential distribution --- Key: CASSANDRA-8357 URL: https://issues.apache.org/jira/browse/CASSANDRA-8357 Project: Cassandra Issue Type: Bug Components: Tools Environment: 6-node cassandra cluster (2.1.1) on debian. Reporter: Jens Preußner Fix For: 2.1.4 When using the CQLstress example from GitHub (https://github.com/apache/cassandra/blob/trunk/tools/cqlstress-example.yaml) with an inverted exponential distribution in the insert-partitions field, generated threads fail with Exception in thread Thread-20 java.lang.ArrayIndexOutOfBoundsException: 20 at org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:307) See the gist https://gist.github.com/jenzopr/9edde53122554729c852 for the typetest.yaml I used. The call was: cassandra-stress user profile=typetest.yaml ops\(insert=1\) -node $NODES -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8851) Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after upgrade to 2.1.3
[ https://issues.apache.org/jira/browse/CASSANDRA-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Björn Hachmann updated CASSANDRA-8851: -- Attachment: cassandra.yaml Hi [~benedict], I have just attached a copy of our cassandra.yaml. As we have not set a partitioner on keyspace-level we use org.apache.cassandra.dht.RandomPartitioner. desc keyspace metrigo_prod returns: CREATE KEYSPACE metrigo_prod WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '2'} AND durable_writes = true; Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after upgrade to 2.1.3 --- Key: CASSANDRA-8851 URL: https://issues.apache.org/jira/browse/CASSANDRA-8851 Project: Cassandra Issue Type: Bug Environment: ubuntu Reporter: Tobias Schlottke Assignee: Benedict Priority: Critical Fix For: 2.1.4 Attachments: cassandra.yaml, schema.txt, system.log.gz Hi there, after upgrading to 2.1.3 we've got the following error every few seconds: {code} WARN [SharedPool-Worker-16] 2015-02-23 10:20:36,392 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-16,5,main]: {} java.lang.AssertionError: null at org.apache.cassandra.io.util.Memory.size(Memory.java:307) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.utils.obs.OffHeapBitSet.capacity(OffHeapBitSet.java:61) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.utils.BloomFilter.indexes(BloomFilter.java:74) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.utils.BloomFilter.isPresent(BloomFilter.java:98) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1366) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1350) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:41) ~[apache-cassandra-2.1.3.jar:2.1.3] at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:185) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:273) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1915) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1748) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:57) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) ~[apache-cassandra-2.1.3.jar:2.1.3] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_45] at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-2.1.3.jar:2.1.3] at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45] {code} This seems to crash the compactions and pushes up server load and piles up compactions. Any idea / possible workaround? Best, Tobias -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8991) CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does.
[ https://issues.apache.org/jira/browse/CASSANDRA-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14368828#comment-14368828 ] Sam Tunnicliffe commented on CASSANDRA-8991: Looks reasonable to me; two minor things though: * We don't use braces for single-statement conditionals (despite that being EVIL), so we should either remove those or (my preference) use ternary syntax. * To avoid future confusion, it would be good to add a brief comment explaining why the method was added as opposed to simply implementing CFStatement#columnFamily() - i.e. because we need columnFamily() to return null, so that SchemaMigrations for dropping an index remain keyspace- rather than table-level events. For example, see SchemaAlteringStatement#execute. Lastly, it's preferred to suffix attached patches with .txt (I think this is to make some browsers display the patch inline, rather than attempt to download it). CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does. - Key: CASSANDRA-8991 URL: https://issues.apache.org/jira/browse/CASSANDRA-8991 Project: Cassandra Issue Type: Bug Components: Core Reporter: Ulises Cervino Beresi Assignee: Ulises Cervino Beresi Priority: Minor Fix For: 2.0.14 Attachments: CASSANDRA-2.0.13-8991.patch CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8991) CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does.
[ https://issues.apache.org/jira/browse/CASSANDRA-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sam Tunnicliffe updated CASSANDRA-8991: --- Assignee: Ulises Cervino Beresi CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does. - Key: CASSANDRA-8991 URL: https://issues.apache.org/jira/browse/CASSANDRA-8991 Project: Cassandra Issue Type: Bug Components: Core Reporter: Ulises Cervino Beresi Assignee: Ulises Cervino Beresi Priority: Minor Fix For: 2.0.14 Attachments: CASSANDRA-2.0.13-8991.patch CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8620) Bootstrap session hanging indefinitely
[ https://issues.apache.org/jira/browse/CASSANDRA-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368779#comment-14368779 ] Adam Horwich commented on CASSANDRA-8620: - Hi, We were suspicious of LCS edge-case bugs we'd seen reported elsewhere so we created new tables with Size Tiered Compaction Strategy and we have not seen the problem since. We have since upgraded to 2.1.3 so we may re-evaluate LCS in the future. Bootstrap session hanging indefinitely -- Key: CASSANDRA-8620 URL: https://issues.apache.org/jira/browse/CASSANDRA-8620 Project: Cassandra Issue Type: Bug Environment: Debian 7, Oracle JDK 1.7.0_51, AWS + GCE Reporter: Adam Horwich Hi! We have been running a relatively small 2.1.2 cluster over 2 DCs for a few months with ~100GB load per node and a RF=3 and over the last few weeks have been trying to scale up capacity. We've been recently seeing scenarios in which the Bootstrap or Unbootstrap streaming process hangs indefinitely for one or more sessions on the receiver without stacktrace or exception. This does not happen every time, and we do not get into this state with the same sender every time. 
When the receiver is in a hung state, the following can be found in the thread dump: The Stream-IN thread for one or more sessions is blocked in the following state: Thread 24942: (state = BLOCKED) - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may be imprecise) - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, line=186 (Compiled frame) - java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() @bci=42, line=2043 (Compiled frame) - java.util.concurrent.ArrayBlockingQueue.take() @bci=20, line=374 (Compiled frame) - org.apache.cassandra.streaming.compress.CompressedInputStream.read() @bci=31, line=89 (Compiled frame) - java.io.DataInputStream.readUnsignedShort() @bci=4, line=337 (Compiled frame) - org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort() @bci=4, line=140 (Compiled frame) - org.apache.cassandra.utils.ByteBufferUtil.readShortLength(java.io.DataInput) @bci=1, line=317 (Compiled frame) - org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(java.io.DataInput) @bci=2, line=327 (Compiled frame) - org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(java.io.DataInput) @bci=5, line=397 (Compiled frame) - org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(java.io.DataInput) @bci=2, line=381 (Compiled frame) - org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(java.io.DataInput, org.apache.cassandra.db.ColumnSerializer$Flag, int, org.apache.cassandra.io.sstable.Descriptor$Version) @bci=10, line=75 (Compiled frame) - org.apache.cassandra.db.AbstractCell$1.computeNext() @bci=25, line=52 (Compiled frame) - org.apache.cassandra.db.AbstractCell$1.computeNext() @bci=1, line=46 (Compiled frame) - com.google.common.collect.AbstractIterator.tryToComputeNext() @bci=9, line=143 (Compiled frame) - com.google.common.collect.AbstractIterator.hasNext() @bci=61, line=138 (Compiled frame) - 
org.apache.cassandra.io.sstable.SSTableWriter.appendFromStream(org.apache.cassandra.db.DecoratedKey, org.apache.cassandra.config.CFMetaData, java.io.DataInput, org.apache.cassandra.io.sstable.Descriptor$Version) @bci=320, line=283 (Compiled frame) - org.apache.cassandra.streaming.StreamReader.writeRow(org.apache.cassandra.io.sstable.SSTableWriter, java.io.DataInput, org.apache.cassandra.db.ColumnFamilyStore) @bci=26, line=157 (Compiled frame) - org.apache.cassandra.streaming.compress.CompressedStreamReader.read(java.nio.channels.ReadableByteChannel) @bci=258, line=89 (Compiled frame) - org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(java.nio.channels.ReadableByteChannel, int, org.apache.cassandra.streaming.StreamSession) @bci=69, line=48 (Interpreted frame) - org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(java.nio.channels.ReadableByteChannel, int, org.apache.cassandra.streaming.StreamSession) @bci=4, line=38 (Interpreted frame) - org.apache.cassandra.streaming.messages.StreamMessage.deserialize(java.nio.channels.ReadableByteChannel, int, org.apache.cassandra.streaming.StreamSession) @bci=37, line=55 (Interpreted frame) - org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run() @bci=24, line=245 (Interpreted frame) - java.lang.Thread.run() @bci=11, line=744 (Interpreted frame) Debug logging shows that the receiver is still reading the file it is receiving from the sender and has not yet sent an ACK. The receiver is waiting for more data to
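The parked frame at the top of the trace — CompressedInputStream.read() blocked inside ArrayBlockingQueue.take() — is what produces the indefinite hang: take() waits forever if the peer stops producing chunks. A minimal illustration of that behavior (not Cassandra's actual code; the queue and timeout here are hypothetical stand-ins):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.TimeUnit;

public class StreamHangDemo {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the decompressed-chunk queue inside CompressedInputStream.
        ArrayBlockingQueue<byte[]> chunks = new ArrayBlockingQueue<>(4);

        // take() would park this thread indefinitely once the producer stops
        // enqueueing -- the BLOCKED state seen in the thread dump. A timed
        // poll() surfaces the stall instead of hanging:
        byte[] chunk = chunks.poll(100, TimeUnit.MILLISECONDS);
        System.out.println(chunk == null ? "timed out waiting for data" : "got chunk");
    }
}
```

On an empty queue this prints "timed out waiting for data"; swapping poll() for take() reproduces the hung state.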
[jira] [Commented] (CASSANDRA-8553) Add a key-value payload for third party usage
[ https://issues.apache.org/jira/browse/CASSANDRA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368778#comment-14368778 ] Sylvain Lebresne commented on CASSANDRA-8553: - On v2: * For the method variants that don't take a {{customPayload}}, I was only thinking of {{QueryProcessor}}, and I don't think adding them to the {{QueryHandler}} makes much sense (it just makes it more annoying to implement that interface). * In {{Message.java}}, we should throw a {{ProtocolException}} rather than a {{RuntimeException}} in the decoding case (could be cleaner in the encoding case too but it matters less). * The goal of using reflection in MessagePayloadTest was to avoid making {{ClientState.cqlQueryHandler}} non-final, but the patch still does it. You can remove the final modifier through reflection just for the test, see [here|http://stackoverflow.com/questions/3301635/change-private-static-final-field-using-java-reflection]. It would also be cleaner imo to change/reset the handler in a {{@BeforeTest}}/{{@AfterTest}} (or {{@BeforeClass}}/{{@AfterClass}}), including moving the currently static parts (setting the field accessible) there and turning the field not accessible again in the {{@AfterTest}}, to make it clear the test leaves the global state of the JVM as it found it, which is good hygiene. * I believe you'll need to rebase since you've committed the change to {{CQLTester}} separately. Add a key-value payload for third party usage - Key: CASSANDRA-8553 URL: https://issues.apache.org/jira/browse/CASSANDRA-8553 Project: Cassandra Issue Type: Sub-task Reporter: Sergio Bossa Assignee: Robert Stupp Labels: client-impacting, protocolv4 Fix For: 3.0 Attachments: 8553-v2.txt, 8553.txt A useful improvement would be to include a generic key-value payload, so that developers implementing a custom {{QueryHandler}} could leverage that to move custom data back and forth.
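The reflection trick linked above can be sketched as follows. All names here are illustrative (the Holder class stands in for ClientState), and the Field.modifiers hack works on the Java 7/8 runtimes current at the time but is blocked on recent JDKs:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class FinalFieldSwap {
    // Hypothetical stand-in for ClientState with its final handler field.
    static class Holder {
        private static final Object HANDLER = new Object();
    }

    // Strip the final modifier via reflection, then set the new value.
    // Works on Java 7/8; newer JDKs no longer expose Field.modifiers.
    static void setStaticFinal(Field field, Object value) throws Exception {
        field.setAccessible(true);
        Field modifiers = Field.class.getDeclaredField("modifiers");
        modifiers.setAccessible(true);
        modifiers.setInt(field, field.getModifiers() & ~Modifier.FINAL);
        field.set(null, value);
    }

    public static void main(String[] args) throws Exception {
        Field f = Holder.class.getDeclaredField("HANDLER");
        Object replacement = new Object();
        setStaticFinal(f, replacement);
        System.out.println(f.get(null) == replacement); // true on Java 8
    }
}
```

In the pattern suggested above, setStaticFinal would be invoked from @BeforeClass to install the test handler and again from @AfterClass to restore the original, so the JVM-global state survives the test unchanged.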
[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree
[ https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368867#comment-14368867 ] Benedict commented on CASSANDRA-8988: - I think it will be worth rolling out the binary search to other places as well, eventually: IndexSummary at least wants floor semantics, which would clean up some mess inside SSTableReader. I've provided a very simple formal proof of correctness, and have based it on work I've done previously that has been used in production systems, but we should probably still delay this change until 3.0. Optimise IntervalTree - Key: CASSANDRA-8988 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Benedict Priority: Trivial Fix For: 2.1.4 Attachments: 8988.txt We perform a lot of unnecessary comparisons in IntervalTree.IntervalNode.searchInternal.
[jira] [Commented] (CASSANDRA-8851) Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after upgrade to 2.1.3
[ https://issues.apache.org/jira/browse/CASSANDRA-8851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368894#comment-14368894 ] Benedict commented on CASSANDRA-8851: - [~Hachmann] I've filed CASSANDRA-8993 with what I think is the fix for your problem. I will close this ticket now, since it's a separate issue. Thanks very much for providing all your info quickly and helping us track down the bug! Uncaught exception on thread Thread[SharedPool-Worker-16,5,main] after upgrade to 2.1.3 --- Key: CASSANDRA-8851 URL: https://issues.apache.org/jira/browse/CASSANDRA-8851 Project: Cassandra Issue Type: Bug Environment: ubuntu Reporter: Tobias Schlottke Assignee: Benedict Priority: Critical Fix For: 2.1.4 Attachments: cassandra.yaml, schema.txt, system.log.gz Hi there, after upgrading to 2.1.3 we've got the following error every few seconds: {code} WARN [SharedPool-Worker-16] 2015-02-23 10:20:36,392 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-16,5,main]: {} java.lang.AssertionError: null at org.apache.cassandra.io.util.Memory.size(Memory.java:307) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.utils.obs.OffHeapBitSet.capacity(OffHeapBitSet.java:61) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.utils.BloomFilter.indexes(BloomFilter.java:74) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.utils.BloomFilter.isPresent(BloomFilter.java:98) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1366) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:1350) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.columniterator.SSTableSliceIterator.init(SSTableSliceIterator.java:41) ~[apache-cassandra-2.1.3.jar:2.1.3] at 
org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:185) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:273) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1915) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1748) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:342) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:57) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:47) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) ~[apache-cassandra-2.1.3.jar:2.1.3] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_45] at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-2.1.3.jar:2.1.3] at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45] {code} This seems to crash the compactions and pushes up server load and piles up compactions. Any idea / possible workaround? Best, Tobias -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-8985) java.lang.AssertionError: Added column does not sort as the last column
[ https://issues.apache.org/jira/browse/CASSANDRA-8985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368948#comment-14368948 ] Maxim edited comment on CASSANDRA-8985 at 3/19/15 11:45 AM: Here is my describe: CREATE KEYSPACE OracleCache WITH replication = { 'class': 'NetworkTopologyStrategy', 'DC2': '1', 'DC3': '2', 'DC1': '1' }; USE OracleCache; CREATE TABLE Certificate_Bruteforce_Protect ( key blob, column1 text, value blob, PRIMARY KEY ((key), column1) ) WITH COMPACT STORAGE AND bloom_filter_fp_chance=0.01 AND caching='KEYS_ONLY' AND comment='' AND dclocal_read_repair_chance=0.00 AND gc_grace_seconds=3600 AND index_interval=128 AND read_repair_chance=1.00 AND replicate_on_write='true' AND populate_io_cache_on_flush='false' AND default_time_to_live=0 AND speculative_retry='99.0PERCENTILE' AND memtable_flush_period_in_ms=0 AND compaction={'class': 'SizeTieredCompactionStrategy'} AND compression={'sstable_compression': 'SnappyCompressor'}; CREATE TABLE ClientAccount ( key blob, column1 text, value blob, PRIMARY KEY ((key), column1) ) WITH COMPACT STORAGE AND bloom_filter_fp_chance=0.01 AND caching='KEYS_ONLY' AND comment='' AND dclocal_read_repair_chance=0.00 AND gc_grace_seconds=3600 AND index_interval=128 AND read_repair_chance=1.00 AND replicate_on_write='true' AND populate_io_cache_on_flush='false' AND default_time_to_live=0 AND speculative_retry='99.0PERCENTILE' AND memtable_flush_period_in_ms=0 AND compaction={'class': 'SizeTieredCompactionStrategy'} AND compression={'sstable_compression': 'SnappyCompressor'}; CREATE TABLE ClientAccountInfo ( key blob, column1 text, value blob, PRIMARY KEY ((key), column1) ) WITH COMPACT STORAGE AND bloom_filter_fp_chance=0.01 AND caching='KEYS_ONLY' AND comment='' AND dclocal_read_repair_chance=0.00 AND gc_grace_seconds=3600 AND index_interval=128 AND read_repair_chance=1.00 AND replicate_on_write='true' AND populate_io_cache_on_flush='false' AND default_time_to_live=0 AND 
speculative_retry='99.0PERCENTILE' AND memtable_flush_period_in_ms=0 AND compaction={'class': 'SizeTieredCompactionStrategy'} AND compression={'sstable_compression': 'SnappyCompressor'}; CREATE TABLE ClientInvites ( key blob, column1 text, value blob, PRIMARY KEY ((key), column1) ) WITH COMPACT STORAGE AND bloom_filter_fp_chance=0.01 AND caching='KEYS_ONLY' AND comment='' AND dclocal_read_repair_chance=0.00 AND gc_grace_seconds=3600 AND index_interval=128 AND read_repair_chance=1.00 AND replicate_on_write='true' AND populate_io_cache_on_flush='false' AND default_time_to_live=0 AND speculative_retry='99.0PERCENTILE' AND memtable_flush_period_in_ms=0 AND compaction={'class': 'SizeTieredCompactionStrategy'} AND compression={'sstable_compression': 'SnappyCompressor'}; CREATE TABLE ClientProfile ( key blob, column1 text, value blob, PRIMARY KEY ((key), column1) ) WITH COMPACT STORAGE AND bloom_filter_fp_chance=0.01 AND caching='KEYS_ONLY' AND comment='' AND dclocal_read_repair_chance=0.00 AND gc_grace_seconds=3600 AND index_interval=128 AND read_repair_chance=1.00 AND replicate_on_write='true' AND populate_io_cache_on_flush='false' AND default_time_to_live=0 AND speculative_retry='99.0PERCENTILE' AND memtable_flush_period_in_ms=0 AND compaction={'class': 'SizeTieredCompactionStrategy'} AND compression={'sstable_compression': 'SnappyCompressor'}; CREATE TABLE DriverAccount ( key blob, column1 text, value blob, PRIMARY KEY ((key), column1) ) WITH COMPACT STORAGE AND bloom_filter_fp_chance=0.01 AND caching='KEYS_ONLY' AND comment='' AND dclocal_read_repair_chance=0.00 AND gc_grace_seconds=3600 AND index_interval=128 AND read_repair_chance=1.00 AND replicate_on_write='true' AND populate_io_cache_on_flush='false' AND default_time_to_live=0 AND speculative_retry='99.0PERCENTILE' AND memtable_flush_period_in_ms=0 AND compaction={'class': 'SizeTieredCompactionStrategy'} AND compression={'sstable_compression': 'SnappyCompressor'}; CREATE TABLE DriverProfiles ( key blob, column1 
text, value blob, PRIMARY KEY ((key), column1) ) WITH COMPACT STORAGE AND bloom_filter_fp_chance=0.01 AND caching='ALL' AND comment='' AND dclocal_read_repair_chance=0.00 AND gc_grace_seconds=3600 AND index_interval=128 AND read_repair_chance=1.00 AND replicate_on_write='true' AND populate_io_cache_on_flush='false' AND default_time_to_live=0 AND speculative_retry='99.0PERCENTILE' AND memtable_flush_period_in_ms=0 AND compaction={'class': 'SizeTieredCompactionStrategy'} AND compression={'sstable_compression': 'SnappyCompressor'}; CREATE TABLE ESTaxi_IpRestriction ( key blob,
[jira] [Updated] (CASSANDRA-8553) Add a key-value payload for third party usage
[ https://issues.apache.org/jira/browse/CASSANDRA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-8553: Attachment: (was: 8553-v3.txt) Add a key-value payload for third party usage - Key: CASSANDRA-8553 URL: https://issues.apache.org/jira/browse/CASSANDRA-8553 Project: Cassandra Issue Type: Sub-task Reporter: Sergio Bossa Assignee: Robert Stupp Labels: client-impacting, protocolv4 Fix For: 3.0 Attachments: 8553-v2.txt, 8553.txt A useful improvement would be to include a generic key-value payload, so that developers implementing a custom {{QueryHandler}} could leverage that to move custom data back and forth.
[jira] [Updated] (CASSANDRA-8991) CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does.
[ https://issues.apache.org/jira/browse/CASSANDRA-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ulises Cervino Beresi updated CASSANDRA-8991: - Attachment: (was: CASSANDRA-2.0.13-8991.patch) CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does. - Key: CASSANDRA-8991 URL: https://issues.apache.org/jira/browse/CASSANDRA-8991 Project: Cassandra Issue Type: Bug Components: Core Reporter: Ulises Cervino Beresi Assignee: Ulises Cervino Beresi Priority: Minor Fix For: 2.0.14 CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8928) Add downgradesstables
[ https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368951#comment-14368951 ] Benedict commented on CASSANDRA-8928: - Data translations are inherently dangerous acts, and very hard to vet (see CASSANDRA-8993 which seems to be a result of upgrade corrupting index state so that records are silently not returned). Having a bidirectional translation seems particularly problematic. If we are to go down this route, we need to ensure HUGE effort is put in to exhaustively testing the resulting outputs, after a double cycle of upgrade/downgrade. This effort wouldn't be wasted though, as it would also help vet upgrades, which are essential acts. Add downgradesstables - Key: CASSANDRA-8928 URL: https://issues.apache.org/jira/browse/CASSANDRA-8928 Project: Cassandra Issue Type: New Feature Components: Tools Reporter: Jeremy Hanna Priority: Minor As mentioned in other places such as CASSANDRA-8047 and in the wild, sometimes you need to go back. A downgrade sstables utility would be nice for a lot of reasons and I don't know that supporting going back to the previous major version format would be too much code since we already support reading the previous version. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
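The "double cycle" vetting described above amounts to a round-trip property: upgrading, downgrading, and upgrading again must reproduce identical output. A runnable sketch, with hypothetical identity translators standing in for the real sstable version converters:

```java
import java.util.Arrays;

public class RoundTripCheck {
    // Hypothetical stand-ins for the upgrade/downgrade sstable converters
    // under discussion; identity copies here just to make the harness run.
    static byte[] upgrade(byte[] data)   { return data.clone(); }
    static byte[] downgrade(byte[] data) { return data.clone(); }

    // The double-cycle property: upgrade -> downgrade -> upgrade must yield
    // bytes identical to a single upgrade of the original input.
    static boolean doubleCycleStable(byte[] original) {
        byte[] cycled = upgrade(downgrade(upgrade(original)));
        return Arrays.equals(upgrade(original), cycled);
    }

    public static void main(String[] args) {
        System.out.println(doubleCycleStable(new byte[]{1, 2, 3})); // true
    }
}
```

Running this exhaustively over real sstable corpora, rather than hand-picked fixtures, is the kind of testing effort the comment calls for.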
[jira] [Updated] (CASSANDRA-8991) CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does.
[ https://issues.apache.org/jira/browse/CASSANDRA-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ulises Cervino Beresi updated CASSANDRA-8991: - Attachment: CASSANDRA-2.0.13-8991.txt CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does. - Key: CASSANDRA-8991 URL: https://issues.apache.org/jira/browse/CASSANDRA-8991 Project: Cassandra Issue Type: Bug Components: Core Reporter: Ulises Cervino Beresi Assignee: Ulises Cervino Beresi Priority: Minor Fix For: 2.0.14 Attachments: CASSANDRA-2.0.13-8991.txt CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8991) CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does.
[ https://issues.apache.org/jira/browse/CASSANDRA-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ulises Cervino Beresi updated CASSANDRA-8991: - Attachment: (was: CASSANDRA-2.0.13-8991.txt) CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does. - Key: CASSANDRA-8991 URL: https://issues.apache.org/jira/browse/CASSANDRA-8991 Project: Cassandra Issue Type: Bug Components: Core Reporter: Ulises Cervino Beresi Assignee: Ulises Cervino Beresi Priority: Minor Fix For: 2.0.14 Attachments: CASSANDRA-2.0.13-8991.txt CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7814) enable describe on indices
[ https://issues.apache.org/jira/browse/CASSANDRA-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368886#comment-14368886 ] Aleksey Yeschenko commented on CASSANDRA-7814: -- For consistency with {{DROP INDEX}}, {{DESCRIBE INDEX}} should also only take the index name, even though it's just a cqlsh command. This applies to both the 2.1 and 3.0 C* versions. enable describe on indices -- Key: CASSANDRA-7814 URL: https://issues.apache.org/jira/browse/CASSANDRA-7814 Project: Cassandra Issue Type: Improvement Components: Core Reporter: radha Assignee: Stefania Priority: Minor Fix For: 3.0 Describe index should be supported; right now, the only way is to export the schema and find what it really is before updating/dropping the index. Verified in [cqlsh 3.1.8 | Cassandra 1.2.18.1 | CQL spec 3.0.0 | Thrift protocol 19.36.2]
[jira] [Commented] (CASSANDRA-8928) Add downgradesstables
[ https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368947#comment-14368947 ] Jeremy Hanna commented on CASSANDRA-8928: - [~benedict] would you like to bring up the points you made about viability of this feature? Add downgradesstables - Key: CASSANDRA-8928 URL: https://issues.apache.org/jira/browse/CASSANDRA-8928 Project: Cassandra Issue Type: New Feature Components: Tools Reporter: Jeremy Hanna Priority: Minor As mentioned in other places such as CASSANDRA-8047 and in the wild, sometimes you need to go back. A downgrade sstables utility would be nice for a lot of reasons and I don't know that supporting going back to the previous major version format would be too much code since we already support reading the previous version. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8357) ArrayOutOfBounds in cassandra-stress with inverted exponential distribution
[ https://issues.apache.org/jira/browse/CASSANDRA-8357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368946#comment-14368946 ] Benedict commented on CASSANDRA-8357: - It's likely that whatever this bug is, it's still present, but I appreciate you trying the latest. When you do, and in general whenever you encounter any problem with stress, it would be great if you could run with -log level=verbose so that we can get the full stack trace. ArrayOutOfBounds in cassandra-stress with inverted exponential distribution --- Key: CASSANDRA-8357 URL: https://issues.apache.org/jira/browse/CASSANDRA-8357 Project: Cassandra Issue Type: Bug Components: Tools Environment: 6-node cassandra cluster (2.1.1) on debian. Reporter: Jens Preußner Fix For: 2.1.4 When using the CQLstress example from GitHub (https://github.com/apache/cassandra/blob/trunk/tools/cqlstress-example.yaml) with an inverted exponential distribution in the insert-partitions field, generated threads fail with Exception in thread Thread-20 java.lang.ArrayIndexOutOfBoundsException: 20 at org.apache.cassandra.stress.StressAction$Consumer.run(StressAction.java:307) See the gist https://gist.github.com/jenzopr/9edde53122554729c852 for the typetest.yaml I used. The call was: cassandra-stress user profile=typetest.yaml ops\(insert=1\) -node $NODES
[jira] [Resolved] (CASSANDRA-8658) cassandra-stress should support distinct readers/writers for a mixed workload
[ https://issues.apache.org/jira/browse/CASSANDRA-8658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict resolved CASSANDRA-8658. - Resolution: Duplicate cassandra-stress should support distinct readers/writers for a mixed workload - Key: CASSANDRA-8658 URL: https://issues.apache.org/jira/browse/CASSANDRA-8658 Project: Cassandra Issue Type: Improvement Reporter: Benedict Assignee: Benedict Priority: Minor By default all threads assume they interleave reads with writes, whereas many workloads may be characterised by independent readers and writers. The difference is that by interleaving, if we have hit e.g. disk constraints for flush, write latencies may spike while read latencies are unaffected. By interleaving operations on each actor, all actors will rapidly become writers during such an event.
[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree
[ https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368861#comment-14368861 ] Benedict commented on CASSANDRA-8988: - OK, so instead of just getting the hacky quick wins, I decided to do it properly. I've introduced an AsymmetricOrdering class (I'm open to better names) that extends Ordering and accepts different types on each side for a second compare method. I've also taken this opportunity to introduce something I've been missing for a while: a better binary search, with NavigableMap semantics (floor, ceil, lower, higher). I then use this to simplify my implementation of this and 8920, which I have rebased onto this. Optimise IntervalTree - Key: CASSANDRA-8988 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Benedict Priority: Trivial Fix For: 2.1.4 Attachments: 8988.txt We perform a lot of unnecessary comparisons in IntervalTree.IntervalNode.searchInternal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
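The NavigableMap-style search semantics mentioned above can be sketched for the floor case. This is an illustration of the semantics only, not the AsymmetricOrdering API from the patch, and the names are invented:

```java
import java.util.Arrays;
import java.util.List;

public class FloorSearch {
    // Binary search with "floor" semantics, analogous to NavigableMap.floorKey():
    // returns the index of the greatest element <= key, or -1 if every
    // element is greater than key.
    static int floorIndex(List<Integer> sorted, int key) {
        int lo = 0, hi = sorted.size() - 1, result = -1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;   // overflow-safe midpoint
            if (sorted.get(mid) <= key) {
                result = mid;            // candidate floor; keep looking right
                lo = mid + 1;
            } else {
                hi = mid - 1;            // too large; look left
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(10, 20, 30);
        System.out.println(floorIndex(xs, 25)); // 1
        System.out.println(floorIndex(xs, 5));  // -1
    }
}
```

Ceil, lower, and higher follow the same shape with the comparison and direction flipped, which is why centralizing them in one utility cleans up call sites like IndexSummary.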
[jira] [Updated] (CASSANDRA-8993) EffectiveIndexInterval calculation is incorrect
[ https://issues.apache.org/jira/browse/CASSANDRA-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-8993: Attachment: 8993.txt Mitigation attached EffectiveIndexInterval calculation is incorrect --- Key: CASSANDRA-8993 URL: https://issues.apache.org/jira/browse/CASSANDRA-8993 Project: Cassandra Issue Type: Bug Components: Core Reporter: Benedict Assignee: Benedict Priority: Blocker Fix For: 2.1.4 Attachments: 8993.txt I'm not familiar enough with the calculation itself to understand why this is happening, but see discussion on CASSANDRA-8851 for the background. I've introduced a test case to look for this during downsampling, but it seems to pass just fine, so it may be an artefact of upgrading. The problem was, unfortunately, not manifesting directly because it would simply result in a failed lookup. This was only exposed when early opening used firstKeyBeyond, which does not use the effective interval, and provided the result to getPosition(). I propose a simple fix that ensures a bug here cannot break correctness. Perhaps [~thobbs] can follow up with an investigation as to how it actually went wrong? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-8993) EffectiveIndexInterval calculation is incorrect
Benedict created CASSANDRA-8993: --- Summary: EffectiveIndexInterval calculation is incorrect Key: CASSANDRA-8993 URL: https://issues.apache.org/jira/browse/CASSANDRA-8993 Project: Cassandra Issue Type: Bug Components: Core Reporter: Benedict Assignee: Benedict Priority: Blocker Fix For: 2.1.4 I'm not familiar enough with the calculation itself to understand why this is happening, but see discussion on CASSANDRA-8851 for the background. I've introduced a test case to look for this during downsampling, but it seems to pass just fine, so it may be an artefact of upgrading. The problem was, unfortunately, not manifesting directly because it would simply result in a failed lookup. This was only exposed when early opening used firstKeyBeyond, which does not use the effective interval, and provided the result to getPosition(). I propose a simple fix that ensures a bug here cannot break correctness. Perhaps [~thobbs] can follow up with an investigation as to how it actually went wrong? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-7814) enable describe on indices
[ https://issues.apache.org/jira/browse/CASSANDRA-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368886#comment-14368886 ] Aleksey Yeschenko edited comment on CASSANDRA-7814 at 3/19/15 11:31 AM: For consistency with {{DROP INDEX}}, {{DESCRIBE INDEX}} should also only take the index name, even though it's just a cqlsh command. 2.1 and 3.0 C* versions. was (Author: iamaleksey): In consistency with {DROP INDEX}, {DESCRIBE INDEX} should also only take the index name, even though it's just a cqslh command. 2.1 and 3.0 C* versions. enable describe on indices -- Key: CASSANDRA-7814 URL: https://issues.apache.org/jira/browse/CASSANDRA-7814 Project: Cassandra Issue Type: Improvement Components: Core Reporter: radha Assignee: Stefania Priority: Minor Fix For: 3.0 Describe index should be supported; right now, the only way is to export the schema and find what it really is before updating/dropping the index. verified in [cqlsh 3.1.8 | Cassandra 1.2.18.1 | CQL spec 3.0.0 | Thrift protocol 19.36.2] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8920) Optimise sequential overlap visitation
[ https://issues.apache.org/jira/browse/CASSANDRA-8920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368868#comment-14368868 ] Benedict commented on CASSANDRA-8920: - I've rebased this on top of CASSANDRA-8988, which reduces the boilerplate. Optimise sequential overlap visitation -- Key: CASSANDRA-8920 URL: https://issues.apache.org/jira/browse/CASSANDRA-8920 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Benedict Priority: Minor Fix For: 2.1.4 Attachments: 8920.txt The IntervalTree only maps partition keys. Since a majority of users deploy a hashed partitioner the work is mostly wasted, since they will be evenly distributed across the full token range owned by the node - and in some cases it is a significant amount of work. We can perform a corroboration against the file bounds if we get a BF match as a sanity check if we like, but performing an IntervalTree search is significantly more expensive (esp. once murmur hash calculation memoization goes mainstream). In LCS, the keys are bounded, so it might appear that it would help, but in this scenario we only compact against like bounds, so again it is not helpful. With a ByteOrderedPartitioner it could potentially be of use, but this is sufficiently rare to not optimise for IMO. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
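The cheaper corroboration described above (checking the sstable's own key bounds after a Bloom filter hit, rather than consulting the IntervalTree) amounts to two token comparisons. A hedged sketch; the class and field names here are assumptions, not Cassandra's actual SSTableReader fields:

```java
// Illustrative only: after a Bloom filter hit, corroborate against the
// sstable's own [firstToken, lastToken] bounds instead of an IntervalTree
// search. Tokens are modelled as plain longs for the sketch.
final class SSTableBounds
{
    final long firstToken, lastToken; // min/max partition tokens in the sstable (assumed fields)

    SSTableBounds(long first, long last)
    {
        this.firstToken = first;
        this.lastToken = last;
    }

    // Cheap O(1) sanity check: the sstable cannot contain the key if the
    // query token falls outside the file's token bounds.
    boolean mayContain(long queryToken)
    {
        return firstToken <= queryToken && queryToken <= lastToken;
    }
}
```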
[jira] [Updated] (CASSANDRA-8553) Add a key-value payload for third party usage
[ https://issues.apache.org/jira/browse/CASSANDRA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-8553: Attachment: 8553-v3.txt Add a key-value payload for third party usage - Key: CASSANDRA-8553 URL: https://issues.apache.org/jira/browse/CASSANDRA-8553 Project: Cassandra Issue Type: Sub-task Reporter: Sergio Bossa Assignee: Robert Stupp Labels: client-impacting, protocolv4 Fix For: 3.0 Attachments: 8553-v2.txt, 8553-v3.txt, 8553.txt A useful improvement would be to include a generic key-value payload, so that developers implementing a custom {{QueryHandler}} could leverage that to move custom data back and forth. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree
[ https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14368959#comment-14368959 ] Branimir Lambov commented on CASSANDRA-8988: Nicely done. [AsymmetricOrdering 76|https://github.com/apache/cassandra/compare/trunk...belliottsmith:8988#diff-e8d8865e0c7d14541ae677a1c98bfca4R76]: I think this will be clearer if it follows the structure of the one above: {{if (c 0) i = m; else j = m;}} to make it obvious that the only difference between the two is the strictness of the comparison. [IntervalTree 235|https://github.com/apache/cassandra/compare/trunk...belliottsmith:8988#diff-e675fa5966322284415eff48ec0b36ffR235]: You should use CEIL (which is the same as LOWER + 1) here. It may be worth documenting in the Op declaration that CEIL == LOWER + 1, HIGHER == FLOOR + 1 (this is true because j is always equal to i+1 when exiting the loop in find2). [IntervalTree 236|https://github.com/apache/cassandra/compare/trunk...belliottsmith:8988#diff-e675fa5966322284415eff48ec0b36ffR236], [248|https://github.com/apache/cassandra/compare/trunk...belliottsmith:8988#diff-e675fa5966322284415eff48ec0b36ffR248]: I'm worried that this may be increasing the complexity over the original code for bigger trees as it does one more level of intersectsX search. It's your call, but my preference is to do the outside span rejection before the search as otherwise there is a possibility of performance regression from this, however small. Optimise IntervalTree - Key: CASSANDRA-8988 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Benedict Priority: Trivial Fix For: 2.1.4 Attachments: 8988.txt We perform a lot of unnecessary comparisons in IntervalTree.IntervalNode.searchInternal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
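The "outside span rejection" suggested in the last review point can be sketched as follows (illustrative only; the field and method names are invented and the real IntervalTree differs): each node caches the span covered by its whole subtree, and a query entirely outside that span prunes the subtree before descending.

```java
import java.util.List;

// Illustrative sketch, not the patched IntervalTree. An augmented binary tree
// over intervals: each node caches [spanLow, spanHigh], the range covered by
// its entire subtree, so a non-overlapping query rejects the subtree outright.
final class IntervalNode
{
    final int low, high;          // interval stored at this node
    final int spanLow, spanHigh;  // min low / max high over the entire subtree
    final IntervalNode left, right;

    private IntervalNode(int low, int high, IntervalNode left, IntervalNode right)
    {
        this.low = low;
        this.high = high;
        this.left = left;
        this.right = right;
        this.spanLow = Math.min(low, Math.min(left == null ? low : left.spanLow,
                                              right == null ? low : right.spanLow));
        this.spanHigh = Math.max(high, Math.max(left == null ? high : left.spanHigh,
                                                right == null ? high : right.spanHigh));
    }

    // Build a balanced tree from intervals pre-sorted by their low endpoint.
    static IntervalNode build(int[][] sortedByLow, int from, int to)
    {
        if (from >= to)
            return null;
        int mid = (from + to) >>> 1;
        return new IntervalNode(sortedByLow[mid][0], sortedByLow[mid][1],
                                build(sortedByLow, from, mid),
                                build(sortedByLow, mid + 1, to));
    }

    static void search(IntervalNode n, int qlow, int qhigh, List<int[]> out)
    {
        // span rejection: nothing in this subtree can intersect the query
        if (n == null || qhigh < n.spanLow || qlow > n.spanHigh)
            return;
        search(n.left, qlow, qhigh, out);
        if (n.low <= qhigh && qlow <= n.high)
            out.add(new int[]{ n.low, n.high });
        search(n.right, qlow, qhigh, out);
    }
}
```

The rejection test costs two comparisons per visited node, and when it fires it skips the whole subtree, which is the behaviour the review asks to guarantee before any deeper intersectsX search.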
[jira] [Commented] (CASSANDRA-5791) A nodetool command to validate all sstables in a node
[ https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14370665#comment-14370665 ] Jeff Jirsa commented on CASSANDRA-5791: --- So the same bug corrected in CASSANDRA-8778 was re-introduced by CASSANDRA-8709, as it was developed in parallel and was likely merged/reviewed without the benefit of knowing about #8778. I've done the following: 1) Rebased to trunk as of 20150319 2) Removed o.a.c.io.DataIntegrityMetadata#append 3) Corrected o.a.c.io.DataIntegrityMetadata#appendDirect 4) Brought over [~benedict]'s PureJavaCRC32 fix from above (which was correct - 7bef6f93aea3a6897b53e909688f5948c018ccdf) Commit: https://github.com/jeffjirsa/cassandra/commit/79642ea4f56a33f249e807abdd562f89d20f6c36 Diff at https://github.com/apache/cassandra/compare/trunk...jeffjirsa:cassandra-5791.diff , I'll also attach here as cassandra-5791-20150319.diff Passing: {noformat} [junit] - --- [junit] Testsuite: org.apache.cassandra.db.VerifyTest [junit] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.803 sec [junit] {noformat} [~jbellis] and [~benedict] - if you want unit tests for DataIntegrityMetadata, Jira it, assign me, and I'll write them. I'd have done it tonight but I can't convince myself that they're not redundant with the (included) verifier unit tests which will test the checksums anyway. A nodetool command to validate all sstables in a node - Key: CASSANDRA-5791 URL: https://issues.apache.org/jira/browse/CASSANDRA-5791 Project: Cassandra Issue Type: New Feature Components: Core Reporter: sankalp kohli Assignee: Jeff Jirsa Priority: Minor Fix For: 3.0 Attachments: cassandra-5791-20150319.diff, cassandra-5791-patch-3.diff, cassandra-5791.patch-2 Currently there is no nodetool command to validate all sstables on disk. The only way to do this is to run a repair and see if it succeeds. But we cannot repair the system keyspace.
Also we can run upgrade sstables but that rewrites all the sstables. This command should check the hash of all sstables and return whether all data is readable or not. This should NOT care about consistency. The compressed sstables do not have a hash so not sure how it will work there. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
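As background on the appendDirect correction mentioned above, the sketch below is illustrative only (it is not the patched DataIntegrityMetadata code): checksumming NIO buffers has a well-known pitfall in that CRC32.update(ByteBuffer) consumes the buffer, advancing its position to its limit, so a writer that checksums a chunk and then writes the same buffer must checksum a duplicate.

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Illustrative sketch, not the patched DataIntegrityMetadata. Checksums a
// chunk of a buffer without disturbing the caller's position, which matters
// when the same buffer is subsequently written to disk.
final class ChunkChecksum
{
    // Checksums bytes in [position, limit) of the caller's buffer.
    static long checksum(ByteBuffer chunk)
    {
        CRC32 crc = new CRC32();
        // duplicate() shares the content but has an independent position, so
        // the caller's buffer is left exactly where it was.
        crc.update(chunk.duplicate());
        return crc.getValue();
    }

    static boolean verify(ByteBuffer chunk, long expected)
    {
        return checksum(chunk) == expected;
    }
}
```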
[jira] [Commented] (CASSANDRA-8928) Add downgradesstables
[ https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14370782#comment-14370782 ] Aleksey Yeschenko commented on CASSANDRA-8928: -- FWIW, {{SSTableDowngrader}} would be the most straightforward way to implement CASSANDRA-8110, so I wouldn't rush and close this just because it's 'hard'. And unless I'm misunderstanding the compatibility policy that would naturally follow from our latest release process announcements, CASSANDRA-8110 is something we'll need soon. Add downgradesstables - Key: CASSANDRA-8928 URL: https://issues.apache.org/jira/browse/CASSANDRA-8928 Project: Cassandra Issue Type: New Feature Components: Tools Reporter: Jeremy Hanna Priority: Minor As mentioned in other places such as CASSANDRA-8047 and in the wild, sometimes you need to go back. A downgrade sstables utility would be nice for a lot of reasons and I don't know that supporting going back to the previous major version format would be too much code since we already support reading the previous version. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[1/2] cassandra git commit: fix typo
Repository: cassandra Updated Branches: refs/heads/trunk ce3053a23 -> 850b5d0d2 fix typo Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86b04ad0 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86b04ad0 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86b04ad0 Branch: refs/heads/trunk Commit: 86b04ad0d4682bc49bd294d4d598d131f4da1158 Parents: e8276aa Author: Dave Brosius dbros...@mebigfatguy.com Authored: Thu Mar 19 21:12:25 2015 -0400 Committer: Dave Brosius dbros...@mebigfatguy.com Committed: Thu Mar 19 21:12:25 2015 -0400 -- src/java/org/apache/cassandra/dht/AbstractBounds.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/86b04ad0/src/java/org/apache/cassandra/dht/AbstractBounds.java -- diff --git a/src/java/org/apache/cassandra/dht/AbstractBounds.java b/src/java/org/apache/cassandra/dht/AbstractBounds.java index 6d2ee43..704d8c2 100644 --- a/src/java/org/apache/cassandra/dht/AbstractBounds.java +++ b/src/java/org/apache/cassandra/dht/AbstractBounds.java @@ -261,7 +261,7 @@ public abstract class AbstractBounds<T extends RingPosition<T>> implements Seria public static <T extends RingPosition<T>> Boundary<T> maxLeft(Boundary<T> left1, Boundary<T> left2) { -int c = left1.boundary.compareTo(left1.boundary); +int c = left1.boundary.compareTo(left2.boundary); if (c != 0) return c > 0 ? left1 : left2; // return the exclusive version, if either
[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/850b5d0d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/850b5d0d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/850b5d0d Branch: refs/heads/trunk Commit: 850b5d0d21014df8d672bd420d7a26dd07e8f828 Parents: ce3053a 86b04ad Author: Dave Brosius dbros...@mebigfatguy.com Authored: Thu Mar 19 21:13:02 2015 -0400 Committer: Dave Brosius dbros...@mebigfatguy.com Committed: Thu Mar 19 21:13:02 2015 -0400 -- src/java/org/apache/cassandra/dht/AbstractBounds.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/850b5d0d/src/java/org/apache/cassandra/dht/AbstractBounds.java --
cassandra git commit: fix typo
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 e8276aa53 -> 86b04ad0d fix typo Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/86b04ad0 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/86b04ad0 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/86b04ad0 Branch: refs/heads/cassandra-2.1 Commit: 86b04ad0d4682bc49bd294d4d598d131f4da1158 Parents: e8276aa Author: Dave Brosius dbros...@mebigfatguy.com Authored: Thu Mar 19 21:12:25 2015 -0400 Committer: Dave Brosius dbros...@mebigfatguy.com Committed: Thu Mar 19 21:12:25 2015 -0400 -- src/java/org/apache/cassandra/dht/AbstractBounds.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/86b04ad0/src/java/org/apache/cassandra/dht/AbstractBounds.java -- diff --git a/src/java/org/apache/cassandra/dht/AbstractBounds.java b/src/java/org/apache/cassandra/dht/AbstractBounds.java index 6d2ee43..704d8c2 100644 --- a/src/java/org/apache/cassandra/dht/AbstractBounds.java +++ b/src/java/org/apache/cassandra/dht/AbstractBounds.java @@ -261,7 +261,7 @@ public abstract class AbstractBounds<T extends RingPosition<T>> implements Seria public static <T extends RingPosition<T>> Boundary<T> maxLeft(Boundary<T> left1, Boundary<T> left2) { -int c = left1.boundary.compareTo(left1.boundary); +int c = left1.boundary.compareTo(left2.boundary); if (c != 0) return c > 0 ? left1 : left2; // return the exclusive version, if either
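The one-character typo fixed here is a classic self-comparison bug: left1.boundary.compareTo(left1.boundary) is always 0, so the comparison never discriminated between the arguments. A reduced illustration (the real maxLeft's tie-breaking comment about the exclusive version is omitted from this sketch):

```java
// Reduced illustration of the bug class fixed in this commit: comparing a
// value to itself always yields 0, so the buggy version never chose by
// comparison and unconditionally fell through to the second argument.
final class SelfCompareBug
{
    static <T extends Comparable<T>> T maxBuggy(T a, T b)
    {
        int c = a.compareTo(a); // BUG: should be a.compareTo(b); c is always 0
        return c > 0 ? a : b;   // therefore this always returns b
    }

    static <T extends Comparable<T>> T maxFixed(T a, T b)
    {
        int c = a.compareTo(b);
        return c > 0 ? a : b;
    }
}
```

Bugs of this shape survive compilation and most tests, because the result is still correct whenever b happens to be the maximum (or the values are equal).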
[jira] [Updated] (CASSANDRA-7814) enable describe on indices
[ https://issues.apache.org/jira/browse/CASSANDRA-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefania updated CASSANDRA-7814: Fix Version/s: (was: 3.0) 2.1.4 enable describe on indices -- Key: CASSANDRA-7814 URL: https://issues.apache.org/jira/browse/CASSANDRA-7814 Project: Cassandra Issue Type: Improvement Components: Core Reporter: radha Assignee: Stefania Priority: Minor Fix For: 2.1.4 Describe index should be supported, right now, the only way is to export the schema and find what it really is before updating/dropping the index. verified in [cqlsh 3.1.8 | Cassandra 1.2.18.1 | CQL spec 3.0.0 | Thrift protocol 19.36.2] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-5791) A nodetool command to validate all sstables in a node
[ https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14370665#comment-14370665 ] Jeff Jirsa edited comment on CASSANDRA-5791 at 3/20/15 4:54 AM: So the same bug corrected in CASSANDRA-8778 was re-introduced by CASSANDRA-8709 . I've done the following: 1) Rebased to trunk as of 20150319 2) Removed o.a.c.io.DataIntegrityMetadata#append 3) Corrected o.a.c.io.DataIntegrityMetadata#appendDirect 4) Brought over [~benedict]'s PureJavaCRC32 's fix from above (which was correct - 7bef6f93aea3a6897b53e909688f5948c018ccdf) Commit: https://github.com/jeffjirsa/cassandra/commit/79642ea4f56a33f249e807abdd562f89d20f6c36 Diff at https://github.com/apache/cassandra/compare/trunk...jeffjirsa:cassandra-5791.diff , I'll also attach here as cassandra-5791-20150319.diff Passing: {noformat} [junit] - --- [junit] Testsuite: org.apache.cassandra.db.VerifyTest [junit] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.803 sec [junit] {noformat} [~jbellis] and [~benedict] - if you want unit tests for DataIntegrityMetadata, Jira it, assign me, and I'll write them. I'd have done it tonight but I can't convince myself that they're not redundant with the (included) verifier unit tests which will test the checksums anyway. was (Author: jjirsa): So the same bug corrected in CASSANDRA-8778 was re-introduced by CASSANDRA-8709 , as it was developed in parallel and was likely merged/reviewed without the benefit of knowing about #8778. 
I've done the following: 1) Rebased to trunk as of 20150319 2) Removed o.a.c.io.DataIntegrityMetadata#append 3) Corrected o.a.c.io.DataIntegrityMetadata#appendDirect 4) Brought over [~benedict]'s PureJavaCRC32 's fix from above (which was correct - 7bef6f93aea3a6897b53e909688f5948c018ccdf) Commit: https://github.com/jeffjirsa/cassandra/commit/79642ea4f56a33f249e807abdd562f89d20f6c36 Diff at https://github.com/apache/cassandra/compare/trunk...jeffjirsa:cassandra-5791.diff , I'll also attach here as cassandra-5791-20150319.diff Passing: {noformat} [junit] - --- [junit] Testsuite: org.apache.cassandra.db.VerifyTest [junit] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.803 sec [junit] {noformat} [~jbellis] and [~benedict] - if you want unit tests for DataIntegrityMetadata, Jira it, assign me, and I'll write them. I'd have done it tonight but I can't convince myself that they're not redundant with the (included) verifier unit tests which will test the checksums anyway. A nodetool command to validate all sstables in a node - Key: CASSANDRA-5791 URL: https://issues.apache.org/jira/browse/CASSANDRA-5791 Project: Cassandra Issue Type: New Feature Components: Core Reporter: sankalp kohli Assignee: Jeff Jirsa Priority: Minor Fix For: 3.0 Attachments: cassandra-5791-20150319.diff, cassandra-5791-patch-3.diff, cassandra-5791.patch-2 Currently there is no nodetool command to validate all sstables on disk. The only way to do this is to run a repair and see if it succeeds. But we cannot repair the system keyspace. Also we can run upgrade sstables but that rewrites all the sstables. This command should check the hash of all sstables and return whether all data is readable or not. This should NOT care about consistency. The compressed sstables do not have a hash so not sure how it will work there. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-5791) A nodetool command to validate all sstables in a node
[ https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-5791: -- Attachment: cassandra-5791-20150319.diff A nodetool command to validate all sstables in a node - Key: CASSANDRA-5791 URL: https://issues.apache.org/jira/browse/CASSANDRA-5791 Project: Cassandra Issue Type: New Feature Components: Core Reporter: sankalp kohli Assignee: Jeff Jirsa Priority: Minor Fix For: 3.0 Attachments: cassandra-5791-20150319.diff, cassandra-5791-patch-3.diff, cassandra-5791.patch-2 Currently there is no nodetool command to validate all sstables on disk. The only way to do this is to run a repair and see if it succeeds. But we cannot repair the system keyspace. Also we can run upgrade sstables but that rewrites all the sstables. This command should check the hash of all sstables and return whether all data is readable or not. This should NOT care about consistency. The compressed sstables do not have a hash so not sure how it will work there. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-9011) Identify missing test coverage by component
Ariel Weisberg created CASSANDRA-9011: - Summary: Identify missing test coverage by component Key: CASSANDRA-9011 URL: https://issues.apache.org/jira/browse/CASSANDRA-9011 Project: Cassandra Issue Type: Task Reporter: Ariel Weisberg Assignee: Benedict Identify components that have bad/missing coverage (could be whitebox or blackbox). Make suggestions for what kind of test would provide the necessary coverage for each component. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-9012) Triage missing test coverage
Ariel Weisberg created CASSANDRA-9012: - Summary: Triage missing test coverage Key: CASSANDRA-9012 URL: https://issues.apache.org/jira/browse/CASSANDRA-9012 Project: Cassandra Issue Type: Task Reporter: Ariel Weisberg Assignee: Ariel Weisberg Review, sort, prioritize. Discuss result order and refine as necessary. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8553) Add a key-value payload for third party usage
[ https://issues.apache.org/jira/browse/CASSANDRA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369737#comment-14369737 ] Robert Stupp commented on CASSANDRA-8553: - Cool trick making a final field non-final - didn't know that was possible. Removing the methods that don't take {{customPayload}} from {{QueryHandler}} works - except that I had to update {{CassandraServer}}. I think the possibility to provide a custom {{QueryHandler}} via {{cassandra.custom_query_handler_class}} is a bit broken since some functionalities bypass {{ClientState.getCQLQueryHandler()}} by using {{QueryProcessor.instance}} (that’s why I deleted v3 of the patch). {{QueryProcessor}} is heavily used from {{CassandraAuthorizer}} and {{CassandraRoleManager}}. Besides these, there are some unit tests that use {{QueryProcessor}} directly - but these are fine IMO. I could check whether all utests still pass after usages of {{QueryProcessor.instance}} are replaced with {{ClientState.getCQLQueryHandler()}}. Altogether I’m not sure whether the authentication stuff can go through the custom implementation or has to stick with {{QueryProcessor}}. However - if the auth stuff should be migrated to {{QueryHandler}}, it would be a bit bigger than a simple search-and-replace. TL;DR: Should {{CassandraAuthorizer}} and {{CassandraRoleManager}} (and anything else) use {{QueryHandler}} instead of {{QueryProcessor}}? BTW: I’ve addressed the other issues and rebased my branch. Add a key-value payload for third party usage - Key: CASSANDRA-8553 URL: https://issues.apache.org/jira/browse/CASSANDRA-8553 Project: Cassandra Issue Type: Sub-task Reporter: Sergio Bossa Assignee: Robert Stupp Labels: client-impacting, protocolv4 Fix For: 3.0 Attachments: 8553-v2.txt, 8553.txt A useful improvement would be to include a generic key-value payload, so that developers implementing a custom {{QueryHandler}} could leverage that to move custom data back and forth.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
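To make the ticket's payload idea concrete, here is an entirely hypothetical sketch; none of these names are Cassandra's real QueryHandler API. The idea is simply that a request carries an opaque map of named byte buffers which a custom handler can inspect and echo back alongside the response:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Entirely hypothetical: PayloadDemo and handle() are invented for
// illustration and are not Cassandra's QueryHandler interface. Shows a
// key-value payload travelling from request to response.
final class PayloadDemo
{
    static ByteBuffer utf8(String s)
    {
        return ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8));
    }

    static String readUtf8(ByteBuffer b)
    {
        ByteBuffer copy = b.duplicate(); // don't disturb the caller's position
        byte[] bytes = new byte[copy.remaining()];
        copy.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // A handler could, for example, read a tracing flag from the incoming
    // payload and return its own entries with the result.
    static Map<String, ByteBuffer> handle(Map<String, ByteBuffer> customPayload)
    {
        boolean trace = customPayload.containsKey("trace");
        return Map.of("handled-by", utf8("my-handler"),
                      "traced", utf8(Boolean.toString(trace)));
    }
}
```

Because the payload is opaque bytes keyed by strings, the server core never needs to interpret it; only the custom handler and the client-side driver agree on its meaning.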
[jira] [Commented] (CASSANDRA-5791) A nodetool command to validate all sstables in a node
[ https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369750#comment-14369750 ] Jonathan Ellis commented on CASSANDRA-5791: --- reverted in b25adc765769869d16410f1ca156227745d9b17b until the tests can be fixed A nodetool command to validate all sstables in a node - Key: CASSANDRA-5791 URL: https://issues.apache.org/jira/browse/CASSANDRA-5791 Project: Cassandra Issue Type: New Feature Components: Core Reporter: sankalp kohli Assignee: Jeff Jirsa Priority: Minor Fix For: 3.0 Attachments: cassandra-5791-patch-3.diff, cassandra-5791.patch-2 Currently there is no nodetool command to validate all sstables on disk. The only way to do this is to run a repair and see if it succeeds. But we cannot repair the system keyspace. Also we can run upgrade sstables but that rewrites all the sstables. This command should check the hash of all sstables and return whether all data is readable or not. This should NOT care about consistency. The compressed sstables do not have a hash so not sure how it will work there. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8553) Add a key-value payload for third party usage
[ https://issues.apache.org/jira/browse/CASSANDRA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369771#comment-14369771 ] Sam Tunnicliffe commented on CASSANDRA-8553: {{QueryHandler}} is for statements which originate from client requests; internal stuff generally uses {{QueryProcessor}} directly. From CASSANDRA-6659: bq. \[provide an interface to\] allow users to provide a specific class of their own (implementing said interface) to which the native protocol would handoff queries Which is why {{CassandraAuthorizer}} and {{CassandraRoleManager}} (and {{PasswordAuthenticator}}, while we're in the auth subsystem) do it that way. Add a key-value payload for third party usage - Key: CASSANDRA-8553 URL: https://issues.apache.org/jira/browse/CASSANDRA-8553 Project: Cassandra Issue Type: Sub-task Reporter: Sergio Bossa Assignee: Robert Stupp Labels: client-impacting, protocolv4 Fix For: 3.0 Attachments: 8553-v2.txt, 8553.txt A useful improvement would be to include a generic key-value payload, so that developers implementing a custom {{QueryHandler}} could leverage that to move custom data back and forth. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
cassandra git commit: Revert nodetool command to validate all sstables in a node
Repository: cassandra Updated Branches: refs/heads/trunk 1279009e0 - b25adc765 Revert nodetool command to validate all sstables in a node This reverts commit 21bdf8700601f8150e8c13e0b4f71e061822c802. Conflicts: CHANGES.txt src/java/org/apache/cassandra/io/util/DataIntegrityMetadata.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b25adc76 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b25adc76 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b25adc76 Branch: refs/heads/trunk Commit: b25adc765769869d16410f1ca156227745d9b17b Parents: 1279009 Author: Jonathan Ellis jbel...@apache.org Authored: Thu Mar 19 12:22:45 2015 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Thu Mar 19 12:22:45 2015 -0500 -- CHANGES.txt | 1 - bin/sstableverify | 55 --- bin/sstableverify.bat | 41 -- .../apache/cassandra/db/ColumnFamilyStore.java | 5 - .../db/compaction/CompactionManager.java| 36 -- .../cassandra/db/compaction/OperationType.java | 3 +- .../cassandra/db/compaction/Verifier.java | 280 .../apache/cassandra/io/sstable/Component.java | 4 +- .../io/util/DataIntegrityMetadata.java | 53 --- .../cassandra/service/StorageService.java | 12 - .../cassandra/service/StorageServiceMBean.java | 8 - .../org/apache/cassandra/tools/NodeProbe.java | 19 +- .../org/apache/cassandra/tools/NodeTool.java| 35 +- .../cassandra/tools/StandaloneVerifier.java | 222 -- .../org/apache/cassandra/db/VerifyTest.java | 428 --- 15 files changed, 7 insertions(+), 1195 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/b25adc76/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 955d8e3..2f4764b 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,6 +1,5 @@ 3.0 * Partition intra-cluster message streams by size, not type (CASSANDRA-8789) - * Add nodetool command to validate all sstables in a node (CASSANDRA-5791) * Add WriteFailureException to native protocol, notify 
coordinator of write failures (CASSANDRA-8592) * Convert SequentialWriter to nio (CASSANDRA-8709) http://git-wip-us.apache.org/repos/asf/cassandra/blob/b25adc76/bin/sstableverify -- diff --git a/bin/sstableverify b/bin/sstableverify deleted file mode 100644 index c3e40c7..000 --- a/bin/sstableverify +++ /dev/null @@ -1,55 +0,0 @@ -#!/bin/sh - -# Licensed to the Apache Software Foundation (ASF) under one -# or more contributor license agreements. See the NOTICE file -# distributed with this work for additional information -# regarding copyright ownership. The ASF licenses this file -# to you under the Apache License, Version 2.0 (the -# License); you may not use this file except in compliance -# with the License. You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an AS IS BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -if [ x$CASSANDRA_INCLUDE = x ]; then -for include in /usr/share/cassandra/cassandra.in.sh \ - /usr/local/share/cassandra/cassandra.in.sh \ - /opt/cassandra/cassandra.in.sh \ - ~/.cassandra.in.sh \ - `dirname $0`/cassandra.in.sh; do -if [ -r $include ]; then -. $include -break -fi -done -elif [ -r $CASSANDRA_INCLUDE ]; then -. 
$CASSANDRA_INCLUDE -fi - -# Use JAVA_HOME if set, otherwise look for java in PATH -if [ -x $JAVA_HOME/bin/java ]; then -JAVA=$JAVA_HOME/bin/java -else -JAVA=`which java` -fi - -if [ -z $CLASSPATH ]; then -echo You must set the CLASSPATH var 2 -exit 1 -fi - -if [ x$MAX_HEAP_SIZE = x ]; then -MAX_HEAP_SIZE=256M -fi - -$JAVA $JAVA_AGENT -ea -cp $CLASSPATH -Xmx$MAX_HEAP_SIZE \ --Dcassandra.storagedir=$cassandra_storagedir \ --Dlogback.configurationFile=logback-tools.xml \ -org.apache.cassandra.tools.StandaloneVerifier $@ - -# vi:ai sw=4 ts=4 tw=0 et http://git-wip-us.apache.org/repos/asf/cassandra/blob/b25adc76/bin/sstableverify.bat -- diff --git a/bin/sstableverify.bat b/bin/sstableverify.bat deleted file mode 100644 index aa08826..000 --- a/bin/sstableverify.bat
[jira] [Commented] (CASSANDRA-8942) Keep node up even when bootstrap is failed (and provide tool to resume bootstrap)
[ https://issues.apache.org/jira/browse/CASSANDRA-8942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369778#comment-14369778 ] Sam Tunnicliffe commented on CASSANDRA-8942: +1, both LGTM. Trivial nit: should use a wildcard import for o.a.c.streaming in Bootstrapper Keep node up even when bootstrap is failed (and provide tool to resume bootstrap) - Key: CASSANDRA-8942 URL: https://issues.apache.org/jira/browse/CASSANDRA-8942 Project: Cassandra Issue Type: Sub-task Reporter: Yuki Morishita Assignee: Yuki Morishita Priority: Minor Fix For: 3.0 With CASSANDRA-8838, we can keep the bootstrapping node up when some streaming failed, if we provide a tool to resume failed bootstrap streaming. A failed bootstrap node enters a mode similar to 'write survey mode'. So other nodes in the cluster still view it as bootstrapping, though they send writes to the bootstrapping node as well. By providing a new nodetool command to resume bootstrap from saved bootstrap state, we can continue bootstrapping after resolving the issue that caused the previous bootstrap failure. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-8995) Move NodeToolCmd subclasses to their own package
[ https://issues.apache.org/jira/browse/CASSANDRA-8995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuki Morishita reassigned CASSANDRA-8995:
-----------------------------------------

    Assignee: Yuki Morishita

Move NodeToolCmd subclasses to their own package
------------------------------------------------

                Key: CASSANDRA-8995
                URL: https://issues.apache.org/jira/browse/CASSANDRA-8995
            Project: Cassandra
         Issue Type: Task
           Reporter: Yuki Morishita
           Assignee: Yuki Morishita
           Priority: Trivial
            Fix For: 3.0

We now have nearly 80 nodetool commands that are written as inner classes inside NodeTool. We should move those inner classes out to their own package (o.a.c.tools.nodetool?) for easier maintenance in the future. We still have a couple of fixes to nodetool waiting to be committed, so this should be done right before we branch out 3.0.
[jira] [Created] (CASSANDRA-8995) Move NodeToolCmd subclasses to their own package
Yuki Morishita created CASSANDRA-8995:
--------------------------------------

             Summary: Move NodeToolCmd subclasses to their own package
                 Key: CASSANDRA-8995
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8995
             Project: Cassandra
          Issue Type: Task
            Reporter: Yuki Morishita
            Priority: Trivial
             Fix For: 3.0

We now have nearly 80 nodetool commands that are written as inner classes inside NodeTool. We should move those inner classes out to their own package (o.a.c.tools.nodetool?) for easier maintenance in the future. We still have a couple of fixes to nodetool waiting to be committed, so this should be done right before we branch out 3.0.
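The proposed refactor amounts to promoting each static inner command class to a top-level class in a new package. A minimal generic sketch of the before/after shape (all class names here are placeholders; the ticket only tentatively suggests o.a.c.tools.nodetool as the package):

```java
// "Before" shape: ~80 commands live as static inner classes of one NodeTool class.
class NodeToolBefore {
    static abstract class Cmd {
        abstract String run();
    }
    static class Version extends Cmd {
        String run() { return "3.0"; }
    }
    // ... dozens more inner classes like Status, Ring, Repair ...
}

// "After" shape: the same command promoted to a top-level class, which would
// live in its own file under the new package; NodeTool keeps only dispatch.
class VersionCmd {
    String run() { return "3.0"; }
}

public class NodeToolRefactorSketch {
    public static void main(String[] args) {
        // Behavior is unchanged; only the packaging moves.
        System.out.println(new NodeToolBefore.Version().run()); // 3.0
        System.out.println(new VersionCmd().run());             // 3.0
    }
}
```

Since commands keep extending the shared NodeToolCmd base, the move is mostly mechanical, which is why the ticket schedules it just before branching 3.0 to avoid churn against pending nodetool patches.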
[4/5] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce3053a2/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
index 6d1dc09,0000000..f014640
mode 100644,000000..100644
--- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
@@@ -1,2031 -1,0 +1,2031 @@@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.io.sstable.format;
+
+import java.io.*;
+import java.nio.ByteBuffer;
+import java.util.*;
+import java.util.concurrent.*;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Predicate;
+import com.google.common.collect.Iterators;
+import com.google.common.collect.Ordering;
+import com.google.common.primitives.Longs;
+import com.google.common.util.concurrent.RateLimiter;
+
+import com.clearspring.analytics.stream.cardinality.CardinalityMergeException;
+import com.clearspring.analytics.stream.cardinality.HyperLogLogPlus;
+import com.clearspring.analytics.stream.cardinality.ICardinality;
+import org.apache.cassandra.cache.CachingOptions;
+import org.apache.cassandra.cache.InstrumentingCache;
+import org.apache.cassandra.cache.KeyCacheKey;
+import org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor;
+import org.apache.cassandra.concurrent.ScheduledExecutors;
+import org.apache.cassandra.config.*;
+import org.apache.cassandra.db.*;
+import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
+import org.apache.cassandra.db.commitlog.ReplayPosition;
+import org.apache.cassandra.db.composites.CellName;
+import org.apache.cassandra.db.filter.ColumnSlice;
+import org.apache.cassandra.db.index.SecondaryIndex;
+import org.apache.cassandra.dht.*;
+import org.apache.cassandra.io.compress.CompressionMetadata;
+import org.apache.cassandra.io.sstable.*;
+import org.apache.cassandra.io.sstable.metadata.*;
+import org.apache.cassandra.io.util.*;
+import org.apache.cassandra.metrics.RestorableMeter;
+import org.apache.cassandra.metrics.StorageMetrics;
+import org.apache.cassandra.service.ActiveRepairService;
+import org.apache.cassandra.service.CacheService;
+import org.apache.cassandra.service.StorageService;
+import org.apache.cassandra.utils.*;
+import org.apache.cassandra.utils.concurrent.OpOrder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.apache.cassandra.utils.concurrent.Ref;
+import org.apache.cassandra.utils.concurrent.RefCounted;
+import org.apache.cassandra.utils.concurrent.SelfRefCounted;
+
+import static org.apache.cassandra.db.Directories.SECONDARY_INDEX_NAME_SEPARATOR;
+
+/**
+ * An SSTableReader can be constructed in a number of places, but typically is either
+ * read from disk at startup, or constructed from a flushed memtable, or after compaction
+ * to replace some existing sstables. However once created, an sstablereader may also be modified.
+ *
+ * A reader's OpenReason describes its current stage in its lifecycle, as follows:
+ *
+ * NORMAL
+ * From:       None        => Reader has been read from disk, either at startup or from a flushed memtable
+ *             EARLY       => Reader is the final result of a compaction
+ *             MOVED_START => Reader WAS being compacted, but this failed and it has been restored to NORMAL status
+ *
+ * EARLY
+ * From:       None        => Reader is a compaction replacement that is either incomplete and has been opened
+ *                            to represent its partial result status, or has been finished but the compaction
+ *                            it is a part of has not yet completed fully
+ *             EARLY       => Same as from None, only it is not the first time it has been
+ *
+ * MOVED_START
+ * From:       NORMAL      => Reader is being compacted. This compaction has not finished, but the compaction result
+ *                            is
[jira] [Commented] (CASSANDRA-8553) Add a key-value payload for third party usage
[ https://issues.apache.org/jira/browse/CASSANDRA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370023#comment-14370023 ]

Robert Stupp commented on CASSANDRA-8553:
-----------------------------------------

Thanks - that explains why {{QueryProcessor}} is used for auth and low-level system stuff. Here are some more places I found by searching for usages of {{QueryProcessor}}:
* {{BatchMessage.execute()}}: if the batch contains a {{String}}, {{QueryProcessor.parseStatement((String)query, state);}} is called; otherwise it uses {{handler.getPrepared((MD5Digest)query);}}
* {{CQLMetrics.init()}} uses {{QueryProcessor.preparedStatementsCount();}}
* {{CQLSSTableWriter.Builder.getStatement()}} uses {{QueryProcessor.getStatement(query, state);}}

I think the last two are OK. But the one in {{BatchMessage}} looks strange in the context of a custom {{QueryHandler}} - basically it would require a {{QueryHandler.parse()}} method with the same signature as {{prepare()}}. The metrics thing would also just need a method in the interface.

Add a key-value payload for third party usage
---------------------------------------------

                Key: CASSANDRA-8553
                URL: https://issues.apache.org/jira/browse/CASSANDRA-8553
            Project: Cassandra
         Issue Type: Sub-task
           Reporter: Sergio Bossa
           Assignee: Robert Stupp
             Labels: client-impacting, protocolv4
            Fix For: 3.0
        Attachments: 8553-v2.txt, 8553.txt

A useful improvement would be to include a generic key-value payload, so that developers implementing a custom {{QueryHandler}} could leverage that to move custom data back and forth.
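The dispatch Robert describes in BatchMessage.execute() — parse strings directly, look prepared-statement ids up via the handler — can be modeled in a few lines. This is a simplified stand-in to make the asymmetry visible, not Cassandra's real types: Digest here stands in for MD5Digest, and the string prefix stands in for a parsed statement.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the BatchMessage.execute() dispatch discussed above:
// a String query takes the parse path (bypassing any custom QueryHandler),
// while a prepared-statement id goes through the handler lookup path.
public class BatchDispatchSketch {
    // Stand-in for Cassandra's MD5Digest prepared-statement id.
    static final class Digest {
        final String hex;
        Digest(String hex) { this.hex = hex; }
    }

    static final Map<String, String> prepared = new HashMap<>();

    static String resolve(Object query) {
        if (query instanceof String)
            return "parsed:" + query;              // QueryProcessor.parseStatement(...) path
        return prepared.get(((Digest) query).hex); // handler.getPrepared(digest) path
    }

    public static void main(String[] args) {
        prepared.put("md5-abc", "SELECT x FROM t");
        System.out.println(resolve("INSERT INTO t (x) VALUES (1)"));
        System.out.println(resolve(new Digest("md5-abc")));
    }
}
```

The asymmetry is exactly the gap the comment points at: a custom QueryHandler intercepts only the digest branch, so closing it would need something like the hypothetical {{QueryHandler.parse()}} hook with the same signature as {{prepare()}}.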
[2/5] cassandra git commit: IndexSummary effective interval is a guideline, not a rule
IndexSummary effective interval is a guideline, not a rule

patch by benedict; reviewed by tyler for CASSANDRA-8993

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e8276aa5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e8276aa5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e8276aa5

Branch: refs/heads/trunk
Commit: e8276aa539ae618a41acb8b0250886e773b73ef3
Parents: 763130b
Author: Benedict Elliott Smith <bened...@apache.org>
Authored: Thu Mar 19 20:09:29 2015 +0000
Committer: Benedict Elliott Smith <bened...@apache.org>
Committed: Thu Mar 19 20:09:29 2015 +0000

----------------------------------------------------------------------
 CHANGES.txt                                            |  1 +
 .../org/apache/cassandra/io/sstable/SSTableReader.java |  8 ++++----
 .../apache/cassandra/io/sstable/IndexSummaryTest.java  | 13 +++++++++---
 3 files changed, 15 insertions(+), 7 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e8276aa5/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 3f96330..14a45a3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * IndexSummary effectiveIndexInterval is now a guideline, not a rule (CASSANDRA-8993)
  * Use correct bounds for page cache eviction of compressed files (CASSANDRA-8746)
  * SSTableScanner enforces its bounds (CASSANDRA-8946)
  * Cleanup cell equality (CASSANDRA-8947)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e8276aa5/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index e4a6e85..2bc32d3 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -1083,7 +1083,7 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
         return referencedIndexSummary.getPosition(getIndexSummaryIndexFromBinarySearchResult(binarySearchResult));
     }

-    private static int getIndexSummaryIndexFromBinarySearchResult(int binarySearchResult)
+    public static int getIndexSummaryIndexFromBinarySearchResult(int binarySearchResult)
     {
         if (binarySearchResult < 0)
         {
@@ -1466,12 +1466,12 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
         // of the next interval).
         int i = 0;
         Iterator<FileDataInput> segments = ifile.iterator(sampledPosition);
-        while (segments.hasNext() && i <= effectiveInterval)
+        while (segments.hasNext())
         {
             FileDataInput in = segments.next();
             try
             {
-                while (!in.isEOF() && i <= effectiveInterval)
+                while (!in.isEOF())
                 {
                     i++;
@@ -1481,7 +1481,7 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
                     boolean exactMatch; // is the current position an exact match for the key, suitable for caching

                     // Compare raw keys if possible for performance, otherwise compare decorated keys.
-                    if (op == Operator.EQ)
+                    if (op == Operator.EQ && i <= effectiveInterval)
                     {
                         opSatisfied = exactMatch = indexKey.equals(((DecoratedKey) key).getKey());
                     }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e8276aa5/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java
----------------------------------------------------------------------
diff --git a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java
index 95183d4..0760aa3 100644
--- a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java
@@ -222,13 +222,20 @@ public class IndexSummaryTest
         original.close();
     }

-    private void testPosition(IndexSummary original, IndexSummary downsampled, Iterable<DecoratedKey> keys)
+    private void testPosition(IndexSummary original, IndexSummary downsampled, List<DecoratedKey> keys)
     {
         for (DecoratedKey key : keys)
         {
             long orig = SSTableReader.getIndexScanPositionFromBinarySearchResult(original.binarySearch(key), original);
-            long down = SSTableReader.getIndexScanPositionFromBinarySearchResult(downsampled.binarySearch(key), downsampled);
-            assert down >= orig;
+            int binarySearch = downsampled.binarySearch(key);
+            int index =
[5/5] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk

Conflicts:
	src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ce3053a2
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ce3053a2
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ce3053a2

Branch: refs/heads/trunk
Commit: ce3053a23f7689086ca704242ced73687b3c0318
Parents: 4597bb5 e8276aa
Author: Benedict Elliott Smith <bened...@apache.org>
Authored: Thu Mar 19 20:12:56 2015 +0000
Committer: Benedict Elliott Smith <bened...@apache.org>
Committed: Thu Mar 19 20:12:56 2015 +0000

----------------------------------------------------------------------
 CHANGES.txt                                            |  1 +
 .../cassandra/io/sstable/format/SSTableReader.java     |  2 +-
 .../io/sstable/format/big/BigTableReader.java          |  6 +++---
 .../apache/cassandra/io/sstable/IndexSummaryTest.java  | 13 ++++++---
 4 files changed, 15 insertions(+), 7 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce3053a2/CHANGES.txt
----------------------------------------------------------------------
diff --cc CHANGES.txt
index 2f4764b,14a45a3..cd39890
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,78 -1,5 +1,79 @@@
 +3.0
 + * Partition intra-cluster message streams by size, not type (CASSANDRA-8789)
 + * Add WriteFailureException to native protocol, notify coordinator of
 +   write failures (CASSANDRA-8592)
 + * Convert SequentialWriter to nio (CASSANDRA-8709)
 + * Add role based access control (CASSANDRA-7653, 8650, 7216, 8760, 8849, 8761, 8850)
 + * Record client ip address in tracing sessions (CASSANDRA-8162)
 + * Indicate partition key columns in response metadata for prepared
 +   statements (CASSANDRA-7660)
 + * Merge UUIDType and TimeUUIDType parse logic (CASSANDRA-8759)
 + * Avoid memory allocation when searching index summary (CASSANDRA-8793)
 + * Optimise (Time)?UUIDType Comparisons (CASSANDRA-8730)
 + * Make CRC32Ex into a separate maven dependency (CASSANDRA-8836)
 + * Use preloaded jemalloc w/ Unsafe (CASSANDRA-8714)
 + * Avoid accessing partitioner through StorageProxy (CASSANDRA-8244, 8268)
 + * Upgrade Metrics library and remove depricated metrics (CASSANDRA-5657)
 + * Serializing Row cache alternative, fully off heap (CASSANDRA-7438)
 + * Duplicate rows returned when in clause has repeated values (CASSANDRA-6707)
 + * Make CassandraException unchecked, extend RuntimeException (CASSANDRA-8560)
 + * Support direct buffer decompression for reads (CASSANDRA-8464)
 + * DirectByteBuffer compatible LZ4 methods (CASSANDRA-7039)
 + * Group sstables for anticompaction correctly (CASSANDRA-8578)
 + * Add ReadFailureException to native protocol, respond
 +   immediately when replicas encounter errors while handling
 +   a read request (CASSANDRA-7886)
 + * Switch CommitLogSegment from RandomAccessFile to nio (CASSANDRA-8308)
 + * Allow mixing token and partition key restrictions (CASSANDRA-7016)
 + * Support index key/value entries on map collections (CASSANDRA-8473)
 + * Modernize schema tables (CASSANDRA-8261)
 + * Support for user-defined aggregation functions (CASSANDRA-8053)
 + * Fix NPE in SelectStatement with empty IN values (CASSANDRA-8419)
 + * Refactor SelectStatement, return IN results in natural order instead
 +   of IN value list order and ignore duplicate values in partition key IN restrictions (CASSANDRA-7981)
 + * Support UDTs, tuples, and collections in user-defined
 +   functions (CASSANDRA-7563)
 + * Fix aggregate fn results on empty selection, result column name,
 +   and cqlsh parsing (CASSANDRA-8229)
 + * Mark sstables as repaired after full repair (CASSANDRA-7586)
 + * Extend Descriptor to include a format value and refactor reader/writer
 +   APIs (CASSANDRA-7443)
 + * Integrate JMH for microbenchmarks (CASSANDRA-8151)
 + * Keep sstable levels when bootstrapping (CASSANDRA-7460)
 + * Add Sigar library and perform basic OS settings check on startup (CASSANDRA-7838)
 + * Support for aggregation functions (CASSANDRA-4914)
 + * Remove cassandra-cli (CASSANDRA-7920)
 + * Accept dollar quoted strings in CQL (CASSANDRA-7769)
 + * Make assassinate a first class command (CASSANDRA-7935)
 + * Support IN clause on any partition key column (CASSANDRA-7855)
 + * Support IN clause on any clustering column (CASSANDRA-4762)
 + * Improve compaction logging (CASSANDRA-7818)
 + * Remove YamlFileNetworkTopologySnitch (CASSANDRA-7917)
 + * Do anticompaction in groups (CASSANDRA-6851)
 + * Support user-defined functions (CASSANDRA-7395, 7526, 7562, 7740, 7781, 7929,
 +   7924, 7812, 8063, 7813, 7708)
 + * Permit configurable timestamps with cassandra-stress (CASSANDRA-7416)
 + * Move sstable RandomAccessReader to nio2, which allows using the
 +   FILE_SHARE_DELETE flag on Windows (CASSANDRA-4050)
 + * Remove CQL2
[3/5] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
http://git-wip-us.apache.org/repos/asf/cassandra/blob/ce3053a2/src/java/org/apache/cassandra/io/sstable/format/big/BigTableReader.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/io/sstable/format/big/BigTableReader.java
index dec9f11,0000000..baf6d51
mode 100644,000000..100644
--- a/src/java/org/apache/cassandra/io/sstable/format/big/BigTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/big/BigTableReader.java
@@@ -1,260 -1,0 +1,260 @@@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.cassandra.io.sstable.format.big;
+
+import com.google.common.util.concurrent.RateLimiter;
+import org.apache.cassandra.cache.KeyCacheKey;
+import org.apache.cassandra.config.CFMetaData;
+import org.apache.cassandra.db.DataRange;
+import org.apache.cassandra.db.DecoratedKey;
+import org.apache.cassandra.db.RowIndexEntry;
+import org.apache.cassandra.db.RowPosition;
+import org.apache.cassandra.db.columniterator.OnDiskAtomIterator;
+import org.apache.cassandra.db.composites.CellName;
+import org.apache.cassandra.db.filter.ColumnSlice;
+import org.apache.cassandra.dht.IPartitioner;
+import org.apache.cassandra.dht.Range;
+import org.apache.cassandra.dht.Token;
+import org.apache.cassandra.io.sstable.Component;
+import org.apache.cassandra.io.sstable.CorruptSSTableException;
+import org.apache.cassandra.io.sstable.Descriptor;
+import org.apache.cassandra.io.sstable.ISSTableScanner;
+import org.apache.cassandra.io.sstable.format.SSTableReader;
+import org.apache.cassandra.io.sstable.metadata.StatsMetadata;
+import org.apache.cassandra.io.util.FileDataInput;
+import org.apache.cassandra.io.util.FileUtils;
+import org.apache.cassandra.tracing.Tracing;
+import org.apache.cassandra.utils.ByteBufferUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.util.*;
+
+/**
+ * SSTableReaders are open()ed by Keyspace.onStart; after that they are created by SSTableWriter.renameAndOpen.
+ * Do not re-call open() on existing SSTable files; use the references kept by ColumnFamilyStore post-start instead.
+ */
+public class BigTableReader extends SSTableReader
+{
+    private static final Logger logger = LoggerFactory.getLogger(BigTableReader.class);
+
+    BigTableReader(Descriptor desc, Set<Component> components, CFMetaData metadata, IPartitioner partitioner, Long maxDataAge, StatsMetadata sstableMetadata, OpenReason openReason)
+    {
+        super(desc, components, metadata, partitioner, maxDataAge, sstableMetadata, openReason);
+    }
+
+    public OnDiskAtomIterator iterator(DecoratedKey key, SortedSet<CellName> columns)
+    {
+        return new SSTableNamesIterator(this, key, columns);
+    }
+
+    public OnDiskAtomIterator iterator(FileDataInput input, DecoratedKey key, SortedSet<CellName> columns, RowIndexEntry indexEntry )
+    {
+        return new SSTableNamesIterator(this, input, key, columns, indexEntry);
+    }
+
+    public OnDiskAtomIterator iterator(DecoratedKey key, ColumnSlice[] slices, boolean reverse)
+    {
+        return new SSTableSliceIterator(this, key, slices, reverse);
+    }
+
+    public OnDiskAtomIterator iterator(FileDataInput input, DecoratedKey key, ColumnSlice[] slices, boolean reverse, RowIndexEntry indexEntry)
+    {
+        return new SSTableSliceIterator(this, input, key, slices, reverse, indexEntry);
+    }
+
+    /**
+     *
+     * @param dataRange filter to use when reading the columns
+     * @return A Scanner for seeking over the rows of the SSTable.
+     */
+    public ISSTableScanner getScanner(DataRange dataRange, RateLimiter limiter)
+    {
+        return BigTableScanner.getScanner(this, dataRange, limiter);
+    }
+
+
+    /**
+     * Direct I/O SSTableScanner over a defined collection of ranges of tokens.
+     *
+     * @param ranges the range of keys to cover
+     * @return A Scanner for seeking over the rows of the SSTable.
+     */
+    public ISSTableScanner getScanner(Collection<Range<Token>> ranges, RateLimiter limiter)
+    {
[jira] [Commented] (CASSANDRA-8993) EffectiveIndexInterval calculation is incorrect
[ https://issues.apache.org/jira/browse/CASSANDRA-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14369967#comment-14369967 ]

Benedict commented on CASSANDRA-8993:
-------------------------------------

Early opening doesn't touch the index or index summary contents. I've reproduced the problem on the raw sstable directly with some hacky code to just open the index and summary directly. The sstable is a prior format, so it has never been successfully subjected to early opening or rewriting. You can download the files from CASSANDRA-8851 yourself, and I can provide you with the gpg key.

I suspect the problem is related to the elimination of indexInterval from CfMetaData prematurely. It looks like it is needed to establish the actual sampling level - users that have modified this will have the incorrect level set after upgrade until their sstables are rewritten.

EffectiveIndexInterval calculation is incorrect
-----------------------------------------------

                Key: CASSANDRA-8993
                URL: https://issues.apache.org/jira/browse/CASSANDRA-8993
            Project: Cassandra
         Issue Type: Bug
         Components: Core
           Reporter: Benedict
           Assignee: Benedict
           Priority: Blocker
            Fix For: 2.1.4
        Attachments: 8993.txt

I'm not familiar enough with the calculation itself to understand why this is happening, but see discussion on CASSANDRA-8851 for the background. I've introduced a test case to look for this during downsampling, but it seems to pass just fine, so it may be an artefact of upgrading.

The problem was, unfortunately, not manifesting directly because it would simply result in a failed lookup. This was only exposed when early opening used firstKeyBeyond, which does not use the effective interval, and provided the result to getPosition(). I propose a simple fix that ensures a bug here cannot break correctness. Perhaps [~thobbs] can follow up with an investigation as to how it actually went wrong?
[jira] [Commented] (CASSANDRA-8993) EffectiveIndexInterval calculation is incorrect
[ https://issues.apache.org/jira/browse/CASSANDRA-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14369973#comment-14369973 ]

Benedict commented on CASSANDRA-8993:
-------------------------------------

Either way, I think it would be helpful to expand the current test coverage to include summaries after the serialization (under different de/serialization settings).

EffectiveIndexInterval calculation is incorrect
-----------------------------------------------

                Key: CASSANDRA-8993
                URL: https://issues.apache.org/jira/browse/CASSANDRA-8993
            Project: Cassandra
         Issue Type: Bug
         Components: Core
           Reporter: Benedict
           Assignee: Benedict
           Priority: Blocker
            Fix For: 2.1.4
        Attachments: 8993.txt

I'm not familiar enough with the calculation itself to understand why this is happening, but see discussion on CASSANDRA-8851 for the background. I've introduced a test case to look for this during downsampling, but it seems to pass just fine, so it may be an artefact of upgrading.

The problem was, unfortunately, not manifesting directly because it would simply result in a failed lookup. This was only exposed when early opening used firstKeyBeyond, which does not use the effective interval, and provided the result to getPosition(). I propose a simple fix that ensures a bug here cannot break correctness. Perhaps [~thobbs] can follow up with an investigation as to how it actually went wrong?
[jira] [Comment Edited] (CASSANDRA-8993) EffectiveIndexInterval calculation is incorrect
[ https://issues.apache.org/jira/browse/CASSANDRA-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14369967#comment-14369967 ]

Benedict edited comment on CASSANDRA-8993 at 3/19/15 7:52 PM:
--------------------------------------------------------------

Early opening doesn't touch the index or index summary contents. I've reproduced the problem on the raw sstable directly with some hacky code to just open the index and summary directly. The sstable is a prior format, so it has never been successfully subjected to early opening or rewriting. You can download the files from CASSANDRA-8851 yourself, and I can provide you with the gpg key.

was (Author: benedict):
Early opening doesn't touch the index or index summary contents. I've reproduced the problem on the raw sstable directly with some hacky code to just open the index and summary directly. The sstable is a prior format, so it has never been successfully subjected to early opening or rewriting. You can download the files from CASSANDRA-8851 yourself, and I can provide you with the gpg key.

I suspect the problem is related to the elimination of indexInterval from CfMetaData prematurely. It looks like it is needed to establish the actual sampling level - users that have modified this will have the incorrect level set after upgrade until their sstables are rewritten.

EffectiveIndexInterval calculation is incorrect
-----------------------------------------------

                Key: CASSANDRA-8993
                URL: https://issues.apache.org/jira/browse/CASSANDRA-8993
            Project: Cassandra
         Issue Type: Bug
         Components: Core
           Reporter: Benedict
           Assignee: Benedict
           Priority: Blocker
            Fix For: 2.1.4
        Attachments: 8993.txt

I'm not familiar enough with the calculation itself to understand why this is happening, but see discussion on CASSANDRA-8851 for the background. I've introduced a test case to look for this during downsampling, but it seems to pass just fine, so it may be an artefact of upgrading.

The problem was, unfortunately, not manifesting directly because it would simply result in a failed lookup. This was only exposed when early opening used firstKeyBeyond, which does not use the effective interval, and provided the result to getPosition(). I propose a simple fix that ensures a bug here cannot break correctness. Perhaps [~thobbs] can follow up with an investigation as to how it actually went wrong?
[jira] [Commented] (CASSANDRA-8993) EffectiveIndexInterval calculation is incorrect
[ https://issues.apache.org/jira/browse/CASSANDRA-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14369998#comment-14369998 ]

Benedict commented on CASSANDRA-8993:
-------------------------------------

I have had another closer look. It seems that we're reading an interval of 128, when in fact it is 2048. Exactly why this is happening, I don't know. I've tried to poll git to see where it's gone wrong, but it's probably more easily done by someone with knowledge of the history here.

EffectiveIndexInterval calculation is incorrect
-----------------------------------------------

                Key: CASSANDRA-8993
                URL: https://issues.apache.org/jira/browse/CASSANDRA-8993
            Project: Cassandra
         Issue Type: Bug
         Components: Core
           Reporter: Benedict
           Assignee: Benedict
           Priority: Blocker
            Fix For: 2.1.4
        Attachments: 8993.txt

I'm not familiar enough with the calculation itself to understand why this is happening, but see discussion on CASSANDRA-8851 for the background. I've introduced a test case to look for this during downsampling, but it seems to pass just fine, so it may be an artefact of upgrading.

The problem was, unfortunately, not manifesting directly because it would simply result in a failed lookup. This was only exposed when early opening used firstKeyBeyond, which does not use the effective interval, and provided the result to getPosition(). I propose a simple fix that ensures a bug here cannot break correctness. Perhaps [~thobbs] can follow up with an investigation as to how it actually went wrong?
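For context on the 128-vs-2048 discrepancy in the comment above: the effective index interval scales inversely with the summary's sampling level. A minimal sketch of that arithmetic (the constant and formula are assumptions based on the IndexSummary downsampling design, not the exact 2.1 code):

```java
// Sketch of the sampling arithmetic behind the 128-vs-2048 mismatch.
public class EffectiveIntervalSketch {
    // Full sampling level used by the index summary (assumed constant).
    static final int BASE_SAMPLING_LEVEL = 128;

    // At full sampling every minIndexInterval-th index entry is summarized;
    // downsampling keeps only samplingLevel out of every BASE_SAMPLING_LEVEL
    // entries, stretching the effective interval proportionally.
    static double effectiveIndexInterval(int samplingLevel, int minIndexInterval) {
        return (double) BASE_SAMPLING_LEVEL / samplingLevel * minIndexInterval;
    }

    public static void main(String[] args) {
        // Fully sampled summary: effective interval == minIndexInterval.
        System.out.println(effectiveIndexInterval(128, 128)); // 128.0
        // Downsampled to level 8: the interval is really 2048, so code that
        // wrongly assumes full sampling would read 128 where 2048 is correct.
        System.out.println(effectiveIntervalOf8());           // 2048.0
    }

    static double effectiveIntervalOf8() {
        return effectiveIndexInterval(8, 128);
    }
}
```

Under this model, a summary whose stored sampling level is lost or misread as "full" (128) yields exactly the 128-vs-2048 mismatch Benedict reports, which is consistent with his suspicion that the level is being re-derived incorrectly after upgrade.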
[1/5] cassandra git commit: IndexSummary effective interval is a guideline, not a rule
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-2.1 763130bdb -> e8276aa53
  refs/heads/trunk 4597bb5b1 -> ce3053a23

IndexSummary effective interval is a guideline, not a rule

patch by benedict; reviewed by tyler for CASSANDRA-8993

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e8276aa5
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e8276aa5
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e8276aa5

Branch: refs/heads/cassandra-2.1
Commit: e8276aa539ae618a41acb8b0250886e773b73ef3
Parents: 763130b
Author: Benedict Elliott Smith <bened...@apache.org>
Authored: Thu Mar 19 20:09:29 2015 +0000
Committer: Benedict Elliott Smith <bened...@apache.org>
Committed: Thu Mar 19 20:09:29 2015 +0000

----------------------------------------------------------------------
 CHANGES.txt                                            |  1 +
 .../org/apache/cassandra/io/sstable/SSTableReader.java |  8 ++++----
 .../apache/cassandra/io/sstable/IndexSummaryTest.java  | 13 +++++++++---
 3 files changed, 15 insertions(+), 7 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e8276aa5/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 3f96330..14a45a3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 2.1.4
+ * IndexSummary effectiveIndexInterval is now a guideline, not a rule (CASSANDRA-8993)
  * Use correct bounds for page cache eviction of compressed files (CASSANDRA-8746)
  * SSTableScanner enforces its bounds (CASSANDRA-8946)
  * Cleanup cell equality (CASSANDRA-8947)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e8276aa5/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
index e4a6e85..2bc32d3 100644
--- a/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/SSTableReader.java
@@ -1083,7 +1083,7 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
         return referencedIndexSummary.getPosition(getIndexSummaryIndexFromBinarySearchResult(binarySearchResult));
     }

-    private static int getIndexSummaryIndexFromBinarySearchResult(int binarySearchResult)
+    public static int getIndexSummaryIndexFromBinarySearchResult(int binarySearchResult)
     {
         if (binarySearchResult < 0)
         {
@@ -1466,12 +1466,12 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
         // of the next interval).
         int i = 0;
         Iterator<FileDataInput> segments = ifile.iterator(sampledPosition);
-        while (segments.hasNext() && i <= effectiveInterval)
+        while (segments.hasNext())
        {
             FileDataInput in = segments.next();
             try
             {
-                while (!in.isEOF() && i <= effectiveInterval)
+                while (!in.isEOF())
                 {
                     i++;
@@ -1481,7 +1481,7 @@ public class SSTableReader extends SSTable implements SelfRefCounted<SSTableReader>
                     boolean exactMatch; // is the current position an exact match for the key, suitable for caching

                     // Compare raw keys if possible for performance, otherwise compare decorated keys.
-                    if (op == Operator.EQ)
+                    if (op == Operator.EQ && i <= effectiveInterval)
                     {
                         opSatisfied = exactMatch = indexKey.equals(((DecoratedKey) key).getKey());
                     }

http://git-wip-us.apache.org/repos/asf/cassandra/blob/e8276aa5/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java
----------------------------------------------------------------------
diff --git a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java
index 95183d4..0760aa3 100644
--- a/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java
+++ b/test/unit/org/apache/cassandra/io/sstable/IndexSummaryTest.java
@@ -222,13 +222,20 @@ public class IndexSummaryTest
         original.close();
     }

-    private void testPosition(IndexSummary original, IndexSummary downsampled, Iterable<DecoratedKey> keys)
+    private void testPosition(IndexSummary original, IndexSummary downsampled, List<DecoratedKey> keys)
     {
         for (DecoratedKey key : keys)
         {
             long orig = SSTableReader.getIndexScanPositionFromBinarySearchResult(original.binarySearch(key), original);
-            long down = SSTableReader.getIndexScanPositionFromBinarySearchResult(downsampled.binarySearch(key),
[jira] [Commented] (CASSANDRA-8993) EffectiveIndexInterval calculation is incorrect
[ https://issues.apache.org/jira/browse/CASSANDRA-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14370027#comment-14370027 ] Benedict commented on CASSANDRA-8993: - I've pushed the current mitigation, anyway EffectiveIndexInterval calculation is incorrect --- Key: CASSANDRA-8993 URL: https://issues.apache.org/jira/browse/CASSANDRA-8993 Project: Cassandra Issue Type: Bug Components: Core Reporter: Benedict Assignee: Benedict Priority: Blocker Fix For: 2.1.4 Attachments: 8993.txt I'm not familiar enough with the calculation itself to understand why this is happening, but see discussion on CASSANDRA-8851 for the background. I've introduced a test case to look for this during downsampling, but it seems to pass just fine, so it may be an artefact of upgrading. The problem was, unfortunately, not manifesting directly because it would simply result in a failed lookup. This was only exposed when early opening used firstKeyBeyond, which does not use the effective interval, and provided the result to getPosition(). I propose a simple fix that ensures a bug here cannot break correctness. Perhaps [~thobbs] can follow up with an investigation as to how it actually went wrong? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
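For reference, the binary-search-result conversion touched by the patch above follows the {{java.util.Arrays.binarySearch}} convention; this is a standalone sketch of that convention (an illustration, not the actual Cassandra method):

```java
import java.util.Arrays;

public class BinarySearchIndex {
    // Arrays.binarySearch returns the index on an exact hit, and
    // -(insertionPoint) - 1 on a miss, so insertionPoint - 1 is the greatest
    // index whose key is <= the search key (-1 if the key sorts before all entries).
    static int indexFromBinarySearchResult(int binarySearchResult) {
        return binarySearchResult >= 0
             ? binarySearchResult
             : -binarySearchResult - 2; // == insertionPoint - 1
    }

    public static void main(String[] args) {
        long[] keys = { 10, 20, 30 };
        // Exact hit on 20 and a miss on 25 both resolve to index 1,
        // i.e. the index-summary entry covering the searched key.
        System.out.println(indexFromBinarySearchResult(Arrays.binarySearch(keys, 20))); // prints 1
        System.out.println(indexFromBinarySearchResult(Arrays.binarySearch(keys, 25))); // prints 1
    }
}
```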
[jira] [Created] (CASSANDRA-8994) Make keyspace handling in FunctionName less fragile
Sylvain Lebresne created CASSANDRA-8994: --- Summary: Make keyspace handling in FunctionName less fragile Key: CASSANDRA-8994 URL: https://issues.apache.org/jira/browse/CASSANDRA-8994 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Sylvain Lebresne Priority: Minor Fix For: 3.0 The {{FunctionName}} keyspace field can be null because it can be omitted by the user (in which case we search both the system keyspace and the one of the statement). The handling of that is imo a tad fragile: the {{equals}} method should probably complain if one of its operands has no keyspace, since in that case we can't really answer properly. The code currently works around that in {{Functions}} by avoiding {{equals}} when this matters, but it would still be pretty easy to get that wrong, especially since {{FunctionName}} is used in maps. For instance, you could argue that {{Functions#find}} is broken if its argument has no keyspace (it happens to be used only when that's not the case, but again, that's pretty easy to get wrong in the future). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
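The stricter {{equals}} contract suggested above could look like the following minimal sketch (a hypothetical class for illustration, not the actual {{FunctionName}} in org.apache.cassandra.cql3.functions; field names are assumptions):

```java
import java.util.Objects;

// Hypothetical sketch of the defensive equals the ticket proposes: refuse to
// compare names whose keyspace is unset rather than silently answering.
final class FunctionName {
    final String keyspace; // may be null when the user omitted it
    final String name;

    FunctionName(String keyspace, String name) {
        this.keyspace = keyspace;
        this.name = name;
    }

    boolean hasKeyspace() {
        return keyspace != null;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof FunctionName))
            return false;
        FunctionName that = (FunctionName) o;
        // Complain loudly instead of guessing when either side is unqualified.
        if (!hasKeyspace() || !that.hasKeyspace())
            throw new IllegalStateException("Cannot compare unqualified function names");
        return keyspace.equals(that.keyspace) && name.equals(that.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(keyspace, name);
    }
}
```

Because {{FunctionName}} is used as a map key, failing fast here surfaces any code path that stores or looks up an unqualified name, instead of letting it silently mismatch.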
[jira] [Commented] (CASSANDRA-8917) Upgrading from 2.0.9 to 2.1.3 with 3 nodes, CL = quorum causes exceptions
[ https://issues.apache.org/jira/browse/CASSANDRA-8917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369321#comment-14369321 ] Gary Ogden commented on CASSANDRA-8917: --- We only have one node designated as the seed node. Should we increase that to two? Would that be a potential cause of this issue? Upgrading from 2.0.9 to 2.1.3 with 3 nodes, CL = quorum causes exceptions - Key: CASSANDRA-8917 URL: https://issues.apache.org/jira/browse/CASSANDRA-8917 Project: Cassandra Issue Type: Bug Environment: C* 2.0.9, Centos 6.5, Java 1.7.0_72, spring data cassandra 1.1.1, cassandra java driver 2.0.9 Reporter: Gary Ogden Attachments: b_output.log, jersey_error.log, node1-cassandra.yaml, node1-system.log, node2-cassandra.yaml, node2-system.log, node3-cassandra.yaml, node3-system.log We have java apps running on glassfish that read/write to our 3 node cluster running on 2.0.9. we have the CL set to quorum for all reads and writes. When we started to upgrade the first node and did the sstable upgrade on that node, we started getting this error on reads and writes: com.datastax.driver.core.exceptions.UnavailableException: Not enough replica available for query at consistency QUORUM (2 required but only 1 alive) How is that possible when we have 3 nodes total, and there was 2 that were up and it's saying we can't get the required CL? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
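To ground the arithmetic in the error message: QUORUM is computed from the keyspace's replication factor, not from the number of nodes in the cluster. A minimal sketch (the class and method names are illustrative, not Cassandra's API):

```java
public class QuorumMath {
    // QUORUM requires a strict majority of replicas: floor(RF / 2) + 1.
    // With RF = 3 that is 2, matching the "2 required" in the exception.
    static int quorumFor(int replicationFactor) {
        return replicationFactor / 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(quorumFor(3)); // prints 2
    }
}
```

So with RF = 3, two live replicas do satisfy QUORUM; the exception means the coordinator's view at that moment counted only one replica as alive for the queried range, which points at a gossip/availability-view problem during the mixed-version upgrade rather than an actual two-node outage.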
[jira] [Updated] (CASSANDRA-8917) Upgrading from 2.0.9 to 2.1.3 with 3 nodes, CL = quorum causes exceptions
[ https://issues.apache.org/jira/browse/CASSANDRA-8917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Ogden updated CASSANDRA-8917: -- Attachment: node3-cassandra.yaml node2-cassandra.yaml node1-cassandra.yaml here's the cassandra.yaml files for all three nodes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7970) JSON support for CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-7970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14369330#comment-14369330 ] Sylvain Lebresne commented on CASSANDRA-7970: - bq. It's just to assume the system keyspace if a keyspace isn't set on the function. Ok. I still think the whole dealing with keyspace in {{FunctionName}} is fragile but it's somewhat outside this ticket so I've created CASSANDRA-8994. bq. However, I'm not clear on what you're suggesting. Can you elaborate? So, {{AbstractType.fromJSONObject}} would return a {{Term}}. For basic types, it would be a {{Constants.Value}} but for say a list, it would be a {{Lists.Value}} (containing the unserialized collection). Then {{Json.ColumnValue}} would just be a {{Term.Raw}} (not a {{Term.Terminal}}) and its {{prepare}} would return the result of {{fromJSONObject}}. The end result being that {{Lists.Appender.doAppend}} would always get a {{Lists.Value}} and we won't need {{Lists.getElementsFromValue}}. bq. We have this problem in other parts of the code as well Right, and we should fix those some day, but that's another story :) bq. Plus, the purpose of {{ExecutionException}} is pretty clear right now: it's for errors that occur while executing a user-defined function. It's meant for errors while executing functions in general, not necessarily user-defined ones and I'm absolutely convinced we'll have to use it in native functions sooner or later. And functions in general can be used in select clauses, in which case they'll error out during execution, not during validation, so we shouldn't use IRE for them (as mentioned by Jonathan in [that comment|https://issues.apache.org/jira/browse/CASSANDRA-5910?focusedCommentId=13746044&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13746044]).
Now, you could say that {{FromJsonFct}} cannot currently be used in select clauses, but 1) I'm hoping we can fix that at some point and 2) I'd really prefer to consider those JSON functions as normal functions as much as possible, so I do prefer saying those errors are errors during the execution of a function rather than type errors of a sort. Lastly, a micro nit: you could make {{JSON_IDENTIFIER}} public in {{Json.java}} and use it in {{Selection}}. JSON support for CQL Key: CASSANDRA-7970 URL: https://issues.apache.org/jira/browse/CASSANDRA-7970 Project: Cassandra Issue Type: New Feature Components: API Reporter: Jonathan Ellis Assignee: Tyler Hobbs Labels: client-impacting, cql3.3, docs-impacting Fix For: 3.0 Attachments: 7970-trunk-v1.txt JSON is popular enough that not supporting it is becoming a competitive weakness. We can add JSON support in a way that is compatible with our performance goals by *mapping* JSON to an existing schema: one JSON document maps to one CQL row. Thus, it is NOT a goal to support schemaless documents, which is a misfeature [1] [2] [3]. Rather, it is to allow a convenient way to easily turn a JSON document from a service or a user into a CQL row, with all the validation that entails. Since we are not looking to support schemaless documents, we will not be adding a JSON data type (CASSANDRA-6833) a la postgresql. Rather, we will map the JSON to UDT, collections, and primitive CQL types. Here's how this might look: {code} CREATE TYPE address ( street text, city text, zip_code int, phones set<text> ); CREATE TABLE users ( id uuid PRIMARY KEY, name text, addresses map<text, address> ); INSERT INTO users JSON {'id': 4b856557-7153, 'name': 'jbellis', 'address': {"home": {"street": "123 Cassandra Dr", "city": "Austin", "zip_code": 78747, "phones": [2101234567]}}}; SELECT JSON id, address FROM users; {code} (We would also want to_json and from_json functions to allow mapping a single column's worth of data. These would not require extra syntax.)
[1] http://rustyrazorblade.com/2014/07/the-myth-of-schema-less/ [2] https://blog.compose.io/schema-less-is-usually-a-lie/ [3] http://dl.acm.org/citation.cfm?id=2481247 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
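The core idea above (one JSON document maps to one CQL row, with full validation against the schema) can be illustrated outside Cassandra. The schema representation below (a plain column-name-to-Java-type map) is an assumption made for the sketch, not Cassandra's internal API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: validate a parsed JSON document (modeled here as a plain
// Map) against a fixed schema before accepting it as a row. Unknown columns
// and type mismatches are rejected, mirroring "all the validation that entails".
class JsonRowMapper {
    static Map<String, Object> toRow(Map<String, Class<?>> schema, Map<String, Object> json) {
        Map<String, Object> row = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : json.entrySet()) {
            Class<?> expected = schema.get(e.getKey());
            if (expected == null)
                throw new IllegalArgumentException("Unknown column: " + e.getKey());
            if (!expected.isInstance(e.getValue()))
                throw new IllegalArgumentException("Bad type for column: " + e.getKey());
            row.put(e.getKey(), e.getValue());
        }
        return row;
    }
}
```

This is what distinguishes the proposal from a schemaless JSON blob type: the document is decomposed into typed columns at write time, so bad data fails fast instead of being stored.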
[1/2] cassandra git commit: Followup commit for 7816
Repository: cassandra Updated Branches: refs/heads/trunk b25adc765 -> 4597bb5b1 Followup commit for 7816 patch by Stephania; reviewed by tjake for CASSANDRA-7816 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/763130bd Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/763130bd Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/763130bd Branch: refs/heads/trunk Commit: 763130bdbde2f4cec2e8973bcd5203caf51cc89f Parents: 9b9dda6 Author: T Jake Luciani j...@apache.org Authored: Thu Mar 19 13:39:23 2015 -0400 Committer: T Jake Luciani j...@apache.org Committed: Thu Mar 19 13:41:20 2015 -0400 -- src/java/org/apache/cassandra/gms/EndpointState.java | 12 src/java/org/apache/cassandra/gms/Gossiper.java | 12 ++-- src/java/org/apache/cassandra/transport/Server.java | 11 +-- 3 files changed, 11 insertions(+), 24 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/763130bd/src/java/org/apache/cassandra/gms/EndpointState.java -- diff --git a/src/java/org/apache/cassandra/gms/EndpointState.java b/src/java/org/apache/cassandra/gms/EndpointState.java index 74433ba..1029374 100644 --- a/src/java/org/apache/cassandra/gms/EndpointState.java +++ b/src/java/org/apache/cassandra/gms/EndpointState.java @@ -47,14 +47,12 @@ public class EndpointState /* fields below do not get serialized */ private volatile long updateTimestamp; private volatile boolean isAlive; -private volatile boolean hasPendingEcho; EndpointState(HeartBeatState initialHbState) { hbState = initialHbState; updateTimestamp = System.nanoTime(); isAlive = true; -hasPendingEcho = false; } HeartBeatState getHeartBeatState() @@ -116,16 +114,6 @@ public class EndpointState isAlive = false; } -public boolean hasPendingEcho() -{ -return hasPendingEcho; -} - -public void markPendingEcho(boolean val) -{ -hasPendingEcho = val; -} - public String toString() { return "EndpointState: HeartBeatState = " + hbState + ", AppStateMap = " + applicationState; http://git-wip-us.apache.org/repos/asf/cassandra/blob/763130bd/src/java/org/apache/cassandra/gms/Gossiper.java -- diff --git a/src/java/org/apache/cassandra/gms/Gossiper.java b/src/java/org/apache/cassandra/gms/Gossiper.java index ac98c53..9c0ef8a 100644 --- a/src/java/org/apache/cassandra/gms/Gossiper.java +++ b/src/java/org/apache/cassandra/gms/Gossiper.java @@ -883,12 +883,6 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean return; } -if (localState.hasPendingEcho()) -{ -logger.debug("{} has already a pending echo, skipping it", localState); -return; -} - localState.markDead(); MessageOut<EchoMessage> echoMessage = new MessageOut<EchoMessage>(MessagingService.Verb.ECHO, new EchoMessage(), EchoMessage.serializer); @@ -902,19 +896,17 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean public void response(MessageIn msg) { -localState.markPendingEcho(false); realMarkAlive(addr, localState); } }; -localState.markPendingEcho(true); MessagingService.instance().sendRR(echoMessage, addr, echoHandler); } private void realMarkAlive(final InetAddress addr, final EndpointState localState) { if (logger.isTraceEnabled()) -logger.trace("marking as alive {}", addr); +logger.trace("marking as alive {}", addr); localState.markAlive(); localState.updateTimestamp(); // prevents doStatusCheck from racing us and evicting if it was down aVeryLongTime liveEndpoints.add(addr); @@ -925,7 +917,7 @@ public class Gossiper implements IFailureDetectionEventListener, GossiperMBean for (IEndpointStateChangeSubscriber subscriber : subscribers) subscriber.onAlive(addr, localState); if (logger.isTraceEnabled()) -logger.trace("Notified " + subscribers); +logger.trace("Notified " + subscribers); } private void markDead(InetAddress addr, EndpointState localState) http://git-wip-us.apache.org/repos/asf/cassandra/blob/763130bd/src/java/org/apache/cassandra/transport/Server.java -- diff --git
a/src/java/org/apache/cassandra/transport/Server.java b/src/java/org/apache/cassandra/transport/Server.java index f396fd9..8f0f89f
[2/2] cassandra git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Conflicts: src/java/org/apache/cassandra/gms/Gossiper.java src/java/org/apache/cassandra/transport/Server.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4597bb5b Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4597bb5b Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4597bb5b Branch: refs/heads/trunk Commit: 4597bb5b1687048a2dd57bfea679e987637c5fcc Parents: b25adc7 763130b Author: T Jake Luciani j...@apache.org Authored: Thu Mar 19 13:50:39 2015 -0400 Committer: T Jake Luciani j...@apache.org Committed: Thu Mar 19 13:50:39 2015 -0400 -- src/java/org/apache/cassandra/gms/EndpointState.java | 12 src/java/org/apache/cassandra/gms/Gossiper.java | 12 ++-- src/java/org/apache/cassandra/transport/Server.java | 11 +-- 3 files changed, 11 insertions(+), 24 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/4597bb5b/src/java/org/apache/cassandra/gms/Gossiper.java -- diff --cc src/java/org/apache/cassandra/gms/Gossiper.java index d7b1ec7,9c0ef8a..ff1240a --- a/src/java/org/apache/cassandra/gms/Gossiper.java +++ b/src/java/org/apache/cassandra/gms/Gossiper.java @@@ -891,15 -883,9 +891,9 @@@ public class Gossiper implements IFailu return; } - if (localState.hasPendingEcho()) - { - logger.debug("{} has already a pending echo, skipping it", localState); - return; - } - localState.markDead(); -MessageOut<EchoMessage> echoMessage = new MessageOut<EchoMessage>(MessagingService.Verb.ECHO, new EchoMessage(), EchoMessage.serializer); +MessageOut<EchoMessage> echoMessage = new MessageOut<EchoMessage>(MessagingService.Verb.ECHO, EchoMessage.instance, EchoMessage.serializer); logger.trace("Sending a EchoMessage to {}", addr); IAsyncCallback echoHandler = new IAsyncCallback() { http://git-wip-us.apache.org/repos/asf/cassandra/blob/4597bb5b/src/java/org/apache/cassandra/transport/Server.java -- diff --cc
src/java/org/apache/cassandra/transport/Server.java index 6aa929b,8f0f89f..e6e0a8f --- a/src/java/org/apache/cassandra/transport/Server.java +++ b/src/java/org/apache/cassandra/transport/Server.java @@@ -22,8 -22,9 +22,10 @@@ import java.net.InetAddress import java.net.InetSocketAddress; import java.net.UnknownHostException; import java.util.EnumMap; + import java.util.Map; +import java.util.List; import java.util.concurrent.Callable; + import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.atomic.AtomicBoolean; import javax.net.ssl.SSLContext; import javax.net.ssl.SSLEngine;
[jira] [Commented] (CASSANDRA-8991) CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does.
[ https://issues.apache.org/jira/browse/CASSANDRA-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369849#comment-14369849 ] Ulises Cervino Beresi commented on CASSANDRA-8991: -- Thanks for the tips. I've updated (and renamed) the patch attached. CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does. - Key: CASSANDRA-8991 URL: https://issues.apache.org/jira/browse/CASSANDRA-8991 Project: Cassandra Issue Type: Bug Components: Core Reporter: Ulises Cervino Beresi Assignee: Ulises Cervino Beresi Priority: Minor Fix For: 2.0.14 Attachments: CASSANDRA-2.0.13-8991.txt CQL3 DropIndexStatement should expose getColumnFamily like the CQL2 version does. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-7816) Duplicate DOWN/UP Events Pushed with Native Protocol
[ https://issues.apache.org/jira/browse/CASSANDRA-7816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani resolved CASSANDRA-7816. --- Resolution: Fixed committed Duplicate DOWN/UP Events Pushed with Native Protocol Key: CASSANDRA-7816 URL: https://issues.apache.org/jira/browse/CASSANDRA-7816 Project: Cassandra Issue Type: Bug Components: API Reporter: Michael Penick Assignee: Stefania Priority: Minor Fix For: 2.1.4, 2.0.14 Attachments: 7816-v2.0.txt, tcpdump_repeating_status_change.txt, trunk-7816.txt Added MOVED_NODE as a possible type of topology change and also specified that it is possible to receive the same event multiple times. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
cassandra git commit: Followup commit for 7816
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 9b9dda6bb -> 763130bdb Followup commit for 7816 patch by Stephania; reviewed by tjake for CASSANDRA-7816 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/763130bd Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/763130bd Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/763130bd Branch: refs/heads/cassandra-2.1 Commit: 763130bdbde2f4cec2e8973bcd5203caf51cc89f Parents: 9b9dda6 Author: T Jake Luciani j...@apache.org Authored: Thu Mar 19 13:39:23 2015 -0400 Committer: T Jake Luciani j...@apache.org Committed: Thu Mar 19 13:41:20 2015 -0400 -- src/java/org/apache/cassandra/gms/EndpointState.java | 12 src/java/org/apache/cassandra/gms/Gossiper.java | 12 ++-- src/java/org/apache/cassandra/transport/Server.java | 11 +-- 3 files changed, 11 insertions(+), 24 deletions(-) --
[jira] [Commented] (CASSANDRA-8620) Bootstrap session hanging indefinitely
[ https://issues.apache.org/jira/browse/CASSANDRA-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369449#comment-14369449 ] Philip Thompson commented on CASSANDRA-8620: Okay, thank you. Several LCS bugs were fixed from 2.1.2 - 2.1.3, which may have resolved your issue. If you run into it again, please re-open this ticket. Bootstrap session hanging indefinitely -- Key: CASSANDRA-8620 URL: https://issues.apache.org/jira/browse/CASSANDRA-8620 Project: Cassandra Issue Type: Bug Environment: Debian 7, Oracle JDK 1.7.0_51, AWS + GCE Reporter: Adam Horwich Hi! We have been running a relatively small 2.1.2 cluster over 2 DCs for a few months with ~100GB load per node and a RF=3 and over the last few weeks have been trying to scale up capacity. We've been recently seeing scenarios in which the Bootstrap or Unbootstrap streaming process hangs indefinitely for one or more sessions on the receiver without stacktrace or exception. This does not happen every time, and we do not get into this state with the same sender every time. 
When the receiver is in a hung state, the following can be found in the thread dump: The Stream-IN thread for one or more sessions is blocked in the following state: Thread 24942: (state = BLOCKED) - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may be imprecise) - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, line=186 (Compiled frame) - java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() @bci=42, line=2043 (Compiled frame) - java.util.concurrent.ArrayBlockingQueue.take() @bci=20, line=374 (Compiled frame) - org.apache.cassandra.streaming.compress.CompressedInputStream.read() @bci=31, line=89 (Compiled frame) - java.io.DataInputStream.readUnsignedShort() @bci=4, line=337 (Compiled frame) - org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort() @bci=4, line=140 (Compiled frame) - org.apache.cassandra.utils.ByteBufferUtil.readShortLength(java.io.DataInput) @bci=1, line=317 (Compiled frame) - org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(java.io.DataInput) @bci=2, line=327 (Compiled frame) - org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(java.io.DataInput) @bci=5, line=397 (Compiled frame) - org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(java.io.DataInput) @bci=2, line=381 (Compiled frame) - org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(java.io.DataInput, org.apache.cassandra.db.ColumnSerializer$Flag, int, org.apache.cassandra.io.sstable.Descriptor$Version) @bci=10, line=75 (Compiled frame) - org.apache.cassandra.db.AbstractCell$1.computeNext() @bci=25, line=52 (Compiled frame) - org.apache.cassandra.db.AbstractCell$1.computeNext() @bci=1, line=46 (Compiled frame) - com.google.common.collect.AbstractIterator.tryToComputeNext() @bci=9, line=143 (Compiled frame) - com.google.common.collect.AbstractIterator.hasNext() @bci=61, line=138 (Compiled frame) - 
org.apache.cassandra.io.sstable.SSTableWriter.appendFromStream(org.apache.cassandra.db.DecoratedKey, org.apache.cassandra.config.CFMetaData, java.io.DataInput, org.apache.cassandra.io.sstable.Descriptor$Version) @bci=320, line=283 (Compiled frame) - org.apache.cassandra.streaming.StreamReader.writeRow(org.apache.cassandra.io.sstable.SSTableWriter, java.io.DataInput, org.apache.cassandra.db.ColumnFamilyStore) @bci=26, line=157 (Compiled frame) - org.apache.cassandra.streaming.compress.CompressedStreamReader.read(java.nio.channels.ReadableByteChannel) @bci=258, line=89 (Compiled frame) - org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(java.nio.channels.ReadableByteChannel, int, org.apache.cassandra.streaming.StreamSession) @bci=69, line=48 (Interpreted frame) - org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(java.nio.channels.ReadableByteChannel, int, org.apache.cassandra.streaming.StreamSession) @bci=4, line=38 (Interpreted frame) - org.apache.cassandra.streaming.messages.StreamMessage.deserialize(java.nio.channels.ReadableByteChannel, int, org.apache.cassandra.streaming.StreamSession) @bci=37, line=55 (Interpreted frame) - org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run() @bci=24, line=245 (Interpreted frame) - java.lang.Thread.run() @bci=11, line=744 (Interpreted frame) Debug logging shows that the receiver is still reading the file it is receiving from the receiver and has not yet sent an ACK. The receiver is waiting for more data to finish writing its row, and the sender is not sending any more data. On both the receiver
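The blocked frame at the top of the dump ({{ArrayBlockingQueue.take()}} inside {{CompressedInputStream.read()}}) explains the symptom: {{take()}} parks the consumer thread indefinitely when the producer stops supplying chunks. A standalone sketch of the stall and the timeout-based alternative (illustrative only, not Cassandra code):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueStall {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<byte[]> q = new ArrayBlockingQueue<>(4);
        // No producer ever offers data, mirroring a sender that stopped streaming.
        // q.take() would park this thread forever (the BLOCKED state in the dump);
        // poll() with a timeout lets the reader detect the stall and fail the session.
        byte[] chunk = q.poll(200, TimeUnit.MILLISECONDS);
        System.out.println(chunk == null ? "stalled" : "got data");
    }
}
```

This is why the thread dump shows no exception: nothing is wrong from the queue's point of view, the consumer is simply waiting for data that will never arrive.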
[jira] [Resolved] (CASSANDRA-8620) Bootstrap session hanging indefinitely
[ https://issues.apache.org/jira/browse/CASSANDRA-8620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson resolved CASSANDRA-8620. Resolution: Cannot Reproduce Bootstrap session hanging indefinitely -- Key: CASSANDRA-8620 URL: https://issues.apache.org/jira/browse/CASSANDRA-8620 Project: Cassandra Issue Type: Bug Environment: Debian 7, Oracle JDK 1.7.0_51, AWS + GCE Reporter: Adam Horwich Hi! We have been running a relatively small 2.1.2 cluster over 2 DCs for a few months with ~100GB load per node and a RF=3 and over the last few weeks have been trying to scale up capacity. We've been recently seeing scenarios in which the Bootstrap or Unbootstrap streaming process hangs indefinitely for one or more sessions on the receiver without stacktrace or exception. This does not happen every time, and we do not get into this state with the same sender every time. When the receiver is in a hung state, the following can be found in the thread dump: The Stream-IN thread for one or more sessions is blocked in the following state: Thread 24942: (state = BLOCKED) - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may be imprecise) - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, line=186 (Compiled frame) - java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await() @bci=42, line=2043 (Compiled frame) - java.util.concurrent.ArrayBlockingQueue.take() @bci=20, line=374 (Compiled frame) - org.apache.cassandra.streaming.compress.CompressedInputStream.read() @bci=31, line=89 (Compiled frame) - java.io.DataInputStream.readUnsignedShort() @bci=4, line=337 (Compiled frame) - org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort() @bci=4, line=140 (Compiled frame) - org.apache.cassandra.utils.ByteBufferUtil.readShortLength(java.io.DataInput) @bci=1, line=317 (Compiled frame) - org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(java.io.DataInput) @bci=2, line=327 (Compiled 
frame) - org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(java.io.DataInput) @bci=5, line=397 (Compiled frame) - org.apache.cassandra.db.composites.AbstractCType$Serializer.deserialize(java.io.DataInput) @bci=2, line=381 (Compiled frame) - org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(java.io.DataInput, org.apache.cassandra.db.ColumnSerializer$Flag, int, org.apache.cassandra.io.sstable.Descriptor$Version) @bci=10, line=75 (Compiled frame) - org.apache.cassandra.db.AbstractCell$1.computeNext() @bci=25, line=52 (Compiled frame) - org.apache.cassandra.db.AbstractCell$1.computeNext() @bci=1, line=46 (Compiled frame) - com.google.common.collect.AbstractIterator.tryToComputeNext() @bci=9, line=143 (Compiled frame) - com.google.common.collect.AbstractIterator.hasNext() @bci=61, line=138 (Compiled frame) - org.apache.cassandra.io.sstable.SSTableWriter.appendFromStream(org.apache.cassandra.db.DecoratedKey, org.apache.cassandra.config.CFMetaData, java.io.DataInput, org.apache.cassandra.io.sstable.Descriptor$Version) @bci=320, line=283 (Compiled frame) - org.apache.cassandra.streaming.StreamReader.writeRow(org.apache.cassandra.io.sstable.SSTableWriter, java.io.DataInput, org.apache.cassandra.db.ColumnFamilyStore) @bci=26, line=157 (Compiled frame) - org.apache.cassandra.streaming.compress.CompressedStreamReader.read(java.nio.channels.ReadableByteChannel) @bci=258, line=89 (Compiled frame) - org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(java.nio.channels.ReadableByteChannel, int, org.apache.cassandra.streaming.StreamSession) @bci=69, line=48 (Interpreted frame) - org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(java.nio.channels.ReadableByteChannel, int, org.apache.cassandra.streaming.StreamSession) @bci=4, line=38 (Interpreted frame) - org.apache.cassandra.streaming.messages.StreamMessage.deserialize(java.nio.channels.ReadableByteChannel, int, 
org.apache.cassandra.streaming.StreamSession) @bci=37, line=55 (Interpreted frame) - org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run() @bci=24, line=245 (Interpreted frame) - java.lang.Thread.run() @bci=11, line=744 (Interpreted frame) Debug logging shows that the receiver is still reading the file it is receiving from the receiver and has not yet sent an ACK. The receiver is waiting for more data to finish writing its row, and the sender is not sending any more data. On both the receiver and sender there is a large amount of data (~5MB) stuck in the Recv-Q (receiver) and Send-Q (sender). We've been trying to diagnose this issue internally, but it's difficult
[jira] [Updated] (CASSANDRA-8845) sorted CQLSSTableWriter accept unsorted clustering keys
[ https://issues.apache.org/jira/browse/CASSANDRA-8845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-8845: --- Assignee: Carl Yeksigian sorted CQLSSTableWriter accept unsorted clustering keys --- Key: CASSANDRA-8845 URL: https://issues.apache.org/jira/browse/CASSANDRA-8845 Project: Cassandra Issue Type: Bug Reporter: Pierre N. Assignee: Carl Yeksigian Fix For: 2.1.4 Attachments: TestSorted.java The javadoc says: {quote} The SSTable sorted order means that rows are added such that their partition key respect the partitioner order and for a given partition, that *the rows respect the clustering columns order*. public Builder sorted() {quote} It throws an exception when partition keys are in incorrect order; however, it doesn't throw one when rows are inserted with clustering keys in incorrect order. It buffers them and sorts them into the correct order. {code} writer.addRow(1, 3); writer.addRow(1, 1); writer.addRow(1, 2); {code} {code} $ sstable2json sorted/ks/t1/ks-t1-ka-1-Data.db [ {key: 1, cells: [[\u\u\u\u0001:,,1424524149557000], [\u\u\u\u0002:,,1424524149557000], [\u\u\u\u0003:,,142452414955]]} ] {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
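The behaviour the javadoc implies can be made concrete. Below is a minimal sketch of the order check a strictly sorted writer would perform: reject any row whose clustering key sorts before the previous row in the same partition, instead of silently buffering and re-sorting. The class `SortedRowWriter` and its integer keys are hypothetical simplifications, not Cassandra's actual `CQLSSTableWriter`:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (not Cassandra's CQLSSTableWriter): a sorted writer
// that throws on out-of-order clustering keys rather than buffering them.
public class SortedRowWriter {
    private Integer currentPartition = null;
    private Integer lastClustering = null;
    private final List<int[]> rows = new ArrayList<>();

    public void addRow(int partitionKey, int clusteringKey) {
        if (currentPartition != null && currentPartition.intValue() == partitionKey) {
            // same partition: clustering keys must be non-decreasing
            if (lastClustering != null && clusteringKey < lastClustering)
                throw new IllegalArgumentException(
                    "Clustering key " + clusteringKey + " sorts before " + lastClustering);
        } else {
            // new partition: reset the clustering-order tracking
            currentPartition = partitionKey;
            lastClustering = null;
        }
        lastClustering = clusteringKey;
        rows.add(new int[] { partitionKey, clusteringKey });
    }

    public int rowCount() { return rows.size(); }

    public static void main(String[] args) {
        SortedRowWriter writer = new SortedRowWriter();
        writer.addRow(1, 3);
        boolean threw = false;
        try {
            writer.addRow(1, 1); // out of order: should throw, not buffer
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        System.out.println(threw ? "rejected out-of-order row" : "accepted out-of-order row");
    }
}
```

Under this check, the `addRow(1, 3); addRow(1, 1);` sequence from the report would fail fast instead of producing a silently re-sorted sstable.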
[jira] [Resolved] (CASSANDRA-8665) Cassandra does not start with NPE in ColumnFamilyStore.removeUnfinishedCompactionLeftovers
[ https://issues.apache.org/jira/browse/CASSANDRA-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson resolved CASSANDRA-8665. Resolution: Cannot Reproduce Fix Version/s: (was: 2.1.4) 2.1.3 Okay, please re-open if it comes back up. Cassandra does not start with NPE in ColumnFamilyStore.removeUnfinishedCompactionLeftovers -- Key: CASSANDRA-8665 URL: https://issues.apache.org/jira/browse/CASSANDRA-8665 Project: Cassandra Issue Type: Bug Environment: Ubuntu 12.04 | C* 2.1.2 | ruby-driver 1.2 Reporter: Kishan Karunaratne Fix For: 2.1.3 During a ruby driver endurance/duration test, the following error occurred: {noformat} /mnt/systemlogs/system.log:ERROR [main] 2015-01-17 21:18:25,780 CassandraDaemon.java:482 - Exception encountered during startup /mnt/systemlogs/system.log-java.lang.NullPointerException: null /mnt/systemlogs/system.log- at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:573) ~[main/:na] /mnt/systemlogs/system.log- at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:249) [main/:na] /mnt/systemlogs/system.log- at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:465) [main/:na] /mnt/systemlogs/system.log- at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:554) [main/:na] {noformat} Here is the system.log leading up to the error: {noformat} INFO [main] 2015-01-17 21:18:24,581 ColumnFamilyStore.java:278 - Initializing system.peers INFO [SSTableBatchOpen:1] 2015-01-17 21:18:24,593 SSTableReader.java:392 - Opening /srv/performance/cass/data/system/peers-37f71aca7dc2383ba70672528af04d4f/system-peers-ka-169 (10533 bytes) INFO [SSTableBatchOpen:1] 2015-01-17 21:18:24,597 SSTableReader.java:392 - Opening /srv/performance/cass/data/system/peers-37f71aca7dc2383ba70672528af04d4f/system-peers-ka-171 (10572 bytes) INFO [SSTableBatchOpen:1] 2015-01-17 21:18:24,598 SSTableReader.java:392 - Opening 
/srv/performance/cass/data/system/peers-37f71aca7dc2383ba70672528af04d4f/system-peers-ka-170 (10581 bytes) INFO [main] 2015-01-17 21:18:24,609 ColumnFamilyStore.java:278 - Initializing system.local INFO [SSTableBatchOpen:1] 2015-01-17 21:18:24,613 SSTableReader.java:392 - Opening /srv/performance/cass/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-679 (5257 bytes) INFO [SSTableBatchOpen:1] 2015-01-17 21:18:24,616 SSTableReader.java:392 - Opening /srv/performance/cass/data/system/local-7ad54392bcdd35a684174e047860b377/system-local-ka-678 (5679 bytes) {noformat} Cassandra attempted to restart twice unsuccessfully (this error occurred twice) and then gave up; it seems like a corrupt Data.db file? The endurance test consists of a chaos rhino which randomly rolling restarts a node. The only other significant error is TombstoneOverwhelmingException and is probably unrelated: {noformat} /mnt/systemlogs/system.log:ERROR [HintedHandoff:2] 2015-01-17 12:46:32,378 SliceQueryFilter.java:218 - Scanned over 10 tombstones in system.hints; query aborted (see tombstone_failure_threshold) /mnt/systemlogs/system.log:ERROR [HintedHandoff:2] 2015-01-17 12:46:32,416 CassandraDaemon.java:170 - Exception in thread Thread[HintedHandoff:2,1,main] /mnt/systemlogs/system.log-org.apache.cassandra.db.filter.TombstoneOverwhelmingException: null /mnt/systemlogs/system.log- at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:220) ~[main/:na] /mnt/systemlogs/system.log- at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:107) ~[main/:na] /mnt/systemlogs/system.log- at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:81) ~[main/:na] /mnt/systemlogs/system.log- at org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:69) ~[main/:na] /mnt/systemlogs/system.log- at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:320) 
~[main/:na] /mnt/systemlogs/system.log- at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:62) ~[main/:na] /mnt/systemlogs/system.log- at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1885) ~[main/:na] /mnt/systemlogs/system.log- at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1693) ~[main/:na] /mnt/systemlogs/system.log- at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:378) ~[main/:na]
[jira] [Commented] (CASSANDRA-6432) Calculate estimated Cql row count per token range
[ https://issues.apache.org/jira/browse/CASSANDRA-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369440#comment-14369440 ] Philip Thompson commented on CASSANDRA-6432: No, 7688 only gives estimated partition count per token range, so I'll leave this open as it still seems applicable. Calculate estimated Cql row count per token range - Key: CASSANDRA-6432 URL: https://issues.apache.org/jira/browse/CASSANDRA-6432 Project: Cassandra Issue Type: Bug Components: Hadoop Reporter: Alex Liu Fix For: 3.0, 2.1.4, 2.0.14 CASSANDRA-6311 uses the client side to calculate the actual CF row count for Hadoop jobs. We need to fix it by using the CQL row count, which needs an estimated CQL row count per token range. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-6432) Calculate estimated Cql row count per token range
[ https://issues.apache.org/jira/browse/CASSANDRA-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-6432: --- Since Version: 2.0.7 Fix Version/s: 2.1.4 3.0 Calculate estimated Cql row count per token range - Key: CASSANDRA-6432 URL: https://issues.apache.org/jira/browse/CASSANDRA-6432 Project: Cassandra Issue Type: Bug Components: Hadoop Reporter: Alex Liu Fix For: 3.0, 2.1.4, 2.0.14 CASSANDRA-6311 uses the client side to calculate the actual CF row count for Hadoop jobs. We need to fix it by using the CQL row count, which needs an estimated CQL row count per token range. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8642) Cassandra crashed after stress test of write
[ https://issues.apache.org/jira/browse/CASSANDRA-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369463#comment-14369463 ] Philip Thompson commented on CASSANDRA-8642: [~benedict] does this seem like it would have been fixed by your changes for 2.1.3? Or is it worth having someone from test try to repro with YCSB? Cassandra crashed after stress test of write Key: CASSANDRA-8642 URL: https://issues.apache.org/jira/browse/CASSANDRA-8642 Project: Cassandra Issue Type: Bug Components: Core Environment: Cassandra 2.1.2, single node cluster, Ubuntu, 8 core CPU, 16GB memory (heapsize 8G), Vmware virtual machine. Reporter: ZhongYu Fix For: 2.1.4 Attachments: QQ拼音截图未命名.png When I performed a write stress test using YCSB, Cassandra crashed. I looked at the logs, and here is the last and only entry: {code} WARN [SharedPool-Worker-25] 2015-01-18 17:35:16,611 AbstractTracingAwareExecutorService.java:169 - Uncaught exception on thread Thread[SharedPool-Worker-25,5,main]: {} java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code at org.apache.cassandra.utils.concurrent.OpOrder$Group.isBlockingSignal(OpOrder.java:302) ~[apache-cassandra-2.1.2.jar:2.1.2] at org.apache.cassandra.utils.memory.MemtableAllocator$SubAllocator.allocate(MemtableAllocator.java:177) ~[apache-cassandra-2.1.2.jar:2.1.2] at org.apache.cassandra.utils.memory.SlabAllocator.allocate(SlabAllocator.java:82) ~[apache-cassandra-2.1.2.jar:2.1.2] at org.apache.cassandra.utils.memory.ContextAllocator.allocate(ContextAllocator.java:57) ~[apache-cassandra-2.1.2.jar:2.1.2] at org.apache.cassandra.utils.memory.ContextAllocator.clone(ContextAllocator.java:47) ~[apache-cassandra-2.1.2.jar:2.1.2] at org.apache.cassandra.utils.memory.MemtableBufferAllocator.clone(MemtableBufferAllocator.java:61) ~[apache-cassandra-2.1.2.jar:2.1.2] at org.apache.cassandra.db.Memtable.put(Memtable.java:174) ~[apache-cassandra-2.1.2.jar:2.1.2] at
org.apache.cassandra.db.ColumnFamilyStore.apply(ColumnFamilyStore.java:1126) ~[apache-cassandra-2.1.2.jar:2.1.2] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:388) ~[apache-cassandra-2.1.2.jar:2.1.2] at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:351) ~[apache-cassandra-2.1.2.jar:2.1.2] at org.apache.cassandra.db.Mutation.apply(Mutation.java:214) ~[apache-cassandra-2.1.2.jar:2.1.2] at org.apache.cassandra.service.StorageProxy$7.runMayThrow(StorageProxy.java:999) ~[apache-cassandra-2.1.2.jar:2.1.2] at org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2117) ~[apache-cassandra-2.1.2.jar:2.1.2] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_71] at org.apache.cassandra.concurrent.AbstractTracingAwareExecutorService$FutureTask.run(AbstractTracingAwareExecutorService.java:164) ~[apache-cassandra-2.1.2.jar:2.1.2] at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [apache-cassandra-2.1.2.jar:2.1.2] at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]{code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8137) Prepared statement size overflow error
[ https://issues.apache.org/jira/browse/CASSANDRA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-8137: --- Assignee: Benjamin Lerer Prepared statement size overflow error -- Key: CASSANDRA-8137 URL: https://issues.apache.org/jira/browse/CASSANDRA-8137 Project: Cassandra Issue Type: Bug Environment: Linux Mint 64 | C* 2.1.0 | Ruby-driver master Reporter: Kishan Karunaratne Assignee: Benjamin Lerer Fix For: 2.1.4 When using C* 2.1.0 and Ruby-driver master, I get the following error when running the Ruby duration test (which prepares a lot of statements, in many threads): {noformat} Prepared statement of size 4451848 bytes is larger than allowed maximum of 2027520 bytes. Prepared statement of size 4434568 bytes is larger than allowed maximum of 2027520 bytes. {noformat} They usually occur in batches of 1, but sometimes in multiples as seen above. It happens occasionally, around 20% of the time when running the code. Unfortunately I don't have a stacktrace as the error isn't recorded in the system log. This is my schema, and the offending prepare statement: {noformat} @session.execute(CREATE TABLE duration_test.ints ( key INT, copy INT, value INT, PRIMARY KEY (key, copy)) ) {noformat} {noformat} select = @session.prepare(SELECT * FROM ints WHERE key=?) {noformat} Now, I notice that if I explicitly specify the keyspace in the prepare, I don't get the error. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8985) java.lang.AssertionError: Added column does not sort as the last column
[ https://issues.apache.org/jira/browse/CASSANDRA-8985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369446#comment-14369446 ] Philip Thompson commented on CASSANDRA-8985: Do you know what query you are performing that causes the exception to occur? java.lang.AssertionError: Added column does not sort as the last column --- Key: CASSANDRA-8985 URL: https://issues.apache.org/jira/browse/CASSANDRA-8985 Project: Cassandra Issue Type: Bug Environment: Cassandra 2.0.13 OracleJDK1.7 Debian 7.8 Reporter: Maxim Assignee: Tyler Hobbs Fix For: 2.0.14 After upgrading Cassandra from 2.0.12 to 2.0.13 I began to receive an error: {code}ERROR [ReadStage:1823] 2015-03-18 09:03:27,091 CassandraDaemon.java (line 199) Exception in thread Thread[ReadStage:1823,5,main] java.lang.AssertionError: Added column does not sort as the last column at org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116) at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121) at org.apache.cassandra.db.ColumnFamily.addIfRelevant(ColumnFamily.java:115) at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:211) at org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.prune(ExtendedFilter.java:290) at org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1792) at org.apache.cassandra.db.index.keys.KeysSearcher.search(KeysSearcher.java:54) at org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:551) at org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1755) at org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135) at org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:39) at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745){code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8985) java.lang.AssertionError: Added column does not sort as the last column
[ https://issues.apache.org/jira/browse/CASSANDRA-8985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-8985: --- Assignee: Tyler Hobbs java.lang.AssertionError: Added column does not sort as the last column --- Key: CASSANDRA-8985 URL: https://issues.apache.org/jira/browse/CASSANDRA-8985 Project: Cassandra Issue Type: Bug Environment: Cassandra 2.0.13 OracleJDK1.7 Debian 7.8 Reporter: Maxim Assignee: Tyler Hobbs Fix For: 2.0.14 After upgrading Cassandra from 2.0.12 to 2.0.13 I began to receive an error: {code}ERROR [ReadStage:1823] 2015-03-18 09:03:27,091 CassandraDaemon.java (line 199) Exception in thread Thread[ReadStage:1823,5,main] java.lang.AssertionError: Added column does not sort as the last column at org.apache.cassandra.db.ArrayBackedSortedColumns.addColumn(ArrayBackedSortedColumns.java:116) at org.apache.cassandra.db.ColumnFamily.addColumn(ColumnFamily.java:121) at org.apache.cassandra.db.ColumnFamily.addIfRelevant(ColumnFamily.java:115) at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:211) at org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.prune(ExtendedFilter.java:290) at org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1792) at org.apache.cassandra.db.index.keys.KeysSearcher.search(KeysSearcher.java:54) at org.apache.cassandra.db.index.SecondaryIndexManager.search(SecondaryIndexManager.java:551) at org.apache.cassandra.db.ColumnFamilyStore.search(ColumnFamilyStore.java:1755) at org.apache.cassandra.db.RangeSliceCommand.executeLocally(RangeSliceCommand.java:135) at org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:39) at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:62) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at
java.lang.Thread.run(Thread.java:745){code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7814) enable describe on indices
[ https://issues.apache.org/jira/browse/CASSANDRA-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369459#comment-14369459 ] Jonathan Ellis commented on CASSANDRA-7814: --- Can we drop the TABLE/INDEX/FUNCTION and just make a plain DESCRIBE return the appropriate definition? (If we have separate namespaces for all three, I suppose we may need to retain these optionally in case of ambiguity.) enable describe on indices -- Key: CASSANDRA-7814 URL: https://issues.apache.org/jira/browse/CASSANDRA-7814 Project: Cassandra Issue Type: Improvement Components: Core Reporter: radha Assignee: Stefania Priority: Minor Fix For: 3.0 Describe index should be supported, right now, the only way is to export the schema and find what it really is before updating/dropping the index. verified in [cqlsh 3.1.8 | Cassandra 1.2.18.1 | CQL spec 3.0.0 | Thrift protocol 19.36.2] -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-8928) Add downgradesstables
[ https://issues.apache.org/jira/browse/CASSANDRA-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-8928. --- Resolution: Later Given that we're already signing up for a big testing effort around 3.0+, let's take this off the table for now. Add downgradesstables - Key: CASSANDRA-8928 URL: https://issues.apache.org/jira/browse/CASSANDRA-8928 Project: Cassandra Issue Type: New Feature Components: Tools Reporter: Jeremy Hanna Priority: Minor As mentioned in other places such as CASSANDRA-8047 and in the wild, sometimes you need to go back. A downgrade sstables utility would be nice for a lot of reasons, and I don't know that supporting going back to the previous major version format would be too much code, since we already support reading the previous version. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-5174) expose nodetool scrub for 2Is
[ https://issues.apache.org/jira/browse/CASSANDRA-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369929#comment-14369929 ] Yuki Morishita commented on CASSANDRA-5174: --- * I think we should log only when we are rebuilding the index, and change the log level to WARN, since the default exception handler will log at ERROR anyway. (https://github.com/stef1927/cassandra/blob/5174/src/java/org/apache/cassandra/db/ColumnFamilyStore.java#L1403) * Why don't you handle IllegalArgumentException in its own catch block? (https://github.com/stef1927/cassandra/blob/5174/src/java/org/apache/cassandra/tools/NodeTool.java#L1285) Otherwise, +1. The dtest also looks good except for coding style (docstring, string format), but I'll let the QA team take a look when you do the pull request. expose nodetool scrub for 2Is - Key: CASSANDRA-5174 URL: https://issues.apache.org/jira/browse/CASSANDRA-5174 Project: Cassandra Issue Type: Task Components: Core, Tools Reporter: Jason Brown Assignee: Stefania Priority: Minor Fix For: 3.0 Continuation of CASSANDRA-4464, where many other nodetool operations were added for 2Is. This ticket supports scrub for 2Is and is in its own ticket due to the risk of a bad bug deleting data. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8988) Optimise IntervalTree
[ https://issues.apache.org/jira/browse/CASSANDRA-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369940#comment-14369940 ] Benedict commented on CASSANDRA-8988: - I've pushed an update with your first two comments addressed. As to the potential regression, let's discuss it a little first. I must admit I'm finding it a little difficult to model mentally - interval trees aren't one of my fortes. I'm not wed to the change I've made, but it seems to me it is an optimisation. AFAICT there should be up to 2.lg(N) invocations of these unidirectional paths, so by removing the preliminary test we remove O(lg(N)) comparisons. Now only two of these comparisons can fail; the others are wasted. They each permit saving one final scan. This scan was previously O(M) in #intersections, but is now O(lg(M)) down to your suggestion, so we're trading O(lg(N)) and getting O(lg(M)). This result also gives that these unidirectional paths are now O(lg(N).lg(M)) in cost, when previously they were O(lg(N).M), so the lg(N) and lg(M) costs we are trading are algorithmically not very important. But any downside risk occurs only when M is large, and we've reduced the algorithmic complexity here to save us more than we lose. _Typically_ N should be greater than M, though, so we're saving here too - except in situations where the entire contents are essentially overlapped, in which case most of the M will occur in a node that spans a large enough range that it answers all queries without hitting one of the unidirectional branches (at least, the main body of M will not be present in one of the branches). So the only situation we are risking is a very weird border case where there is a huge M node high in the tree that is reached by directly crossing the search interval boundary. Every other situation we're improving, I think?
Optimise IntervalTree - Key: CASSANDRA-8988 URL: https://issues.apache.org/jira/browse/CASSANDRA-8988 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Benedict Priority: Trivial Fix For: 2.1.4 Attachments: 8988.txt We perform a lot of unnecessary comparisons in IntervalTree.IntervalNode.searchInternal. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
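The trade described in the comment above (a node's final scan dropping from O(M) in the number of intersections to O(lg(M)) via binary search) can be illustrated in isolation. This is a simplified sketch with plain integer bounds and a hypothetical method name, not Cassandra's actual IntervalTree.IntervalNode.searchInternal:

```java
// Sketch of the O(lg(M)) refinement: within one tree node, intervals are
// kept sorted by their min bound, and a binary search finds the boundary
// of the candidates intersecting a query point, replacing a linear scan
// over all M intervals the node holds.
public class NodeSearch {
    // Returns how many intervals (sorted by min bound) start at or before
    // `point`, i.e. the candidates that can intersect a query point lying
    // left of the node's center. O(lg M) instead of scanning all M mins.
    static int countMinsAtMost(int[] sortedMins, int point) {
        int lo = 0, hi = sortedMins.length; // invariant: answer is in [lo, hi]
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (sortedMins[mid] <= point) lo = mid + 1;
            else hi = mid;
        }
        return lo; // index of first min > point == count of mins <= point
    }

    public static void main(String[] args) {
        int[] mins = { 1, 3, 5, 8, 13 };
        System.out.println(countMinsAtMost(mins, 6)); // intervals starting at 1, 3, 5
    }
}
```

The same search against a list sorted by max bound (counting maxes at or after the point) gives the symmetric case for a query point right of the center.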
[jira] [Commented] (CASSANDRA-8993) EffectiveIndexInterval calculation is incorrect
[ https://issues.apache.org/jira/browse/CASSANDRA-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14369903#comment-14369903 ] Tyler Hobbs commented on CASSANDRA-8993: I'm okay with the patch to mitigate incorrect calculations, but at the moment I'm at a loss as to where the calculation is going wrong. We have pretty thorough test coverage around downsampling (especially with the modifications to the test in your patch). I can only guess that it may be some interaction between downsampling and early opening. We don't have a way to reproduce yet, correct? Do you think extending the SSTableRewriterTests with checks on {{getPosition(..., EQ)}} and downsampling of index summaries would yield something? EffectiveIndexInterval calculation is incorrect --- Key: CASSANDRA-8993 URL: https://issues.apache.org/jira/browse/CASSANDRA-8993 Project: Cassandra Issue Type: Bug Components: Core Reporter: Benedict Assignee: Benedict Priority: Blocker Fix For: 2.1.4 Attachments: 8993.txt I'm not familiar enough with the calculation itself to understand why this is happening, but see discussion on CASSANDRA-8851 for the background. I've introduced a test case to look for this during downsampling, but it seems to pass just fine, so it may be an artefact of upgrading. The problem was, unfortunately, not manifesting directly because it would simply result in a failed lookup. This was only exposed when early opening used firstKeyBeyond, which does not use the effective interval, and provided the result to getPosition(). I propose a simple fix that ensures a bug here cannot break correctness. Perhaps [~thobbs] can follow up with an investigation as to how it actually went wrong? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-3852) use LIFO queueing policy when queue size exceeds thresholds
[ https://issues.apache.org/jira/browse/CASSANDRA-3852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-3852: Assignee: (was: Benedict) use LIFO queueing policy when queue size exceeds thresholds --- Key: CASSANDRA-3852 URL: https://issues.apache.org/jira/browse/CASSANDRA-3852 Project: Cassandra Issue Type: Improvement Reporter: Peter Schuller Labels: performance Fix For: 3.1 A strict FIFO policy for queueing (between stages) is detrimental to latency and forward progress. Whenever a node is saturated beyond its incoming request rate, *all* requests become slow. If it is consistently saturated, you start effectively timing out on *all* requests. A much better strategy from the point of view of latency is to serve a subset of requests quickly and let some time out, rather than letting all either time out or be slow. Care must be taken such that: * We still guarantee that requests are processed in a reasonably timely manner (we couldn't go strict LIFO for example as that would result in requests getting stuck potentially forever on a loaded node). * Maybe, depending on the previous point's solution, ensure that some requests bypass the policy and get prioritized (e.g., schema migrations, or anything internal to a node). A possible implementation is to go LIFO whenever there are requests in the queue that are older than N milliseconds (or a certain queue size, etc). Benefits: * In all cases where the client directly, or indirectly through other layers, affects a system which has limited concurrency (e.g., thread pool size of X to serve some incoming request rate), it is *much* better for a few requests to time out while most are serviced quickly, than for all requests to become slow, as it doesn't explode concurrency. Think any random non-super-advanced php app, ruby web app, java servlet based app, etc. Essentially, it optimizes very heavily for improved average latencies.
* Systems with strict p95/p99/p999 requirements on latencies should greatly benefit from such a policy. For example, suppose you have a system at 85% of capacity, and it takes a write spike (or has a hiccup like GC pause, blocking on a commit log write, etc). Suppose the hiccup racks up 500 ms worth of requests. At 15% margin at steady state, that takes 500ms * 100/15 = 3.2 seconds to recover. Instead of *all* requests for an entire 3.2 second window being slow, we'd serve requests quickly for 2.7 of those seconds, with the incoming requests during that 500 ms interval being the ones primarily affected. The flip side though is that once you're at the point where more than N percent of requests end up having to wait for others to take LIFO priority, the p(100-N) latencies will actually be *worse* than without this change (but at this point you have to consider what the root reason for those pXX requirements are). * In the case of complete saturation, it allows forward progress. Suppose you're taking 25% more traffic than you are able to handle. Instead of getting backed up and ending up essentially timing out *every single request*, you will succeed in processing up to 75% of them (I say up to because it depends; for example on a {{QUORUM}} request you need at least two of the requests from the co-ordinator to succeed so the percentage is brought down) and allowing clients to make forward progress and get work done, rather than being stuck. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
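The "possible implementation" suggested above (go LIFO whenever the oldest queued request exceeds N milliseconds) can be sketched as a small adaptive queue. This is an illustrative sketch with hypothetical names, not a proposed patch for Cassandra's stage queues:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the proposed policy: serve FIFO while the queue is fresh, but
// switch to LIFO (serve the newest request) once the oldest queued request
// has waited longer than a threshold, so a saturated node still completes
// some requests quickly instead of making all of them slow.
public class AdaptiveQueue {
    static final class Request {
        final String id;
        final long enqueuedAtMillis;
        Request(String id, long enqueuedAtMillis) {
            this.id = id;
            this.enqueuedAtMillis = enqueuedAtMillis;
        }
    }

    private final Deque<Request> deque = new ArrayDeque<>();
    private final long maxWaitMillis;

    AdaptiveQueue(long maxWaitMillis) { this.maxWaitMillis = maxWaitMillis; }

    void enqueue(Request r) { deque.addLast(r); }

    // FIFO while the head is fresh; LIFO once the head is older than the threshold
    Request next(long nowMillis) {
        Request oldest = deque.peekFirst();
        if (oldest == null) return null;
        boolean backedUp = nowMillis - oldest.enqueuedAtMillis > maxWaitMillis;
        return backedUp ? deque.pollLast() : deque.pollFirst();
    }

    public static void main(String[] args) {
        AdaptiveQueue q = new AdaptiveQueue(100);
        q.enqueue(new Request("a", 0));
        q.enqueue(new Request("b", 10));
        q.enqueue(new Request("c", 20));
        System.out.println(q.next(50).id);  // fresh queue: FIFO serves "a"
        System.out.println(q.next(200).id); // backed up: LIFO serves "c"
    }
}
```

Requests stranded at the front past their timeout would still need to be reaped, which is the starvation concern the "care must be taken" bullets raise.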
[jira] [Comment Edited] (CASSANDRA-5791) A nodetool command to validate all sstables in a node
[ https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14370077#comment-14370077 ] Jeff Jirsa edited comment on CASSANDRA-5791 at 3/19/15 8:43 PM: The cause of the tests failing is that checksums are incorrect for compressed sstables again. {noformat} # cat /Users/jeff/.ccm/snapshot/node1/data/test2/metrics-aded07e0ce7711e4897c85b755fc16c4/la-1-big-Digest.adler32 822598308 # java AdlerCheckSum /Users/jeff/.ccm/snapshot/node1/data/test2/metrics-aded07e0ce7711e4897c85b755fc16c4/la-1-big-Data.db 864477438 {noformat} The checksums should have been corrected by CASSANDRA-8778 so I'll figure out where the regression happened tonight after business hours PST. was (Author: jjirsa): The cause of the tests failing is that checksums are incorrect for compressed sstables again. {noformat} # cat /Users/jeff/.ccm/snapshot/node1/data/test2/metrics-aded07e0ce7711e4897c85b755fc16c4/la-1-big-Digest.adler32 822598308 # java AdlerCheckSum /Users/jeff/.ccm/snapshot/node1/data/test2/metrics-aded07e0ce7711e4897c85b755fc16c4/la-1-big-Data.db 864477438 {noformat} The checksums should have been corrected by CASSANDRA-8778 so I'll figure out where the regression happened tonight after business hours PST. A nodetool command to validate all sstables in a node - Key: CASSANDRA-5791 URL: https://issues.apache.org/jira/browse/CASSANDRA-5791 Project: Cassandra Issue Type: New Feature Components: Core Reporter: sankalp kohli Assignee: Jeff Jirsa Priority: Minor Fix For: 3.0 Attachments: cassandra-5791-patch-3.diff, cassandra-5791.patch-2 Currently there is no nodetool command to validate all sstables on disk. The only way to do this is to run a repair and see if it succeeds. But we cannot repair the system keyspace. Also we can run upgrade sstables but that rewrites all the sstables. This command should check the hash of all sstables and return whether all data is readable or not. This should NOT care about consistency.
The compressed sstables do not have a hash, so I'm not sure how it will work there. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-5791) A nodetool command to validate all sstables in a node
[ https://issues.apache.org/jira/browse/CASSANDRA-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14370077#comment-14370077 ] Jeff Jirsa commented on CASSANDRA-5791: --- The cause of the tests failing is that checksums are incorrect for compressed sstables again. {noformat} # cat /Users/jeff/.ccm/snapshot/node1/data/test2/metrics-aded07e0ce7711e4897c85b755fc16c4/la-1-big-Digest.adler32 822598308 # java AdlerCheckSum /Users/jeff/.ccm/snapshot/node1/data/test2/metrics-aded07e0ce7711e4897c85b755fc16c4/la-1-big-Data.db 864477438 {noformat} The checksums should have been corrected by CASSANDRA-8778 so I'll figure out where the regression happened tonight after business hours PST. A nodetool command to validate all sstables in a node - Key: CASSANDRA-5791 URL: https://issues.apache.org/jira/browse/CASSANDRA-5791 Project: Cassandra Issue Type: New Feature Components: Core Reporter: sankalp kohli Assignee: Jeff Jirsa Priority: Minor Fix For: 3.0 Attachments: cassandra-5791-patch-3.diff, cassandra-5791.patch-2 Currently there is no nodetool command to validate all sstables on disk. The only way to do this is to run a repair and see if it succeeds. But we cannot repair the system keyspace. Also we can run upgrade sstables but that rewrites all the sstables. This command should check the hash of all sstables and return whether all data is readable or not. This should NOT care about consistency. The compressed sstables do not have a hash, so I'm not sure how it will work there. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
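The digest comparison in the {noformat} block above can be sketched with the JDK's `java.util.zip.Adler32`: recompute the checksum of the -Data.db contents and compare it to the decimal value stored in the -Digest.adler32 file. `AdlerCheck` and its methods are hypothetical names (the `AdlerCheckSum` tool shown above is the commenter's own utility):

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.Adler32;

// Sketch of the digest check: compute Adler-32 over the data and compare
// it to the decimal checksum stored in the companion digest file.
public class AdlerCheck {
    static long adler32(byte[] data) {
        Adler32 adler = new Adler32();
        adler.update(data, 0, data.length);
        return adler.getValue();
    }

    // The -Digest.adler32 file holds the checksum as a decimal string.
    static boolean digestMatches(byte[] data, String digestFileContents) {
        long stored = Long.parseLong(digestFileContents.trim());
        return adler32(data) == stored;
    }

    public static void main(String[] args) {
        byte[] data = "sstable contents".getBytes(StandardCharsets.UTF_8);
        long computed = adler32(data);
        System.out.println(digestMatches(data, String.valueOf(computed)));     // true
        System.out.println(digestMatches(data, String.valueOf(computed + 1))); // false, the regression case above
    }
}
```

A mismatch like the 822598308 vs 864477438 pair in the comment means either the data file changed after the digest was written or the digest was computed over different bytes.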
[jira] [Created] (CASSANDRA-8996) dtests should pass on trunk
Ariel Weisberg created CASSANDRA-8996: - Summary: dtests should pass on trunk Key: CASSANDRA-8996 URL: https://issues.apache.org/jira/browse/CASSANDRA-8996 Project: Cassandra Issue Type: Task Reporter: Ariel Weisberg Assignee: Michael Shuler Not having the dtests report that they pass makes it non-obvious when a new one breaks. Either fix the tests so that they pass, or exclude the known failures from the success criteria. For excluded tests, make sure there is a JIRA covering them so we can make sure someone is following up shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-8997) Bootstrap dtests so they pass on trunk
Ariel Weisberg created CASSANDRA-8997: - Summary: Bootstrap dtests so they pass on trunk Key: CASSANDRA-8997 URL: https://issues.apache.org/jira/browse/CASSANDRA-8997 Project: Cassandra Issue Type: Task Reporter: Ariel Weisberg Assignee: Michael Shuler Get to passing as soon as possible by excluding failing tests, so that we can have a history of successful runs, track new regressions, and make it obvious when there are flapping tests. -- This message was sent by Atlassian JIRA (v6.3.4#6332)