[jira] [Updated] (CASSANDRA-9104) Unit test failures, trunk + Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-9104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-9104: --- Attachment: 9104_RecoveryManager_v2.txt 9104_KeyCache_ScrubTest_v2.txt h6. KeyCacheTest / ScrubTest Revised. I agree that promoting the disabling of early re-open on Windows makes it less brittle; I was focused on addressing the utest failure and had already promoted disabling of early re-open on Windows in SSTRW. Good call. bq. Is this the right bail-out check? What happens if preemptive open is enabled, but has a large interval that wasn't reached yet? Not sure I follow - by checking for the interval being Long.MAX_VALUE, we're checking whether early re-open is disabled. This change should have no impact on cases where preemptive open is enabled with a high value, as that will still be less than Long.MAX_VALUE and should process the abort.moveStarts call as before. bq. Could we also add the Linux test you did, using SSTableRewriter.overrideOpenInterval? Added a noEarlyOpen test. Attached KeyCache and Scrub as a single patch as they're touching similar components. h6. RecoveryTest On first run of the CLSM, regardless of the flag, a new segment will be created: {noformat} if (availableSegments.isEmpty() && (activeSegments.isEmpty() || createReserveSegments)) {noformat} If availableSegments and activeSegments are both empty (i.e. on startup or after a utest clear), a new segment will be created and can potentially race with the recover() process. This problem arises in unit tests specifically because there's no buffer between starting the CLSM and recovering the files, whereas in CassandraDaemon.java we go through quite a bit between the static init/start of the CLSM and the recover call.
I've added a new method to register with the signal in the CLSM for available segments and added that to CommitLog.recover, as this resolves the unit test issues and gives us more robust protection against this race in regular startup, rather than relying on CassandraDaemon.setup taking long enough to get the activeSegment into the CLQ. Along with the above, I adjusted the various unsafe start/stop methods to both avoid calling wakeManager and adjust the reserve flag as appropriate. Unit test failures, trunk + Windows --- Key: CASSANDRA-9104 URL: https://issues.apache.org/jira/browse/CASSANDRA-9104 Project: Cassandra Issue Type: Test Reporter: Joshua McKenzie Assignee: Joshua McKenzie Labels: Windows Fix For: 3.0 Attachments: 9104_CFSTest.txt, 9104_KeyCache.txt, 9104_KeyCache_ScrubTest_v2.txt, 9104_RecoveryManager.txt, 9104_RecoveryManager_v2.txt, 9104_ScrubTest.txt A variety of different test failures has cropped up over the past 2-3 weeks: h6. -org.apache.cassandra.cql3.UFTest FAILED (timeout)- // No longer failing / timing out h6. testLoadNewSSTablesAvoidsOverwrites(org.apache.cassandra.db.ColumnFamilyStoreTest): FAILED {noformat} 12 SSTables unexpectedly exist junit.framework.AssertionFailedError: 12 SSTables unexpectedly exist at org.apache.cassandra.db.ColumnFamilyStoreTest.testLoadNewSSTablesAvoidsOverwrites(ColumnFamilyStoreTest.java:1896) {noformat} h6. org.apache.cassandra.db.KeyCacheTest FAILED {noformat} expected:<4> but was:<2> junit.framework.AssertionFailedError: expected:<4> but was:<2> at org.apache.cassandra.db.KeyCacheTest.assertKeyCacheSize(KeyCacheTest.java:221) at org.apache.cassandra.db.KeyCacheTest.testKeyCache(KeyCacheTest.java:181) {noformat} h6. 
RecoveryManagerTest: {noformat} org.apache.cassandra.db.RecoveryManagerTest FAILED org.apache.cassandra.db.RecoveryManager2Test FAILED org.apache.cassandra.db.RecoveryManager3Test FAILED org.apache.cassandra.db.RecoveryManagerTruncateTest FAILED All are the following: java.nio.file.AccessDeniedException: build\test\cassandra\commitlog;0\CommitLog-5-1427995105229.log FSWriteError in build\test\cassandra\commitlog;0\CommitLog-5-1427995105229.log at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:128) at org.apache.cassandra.db.commitlog.CommitLogSegmentManager.recycleSegment(CommitLogSegmentManager.java:360) at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:156) at org.apache.cassandra.db.RecoveryManagerTest.testNothingToRecover(RecoveryManagerTest.java:75) Caused by: java.nio.file.AccessDeniedException: build\test\cassandra\commitlog;0\CommitLog-5-1427995105229.log at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83) at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97) at
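The register-with-the-signal fix described in the comment above can be sketched with a plain latch. This is illustrative only; the class and method names below are hypothetical and not the actual CommitLogSegmentManager API:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only -- not the real CLSM API. The idea: recover()
// registers for a signal and blocks until the manager thread has actually
// produced an available segment, instead of racing it on startup.
class SegmentReadySignal
{
    private final CountDownLatch ready = new CountDownLatch(1);

    // Called by the manager thread once a segment lands in availableSegments.
    void segmentBecameAvailable()
    {
        ready.countDown();
    }

    // Called from recover() before touching commit log files on disk;
    // returns false if no segment showed up within the timeout.
    boolean awaitAvailableSegment(long timeout, TimeUnit unit) throws InterruptedException
    {
        return ready.await(timeout, unit);
    }
}
```

A timed await keeps a broken manager thread from hanging recovery forever, which matters more in unit tests where there is no surrounding daemon to notice the stall.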
cassandra git commit: Allow Cassandra config to be updated to restart Daemon without unloading classes
Repository: cassandra
Updated Branches: refs/heads/trunk d19a6af66 -> 4a3ca5c70

Allow Cassandra config to be updated to restart Daemon without unloading classes

Patch by Emmanuel Hugonnet, reviewed by aweisberg for CASSANDRA-9046

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4a3ca5c7
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4a3ca5c7
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4a3ca5c7

Branch: refs/heads/trunk
Commit: 4a3ca5c70e94d6b9329b7289f61ffbe708828ebf
Parents: d19a6af
Author: Brandon Williams brandonwilli...@apache.org
Authored: Mon Apr 13 13:34:31 2015 -0500
Committer: Brandon Williams brandonwilli...@apache.org
Committed: Mon Apr 13 13:34:31 2015 -0500

--
 CHANGES.txt                                                  | 1 +
 build.xml                                                    | 2 +-
 src/java/org/apache/cassandra/config/DatabaseDescriptor.java | 2 +-
 3 files changed, 3 insertions(+), 2 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a3ca5c7/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index d6e8f57..9f89e3f 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.0
+ * Allow cassandra config to be updated to restart daemon without unloading classes (CASSANDRA-9046)
 * Don't initialize compaction writer before checking if iter is empty (CASSANDRA-9117)
 * Remove line number generation from default logback.xml
 * Don't execute any functions at prepare-time (CASSANDRA-9037)

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a3ca5c7/build.xml
--
diff --git a/build.xml b/build.xml
index 047f9a8..c2c60f3 100644
--- a/build.xml
+++ b/build.xml
@@ -326,7 +326,7 @@
          <dependency groupId="commons-codec" artifactId="commons-codec" version="1.2"/>
          <dependency groupId="org.apache.commons" artifactId="commons-lang3" version="3.1"/>
          <dependency groupId="org.apache.commons" artifactId="commons-math3" version="3.2"/>
-         <dependency groupId="com.googlecode.concurrentlinkedhashmap" artifactId="concurrentlinkedhashmap-lru" version="1.3"/>
+         <dependency groupId="com.googlecode.concurrentlinkedhashmap" artifactId="concurrentlinkedhashmap-lru" version="1.4"/>
          <dependency groupId="org.antlr" artifactId="antlr" version="3.5.2">
            <exclusion groupId="org.antlr" artifactId="stringtemplate"/>
          </dependency>

http://git-wip-us.apache.org/repos/asf/cassandra/blob/4a3ca5c7/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
--
diff --git a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
index 781dcfa..fd1faeb 100644
--- a/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
+++ b/src/java/org/apache/cassandra/config/DatabaseDescriptor.java
@@ -254,7 +254,7 @@ public class DatabaseDescriptor
         }
     }

-    private static void applyConfig(Config config) throws ConfigurationException
+    public static void applyConfig(Config config) throws ConfigurationException
     {
         conf = config;
[jira] [Assigned] (CASSANDRA-9172) Test cqlsh against degraded clusters
[ https://issues.apache.org/jira/browse/CASSANDRA-9172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Witschey reassigned CASSANDRA-9172: --- Assignee: Jim Witschey Test cqlsh against degraded clusters Key: CASSANDRA-9172 URL: https://issues.apache.org/jira/browse/CASSANDRA-9172 Project: Cassandra Issue Type: Test Components: Tests Reporter: Tyler Hobbs Assignee: Jim Witschey Priority: Minor Labels: retrospective_generated To prevent bugs like CASSANDRA-8512, cqlsh should be tested against degraded clusters (down nodes, schema disagreements) as part of the regular testing process. As suggested by Ariel, this probably makes the most sense as one component in the kitchen sink test harness. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9182) NPE during startup
[ https://issues.apache.org/jira/browse/CASSANDRA-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-9182: --- Reproduced In: 2.1.3 Fix Version/s: 2.1.5 NPE during startup -- Key: CASSANDRA-9182 URL: https://issues.apache.org/jira/browse/CASSANDRA-9182 Project: Cassandra Issue Type: Bug Components: Core Reporter: Andrey Fix For: 2.1.5 Environment: * cassandra 2.1.3 Got an NPE during startup. Here are the steps to reproduce (however, not sure if that will be enough): * start a single node cluster and fill it with data (replication factor 1) * start a second node * in the second node's logs: {code} ERROR [Thread-3] 2015-04-13 07:22:58,558 CassandraDaemon.java - Exception in thread Thread[Thread-3,5,main] java.lang.NullPointerException: null at org.apache.cassandra.db.PagedRangeCommand$Serializer.deserialize(PagedRangeCommand.java:165) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.PagedRangeCommand$Serializer.deserialize(PagedRangeCommand.java:124) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:168) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:150) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82) ~[apache-cassandra-2.1.3.jar:2.1.3] INFO [GossipStage:1] 2015-04-13 07:23:00,149 Gossiper.java - Node /172.30.0.86 is now part of the cluster ERROR [MigrationStage:1] 2015-04-13 07:23:00,176 MigrationTask.java - Can't send migration request: node /172.30.0.86 is down. ERROR [MigrationStage:1] 2015-04-13 07:23:00,178 MigrationTask.java - Can't send migration request: node /172.30.0.86 is down. 
INFO [HANDSHAKE-/172.30.0.86] 2015-04-13 07:23:00,184 OutboundTcpConnection.java - Handshaking version with /172.30.0.86 INFO [SharedPool-Worker-1] 2015-04-13 07:23:00,347 Gossiper.java - InetAddress /172.30.0.86 is now UP INFO [HANDSHAKE-/172.30.0.86] 2015-04-13 07:23:00,351 OutboundTcpConnection.java - Handshaking version with /172.30.0.86 INFO [SharedPool-Worker-1] 2015-04-13 07:23:00,509 Gossiper.java - InetAddress /172.30.0.86 is now UP {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9182) NPE during startup
[ https://issues.apache.org/jira/browse/CASSANDRA-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492968#comment-14492968 ] Philip Thompson commented on CASSANDRA-9182: Can you give some information on the schema of the keyspace/table that you used to insert the data? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9182) NPE during startup
[ https://issues.apache.org/jira/browse/CASSANDRA-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492990#comment-14492990 ] Philip Thompson commented on CASSANDRA-9182: Am I correct in thinking that kk == million? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-9181) Improve index versus secondary index selection
Jeremy Hanna created CASSANDRA-9181: --- Summary: Improve index versus secondary index selection Key: CASSANDRA-9181 URL: https://issues.apache.org/jira/browse/CASSANDRA-9181 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jeremy Hanna There is a special case for secondary indexes when you always supply the partition key. For example, say you have a family with ID a456 which has 6 family members, and a secondary index on first name. Currently, if you do a query like {{select * from families where id = 'a456' and firstname = 'alowishus';}} you can see from a query trace that it will first scan the entire cluster based on the firstname, then look for the key within that. If it's not terribly invasive, I think this would be a valid use case to narrow down the results by key first. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9180) Failed bootstrap/replace attempts persist entries in system.peers
[ https://issues.apache.org/jira/browse/CASSANDRA-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-9180: Reproduced In: 2.0.0 Failed bootstrap/replace attempts persist entries in system.peers - Key: CASSANDRA-9180 URL: https://issues.apache.org/jira/browse/CASSANDRA-9180 Project: Cassandra Issue Type: Bug Components: Core Reporter: Brandon Williams Assignee: Brandon Williams Fix For: 2.0.15 In working on CASSANDRA-8336, I discovered vanilla C* has this problem. Just start a bootstrap or replace and kill it during the ring info gathering phase. System.peers, the gift that keeps on giving. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8576) Primary Key Pushdown For Hadoop
[ https://issues.apache.org/jira/browse/CASSANDRA-8576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492934#comment-14492934 ] Piotr Kołaczkowski commented on CASSANDRA-8576: --- {noformat} pig.registerQuery("composite_rows = LOAD 'cql://cql3ks/compositekeytable?" + defaultParameters + nativeParameters + "where_clause=key1%20%3D%20%27key1%27%20and%20key2%20%3D%20111%20and%20column1%3D100&page_size=2' USING CqlNativeStorage();"); {noformat} Things like this make my eyes cry. I know it already was like this, but why can't we just specify the query in a human-readable form and call a function to url-encode it? Primary Key Pushdown For Hadoop --- Key: CASSANDRA-8576 URL: https://issues.apache.org/jira/browse/CASSANDRA-8576 Project: Cassandra Issue Type: Improvement Components: Hadoop Reporter: Russell Alexander Spitzer Assignee: Alex Liu Fix For: 2.1.5 Attachments: 8576-2.1-branch.txt, 8576-trunk.txt I've heard reports from several users that they would like to have predicate pushdown functionality for hadoop (Hive in particular) based services. Example use case: a table with wide partitions, one per customer; the application team has HQL they would like to run on a single customer. Currently time to complete scales with the number of customers, since the Input Format can't push down a primary key predicate. The current implementation requires a full table scan (since it can't recognize that a single partition was specified). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
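A rough sketch of the human-readable alternative suggested above; the helper name is hypothetical, and URLEncoder's '+' for spaces is rewritten to '%20' to match the encoded form in the query string:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class CqlWhereClause
{
    // Hypothetical helper: take the where clause as readable CQL and
    // url-encode it the way the flattened query string above is encoded.
    static String encodeWhereClause(String readableClause) throws UnsupportedEncodingException
    {
        // URLEncoder emits '+' for spaces; the connector URL form uses %20.
        return URLEncoder.encode(readableClause, "UTF-8").replace("+", "%20");
    }

    public static void main(String[] args) throws UnsupportedEncodingException
    {
        String clause = "key1 = 'key1' and key2 = 111 and column1=100";
        // Prints: where_clause=key1%20%3D%20%27key1%27%20and%20key2%20%3D%20111%20and%20column1%3D100
        System.out.println("where_clause=" + encodeWhereClause(clause));
    }
}
```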
[jira] [Updated] (CASSANDRA-7776) Allow multiple MR jobs to concurrently write to the same column family from the same node using CqlBulkOutputFormat
[ https://issues.apache.org/jira/browse/CASSANDRA-7776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Pak updated CASSANDRA-7776: Fix Version/s: 2.1.1 Allow multiple MR jobs to concurrently write to the same column family from the same node using CqlBulkOutputFormat --- Key: CASSANDRA-7776 URL: https://issues.apache.org/jira/browse/CASSANDRA-7776 Project: Cassandra Issue Type: Improvement Components: Hadoop Reporter: Paul Pak Assignee: Paul Pak Priority: Minor Labels: cql3, hadoop Fix For: 2.1.1 Attachments: trunk-7776-v1.txt After sstable files are written, all files in the specified output directory are loaded (transferred) to the remote cassandra cluster. If multiple writes occur on a node to the same table (i.e. directory), then the multiple load processes end up transferring the same sstable files multiple times. Furthermore, if directory cleanup of successful outputs is set to occur ([CASSANDRA-|https://issues.apache.org/jira/browse/CASSANDRA-]), then there could be errors caused by write/load contention. This can be simply remedied by using unique output directories for each MR job. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9023) 2.0.13 write timeouts on driver
[ https://issues.apache.org/jira/browse/CASSANDRA-9023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492850#comment-14492850 ] Ariel Weisberg commented on CASSANDRA-9023: --- [~anishek] There are a bunch of test cases there. Is there a specific one I need to run that causes the issue to occur? 2.0.13 write timeouts on driver --- Key: CASSANDRA-9023 URL: https://issues.apache.org/jira/browse/CASSANDRA-9023 Project: Cassandra Issue Type: Bug Environment: For testing, using only a single node. Hardware configuration as follows: cpu: CPU(s): 16 On-line CPU(s) list: 0-15 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU MHz: 2000.174 L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 20480K NUMA node0 CPU(s): 0-15 OS: Linux version 2.6.32-504.8.1.el6.x86_64 (mockbu...@c6b9.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) Disk: There is only a single disk in RAID; I think space is 500 GB, used is 5 GB. Reporter: anishek Assignee: Ariel Weisberg Fix For: 2.0.15 Attachments: out_system.log Initially asked @ http://www.mail-archive.com/user@cassandra.apache.org/msg41621.html Was suggested to post here. If any more details are required please let me know. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9046) Allow Cassandra config to be updated to restart Daemon without unloading classes
[ https://issues.apache.org/jira/browse/CASSANDRA-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492890#comment-14492890 ] Brandon Williams commented on CASSANDRA-9046: - Committed this, but leaving open due to jira misbehaving today. Allow Cassandra config to be updated to restart Daemon without unloading classes Key: CASSANDRA-9046 URL: https://issues.apache.org/jira/browse/CASSANDRA-9046 Project: Cassandra Issue Type: Improvement Components: Config Reporter: Emmanuel Hugonnet Fix For: 3.0 Attachments: 0001-CASSANDRA-9046-Allow-Cassandra-config-to-be-updated-.patch Make applyConfig public in DatabaseDescriptor so that if we embed C* we can restart it after some configuration change without having to stop the whole application to unload the class which is configured once and for all in a static block. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8576) Primary Key Pushdown For Hadoop
[ https://issues.apache.org/jira/browse/CASSANDRA-8576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14492929#comment-14492929 ] Piotr Kołaczkowski commented on CASSANDRA-8576: --- The whole {{AbstractColumnFamilyInputFormat#getToken}} thing - this is quite a complex piece of logic, and always invoked. Not sure if we really want to merge it into 2.1.5; I'm afraid this may destabilize things. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7304) Ability to distinguish between NULL and UNSET values in Prepared Statements
[ https://issues.apache.org/jira/browse/CASSANDRA-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oded Peer updated CASSANDRA-7304: - Attachment: 7304-07.patch Right. Fixed in 7304-07.patch Ability to distinguish between NULL and UNSET values in Prepared Statements --- Key: CASSANDRA-7304 URL: https://issues.apache.org/jira/browse/CASSANDRA-7304 Project: Cassandra Issue Type: Sub-task Reporter: Drew Kutcharian Assignee: Oded Peer Labels: cql, protocolv4 Fix For: 3.0 Attachments: 7304-03.patch, 7304-04.patch, 7304-05.patch, 7304-06.patch, 7304-07.patch, 7304-2.patch, 7304.patch Currently Cassandra inserts tombstones when a value of a column is bound to NULL in a prepared statement. At higher insert rates managing all these tombstones becomes an unnecessary overhead. This limits the usefulness of the prepared statements since developers have to either create multiple prepared statements (each with a different combination of column names, which at times is just unfeasible because of the sheer number of possible combinations) or fall back to using regular (non-prepared) statements. This JIRA is here to explore the possibility of either: A. Have a flag on prepared statements that once set, tells Cassandra to ignore null columns or B. Have an UNSET value which makes Cassandra skip the null columns and not tombstone them Basically, in the context of a prepared statement, a null value means delete, but we don’t have anything that means ignore (besides creating a new prepared statement without the ignored column). Please refer to the original conversation on DataStax Java Driver mailing list for more background: https://groups.google.com/a/lists.datastax.com/d/topic/java-driver-user/cHE3OOSIXBU/discussion *EDIT 18/12/14 - [~odpeer] Implementation Notes:* The motivation hasn't changed. Protocol version 4 specifies that bind variables do not require having a value when executing a statement. Bind variables without a value are called 'unset'. 
The 'unset' bind variable is serialized as the int value '-2' with no following bytes. \\ \\
* An unset bind variable in an EXECUTE or BATCH request
** On a {{value}} does not modify the value and does not create a tombstone
** On the {{ttl}} clause is treated as 'unlimited'
** On the {{timestamp}} clause is treated as 'now'
** On a map key or a list index throws {{InvalidRequestException}}
** On a {{counter}} increment or decrement operation does not change the counter value, e.g. {{UPDATE my_tab SET c = c - ? WHERE k = 1}} does not change the value of counter {{c}}
** On a tuple field or UDT field throws {{InvalidRequestException}}
* An unset bind variable in a QUERY request
** On a partition column, clustering column or index column in the {{WHERE}} clause throws {{InvalidRequestException}}
** On the {{limit}} clause is treated as 'unlimited' -- This message was sent by Atlassian JIRA (v6.3.4#6332)
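The "int value '-2' without following bytes" wire form described in the notes above can be sketched as follows. This is an illustrative sketch of the encoding only, not Cassandra's actual serializer, and the class and method names are hypothetical:

```java
import java.nio.ByteBuffer;

public class UnsetValue
{
    // Sketch of the protocol-v4 form described above: a bound value whose
    // length field is -2 marks 'unset' (by contrast, -1 marks null), and in
    // neither case do any payload bytes follow the length.
    static ByteBuffer serializeUnset()
    {
        ByteBuffer buf = ByteBuffer.allocate(4);
        buf.putInt(-2);   // length == -2, no bytes follow
        buf.flip();
        return buf;
    }

    // Checks the sentinel without consuming the buffer's position.
    static boolean isUnset(ByteBuffer value)
    {
        return value.remaining() == 4 && value.getInt(value.position()) == -2;
    }
}
```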
[jira] [Assigned] (CASSANDRA-6335) Hints broken for nodes that change broadcast address
[ https://issues.apache.org/jira/browse/CASSANDRA-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McGuire reassigned CASSANDRA-6335: --- Assignee: Shawn Kumar (was: Ryan McGuire) Hints broken for nodes that change broadcast address Key: CASSANDRA-6335 URL: https://issues.apache.org/jira/browse/CASSANDRA-6335 Project: Cassandra Issue Type: Bug Components: Core Reporter: Rick Branson Assignee: Shawn Kumar When a node changes it's broadcast address, the transition process works properly, but hints that are destined for it can't be delivered because of the address change. It produces an exception: java.lang.AssertionError: Missing host ID for 10.1.60.22 at org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:598) at org.apache.cassandra.service.StorageProxy$5.runMayThrow(StorageProxy.java:567) at org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:1679) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-9180) Failed bootstrap/replace attempts persist entries in system.peers
Brandon Williams created CASSANDRA-9180: --- Summary: Failed bootstrap/replace attempts persist entries in system.peers Key: CASSANDRA-9180 URL: https://issues.apache.org/jira/browse/CASSANDRA-9180 Project: Cassandra Issue Type: Bug Components: Core Reporter: Brandon Williams Assignee: Brandon Williams Fix For: 2.0.15 In working on CASSANDRA-8336, I discovered vanilla C* has this problem. Just start a bootstrap or replace and kill it during the ring info gathering phase. System.peers, the gift that keeps on giving. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8336) Quarantine nodes after receiving the gossip shutdown message
[ https://issues.apache.org/jira/browse/CASSANDRA-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-8336: Attachment: 8366-v5.txt v5 creates a SILENT_SHUTDOWN_STATES list that stop() now checks, which is DEAD_STATES plus bootstrap and left. Since LEFT is in there, we don't need stopSilently anymore, which I rather like. Unfortunately the issue I previously mentioned about failed bootstraps is not the fault of this patch but a bug in 2.0 itself I happened to discover, which we can address in CASSANDRA-9180. Quarantine nodes after receiving the gossip shutdown message Key: CASSANDRA-8336 URL: https://issues.apache.org/jira/browse/CASSANDRA-8336 Project: Cassandra Issue Type: Bug Components: Core Reporter: Brandon Williams Assignee: Brandon Williams Fix For: 2.0.15 Attachments: 8336-v2.txt, 8336-v3.txt, 8336-v4.txt, 8336.txt, 8366-v5.txt In CASSANDRA-3936 we added a gossip shutdown announcement. The problem here is that this isn't sufficient; you can still get TOEs and have to wait on the FD to figure things out. This happens due to gossip propagation time and variance; if node X shuts down and sends the message to Y, but Z has a greater gossip version than Y for X and has not yet received the message, it can initiate gossip with Y and thus mark X alive again. I propose quarantining to solve this, however I feel it should be a -D parameter you have to specify, so as not to destroy current dev and test practices, since this will mean a node that shuts down will not be able to restart until the quarantine expires. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9180) Failed bootstrap/replace attempts persist entries in system.peers
[ https://issues.apache.org/jira/browse/CASSANDRA-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-9180: Reviewer: Brandon Williams Assignee: Tyler Hobbs (was: Brandon Williams) Traced this back to CASSANDRA-6053, at least for bootstrapping. We don't look at the STATUS, so in the switch statement in SS.onChange, when any non-STATUS state changes (which is guaranteed to happen) we persist the peer. Why replacing is also affected I'm not sure, since we check for dead state and bail out, but my guess is STATUS isn't the first state processed, so the check doesn't know the state and fails. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9182) NPE during startup
[ https://issues.apache.org/jira/browse/CASSANDRA-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492977#comment-14492977 ] Andrey commented on CASSANDRA-9182: --- I don't use anything specific. Here is the output from DESCRIBE KEYSPACE {code} CREATE KEYSPACE mykeyspalce WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = true; CREATE TABLE mykeyspalce.table ( id uuid PRIMARY KEY, email text ) WITH bloom_filter_fp_chance = 0.01 AND caching = '{keys:ALL, rows_per_partition:NONE}' AND comment = '' AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32'} AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'} AND dclocal_read_repair_chance = 0.1 AND default_time_to_live = 0 AND gc_grace_seconds = 864000 AND max_index_interval = 2048 AND memtable_flush_period_in_ms = 0 AND min_index_interval = 128 AND read_repair_chance = 0.0 AND speculative_retry = '99.0PERCENTILE'; {code} Most of the tables follow the mykeyspalce.table pattern. No indexes. NPE during startup -- Key: CASSANDRA-9182 URL: https://issues.apache.org/jira/browse/CASSANDRA-9182 Project: Cassandra Issue Type: Bug Components: Core Reporter: Andrey Fix For: 2.1.5 Environment: * cassandra 2.1.3 Got NPE during startup. Here are the steps to reproduce (though I'm not sure they will be enough): * start single node cluster. fill it with data (replication factor 1) * start second node. 
* in second node's logs: {code} ERROR [Thread-3] 2015-04-13 07:22:58,558 CassandraDaemon.java - Exception in thread Thread[Thread-3,5,main] java.lang.NullPointerException: null at org.apache.cassandra.db.PagedRangeCommand$Serializer.deserialize(PagedRangeCommand.java:165) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.PagedRangeCommand$Serializer.deserialize(PagedRangeCommand.java:124) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:168) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:150) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82) ~[apache-cassandra-2.1.3.jar:2.1.3] INFO [GossipStage:1] 2015-04-13 07:23:00,149 Gossiper.java - Node /172.30.0.86 is now part of the cluster ERROR [MigrationStage:1] 2015-04-13 07:23:00,176 MigrationTask.java - Can't send migration request: node /172.30.0.86 is down. ERROR [MigrationStage:1] 2015-04-13 07:23:00,178 MigrationTask.java - Can't send migration request: node /172.30.0.86 is down. INFO [HANDSHAKE-/172.30.0.86] 2015-04-13 07:23:00,184 OutboundTcpConnection.java - Handshaking version with /172.30.0.86 INFO [SharedPool-Worker-1] 2015-04-13 07:23:00,347 Gossiper.java - InetAddress /172.30.0.86 is now UP INFO [HANDSHAKE-/172.30.0.86] 2015-04-13 07:23:00,351 OutboundTcpConnection.java - Handshaking version with /172.30.0.86 INFO [SharedPool-Worker-1] 2015-04-13 07:23:00,509 Gossiper.java - InetAddress /172.30.0.86 is now UP {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9182) NPE during startup
[ https://issues.apache.org/jira/browse/CASSANDRA-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492983#comment-14492983 ] Philip Thompson commented on CASSANDRA-9182: When you say fill it with data, about how much are you referring to? Do you mean literally fill the disks? NPE during startup -- Key: CASSANDRA-9182 URL: https://issues.apache.org/jira/browse/CASSANDRA-9182 Project: Cassandra Issue Type: Bug Components: Core Reporter: Andrey Fix For: 2.1.5 Environment: * cassandra 2.1.3 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9182) NPE during startup
[ https://issues.apache.org/jira/browse/CASSANDRA-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492988#comment-14492988 ] Andrey commented on CASSANDRA-9182: --- Sorry, I'm not sure if it is related to the issue, but the table data is the following: * 1 table with 17kk rows * 1 table with 1.5kk rows * around 10 tables that are empty or have ~100 rows Disk usage is around 50%. NPE during startup -- Key: CASSANDRA-9182 URL: https://issues.apache.org/jira/browse/CASSANDRA-9182 Project: Cassandra Issue Type: Bug Components: Core Reporter: Andrey Fix For: 2.1.5 Environment: * cassandra 2.1.3 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-6335) Hints broken for nodes that change broadcast address
[ https://issues.apache.org/jira/browse/CASSANDRA-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Kumar updated CASSANDRA-6335: --- Tester: Shawn Kumar Hints broken for nodes that change broadcast address Key: CASSANDRA-6335 URL: https://issues.apache.org/jira/browse/CASSANDRA-6335 Project: Cassandra Issue Type: Bug Components: Core Reporter: Rick Branson Assignee: Ryan McGuire When a node changes its broadcast address, the transition process works properly, but hints that are destined for it can't be delivered because of the address change. It produces an exception: java.lang.AssertionError: Missing host ID for 10.1.60.22 at org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:598) at org.apache.cassandra.service.StorageProxy$5.runMayThrow(StorageProxy.java:567) at org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:1679) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8576) Primary Key Pushdown For Hadoop
[ https://issues.apache.org/jira/browse/CASSANDRA-8576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492720#comment-14492720 ] Piotr Kołaczkowski commented on CASSANDRA-8576: --- AbstractColumnFamilyInputFormat#getToken: {noformat} if (keyValidator instanceof CompositeType) return partitioner.getToken(((CompositeType) keyValidator).build(keyValues)); // should be CompositeType.build, because this is a static method else return partitioner.getToken(eqColumns.get(keys.get(0))); {noformat} Primary Key Pushdown For Hadoop --- Key: CASSANDRA-8576 URL: https://issues.apache.org/jira/browse/CASSANDRA-8576 Project: Cassandra Issue Type: Improvement Components: Hadoop Reporter: Russell Alexander Spitzer Assignee: Alex Liu Fix For: 2.1.5 Attachments: 8576-2.1-branch.txt, 8576-trunk.txt I've heard reports from several users that they would like to have predicate pushdown functionality for hadoop (Hive in particular) based services. Example usecase Table with wide partitions, one per customer Application team has HQL they would like to run on a single customer Currently time to complete scales with number of customers since Input Format can't pushdown primary key predicate Current implementation requires a full table scan (since it can't recognize that a single partition was specified) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
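A minimal illustration of Piotr's point, using a hypothetical class (not Cassandra's CompositeType): Java lets a static method be invoked through an instance reference, which compiles but hides the fact that the receiver is ignored, so spelling it ClassName.method(...) is clearer:

```java
// Illustration only (hypothetical Composite class, not Cassandra's CompositeType):
// a static method called via an instance compiles, but the instance is ignored.
class Composite {
    static String build(String... parts) {
        return String.join(":", parts);
    }
}

public class StaticCallDemo {
    public static void main(String[] args) {
        Composite c = new Composite();
        System.out.println(c.build("k1", "k2"));         // discouraged: looks instance-bound
        System.out.println(Composite.build("k1", "k2")); // preferred: clearly static
    }
}
```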
[jira] [Comment Edited] (CASSANDRA-9023) 2.0.13 write timeouts on driver
[ https://issues.apache.org/jira/browse/CASSANDRA-9023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492850#comment-14492850 ] Ariel Weisberg edited comment on CASSANDRA-9023 at 4/13/15 6:58 PM: [~anishek] There are a bunch of test cases there. Is there a specific one I need to run that causes the issue to occur? I'm also getting a lot of {noformat} Apr 13, 2015 2:57:37 PM com.google.common.util.concurrent.ExecutionList executeListener SEVERE: RuntimeException while executing runnable com.google.common.util.concurrent.Futures$5@344b6582 with executor com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService@69f84d90 java.lang.RuntimeException: something failed at com.anishek.threading.DefaultCallback.onFailure(DefaultCallback.java:23) at com.google.common.util.concurrent.Futures$5.run(Futures.java:1222) at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) at com.google.common.util.concurrent.ExecutionList.executeListener(ExecutionList.java:156) at com.google.common.util.concurrent.ExecutionList.execute(ExecutionList.java:145) at com.google.common.util.concurrent.ListenableFutureTask.done(ListenableFutureTask.java:91) at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:384) at java.util.concurrent.FutureTask.setException(FutureTask.java:251) at java.util.concurrent.FutureTask.run(FutureTask.java:271) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NullPointerException at com.anishek.ReadWriteRunnable.call(ReadWriteRunnable.java:52) at com.anishek.ReadWriteRunnable.call(ReadWriteRunnable.java:1) at java.util.concurrent.FutureTask.run(FutureTask.java:266) ... 3 more {noformat} was (Author: aweisberg): [~anishek] There are a bunch of test cases there. 
Is there a specific one I need to run that causes the issue to occur? 2.0.13 write timeouts on driver --- Key: CASSANDRA-9023 URL: https://issues.apache.org/jira/browse/CASSANDRA-9023 Project: Cassandra Issue Type: Bug Environment: For testing, using only a single node; hardware configuration as follows: cpu : CPU(s):16 On-line CPU(s) list: 0-15 Thread(s) per core:2 Core(s) per socket:8 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU MHz: 2000.174 L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 20480K NUMA node0 CPU(s): 0-15 OS: Linux version 2.6.32-504.8.1.el6.x86_64 (mockbu...@c6b9.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) Disk: There is only a single disk in RAID, I think; space is 500 GB, used is 5 GB Reporter: anishek Assignee: Ariel Weisberg Fix For: 2.0.15 Attachments: out_system.log Initially asked @ http://www.mail-archive.com/user@cassandra.apache.org/msg41621.html Was suggested to post here. If any more details are required please let me know -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9180) Failed bootstrap/replace attempts persist entries in system.peers
[ https://issues.apache.org/jira/browse/CASSANDRA-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-9180: Assignee: Sylvain Lebresne (was: Tyler Hobbs) Failed bootstrap/replace attempts persist entries in system.peers - Key: CASSANDRA-9180 URL: https://issues.apache.org/jira/browse/CASSANDRA-9180 Project: Cassandra Issue Type: Bug Components: Core Reporter: Brandon Williams Assignee: Sylvain Lebresne Fix For: 2.0.15 In working on CASSANDRA-8336, I discovered vanilla C* has this problem. Just start a bootstrap or replace and kill it during the ring info gathering phase. System.peers, the gift that keeps on giving. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9180) Failed bootstrap/replace attempts persist entries in system.peers
[ https://issues.apache.org/jira/browse/CASSANDRA-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492950#comment-14492950 ] Brandon Williams commented on CASSANDRA-9180: - Actually, this originates from CASSANDRA-4351, which explains why it happens on 2.0.0. Failed bootstrap/replace attempts persist entries in system.peers - Key: CASSANDRA-9180 URL: https://issues.apache.org/jira/browse/CASSANDRA-9180 Project: Cassandra Issue Type: Bug Components: Core Reporter: Brandon Williams Assignee: Tyler Hobbs Fix For: 2.0.15 In working on CASSANDRA-8336, I discovered vanilla C* has this problem. Just start a bootstrap or replace and kill it during the ring info gathering phase. System.peers, the gift that keeps on giving. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9182) NPE during startup
[ https://issues.apache.org/jira/browse/CASSANDRA-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey updated CASSANDRA-9182: -- Description: Environment: * cassandra 2.1.3 Got NPE during startup. Here is steps to reproduce (however not sure if that will be enough): * start single node cluster. fill it with data (replication factor 1) * start second node. * in second node's logs: {code} ERROR [Thread-3] 2015-04-13 07:22:58,558 CassandraDaemon.java - Exception in thread Thread[Thread-3,5,main] java.lang.NullPointerException: null at org.apache.cassandra.db.PagedRangeCommand$Serializer.deserialize(PagedRangeCommand.java:165) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.PagedRangeCommand$Serializer.deserialize(PagedRangeCommand.java:124) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:168) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:150) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82) ~[apache-cassandra-2.1.3.jar:2.1.3] INFO [GossipStage:1] 2015-04-13 07:23:00,149 Gossiper.java - Node /172.30.0.86 is now part of the cluster ERROR [MigrationStage:1] 2015-04-13 07:23:00,176 MigrationTask.java - Can't send migration request: node /172.30.0.86 is down. ERROR [MigrationStage:1] 2015-04-13 07:23:00,178 MigrationTask.java - Can't send migration request: node /172.30.0.86 is down. 
INFO [HANDSHAKE-/172.30.0.86] 2015-04-13 07:23:00,184 OutboundTcpConnection.java - Handshaking version with /172.30.0.86 INFO [SharedPool-Worker-1] 2015-04-13 07:23:00,347 Gossiper.java - InetAddress /172.30.0.86 is now UP INFO [HANDSHAKE-/172.30.0.86] 2015-04-13 07:23:00,351 OutboundTcpConnection.java - Handshaking version with /172.30.0.86 INFO [SharedPool-Worker-1] 2015-04-13 07:23:00,509 Gossiper.java - InetAddress /172.30.0.86 is now UP {code} was: Got NPE during startup. Here is steps to reproduce (however not sure if that will be enough): * start single node cluster. fill it with data (replication factor 1) * start second node. * in second node's logs: {code} ERROR [Thread-3] 2015-04-13 07:22:58,558 CassandraDaemon.java - Exception in thread Thread[Thread-3,5,main] java.lang.NullPointerException: null at org.apache.cassandra.db.PagedRangeCommand$Serializer.deserialize(PagedRangeCommand.java:165) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.PagedRangeCommand$Serializer.deserialize(PagedRangeCommand.java:124) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:168) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:150) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82) ~[apache-cassandra-2.1.3.jar:2.1.3] INFO [GossipStage:1] 2015-04-13 07:23:00,149 Gossiper.java - Node /172.30.0.86 is now part of the cluster ERROR [MigrationStage:1] 2015-04-13 07:23:00,176 MigrationTask.java - Can't send migration request: node /172.30.0.86 is down. ERROR [MigrationStage:1] 2015-04-13 07:23:00,178 MigrationTask.java - Can't send migration request: node /172.30.0.86 is down. 
INFO [HANDSHAKE-/172.30.0.86] 2015-04-13 07:23:00,184 OutboundTcpConnection.java - Handshaking version with /172.30.0.86 INFO [SharedPool-Worker-1] 2015-04-13 07:23:00,347 Gossiper.java - InetAddress /172.30.0.86 is now UP INFO [HANDSHAKE-/172.30.0.86] 2015-04-13 07:23:00,351 OutboundTcpConnection.java - Handshaking version with /172.30.0.86 INFO [SharedPool-Worker-1] 2015-04-13 07:23:00,509 Gossiper.java - InetAddress /172.30.0.86 is now UP {code} NPE during startup -- Key: CASSANDRA-9182 URL: https://issues.apache.org/jira/browse/CASSANDRA-9182 Project: Cassandra Issue Type: Bug Components: Core Reporter: Andrey Environment: * cassandra 2.1.3
[jira] [Updated] (CASSANDRA-9181) Improve index versus secondary index selection
[ https://issues.apache.org/jira/browse/CASSANDRA-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-9181: --- Fix Version/s: 3.0 Labels: 2i (was: ) Improve index versus secondary index selection -- Key: CASSANDRA-9181 URL: https://issues.apache.org/jira/browse/CASSANDRA-9181 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jeremy Hanna Labels: 2i Fix For: 3.0 There is a special case for secondary indexes if you always supply the partition key. For example, if you have a family with ID a456 which has 6 family members and I have a secondary index on first name. Currently, if I do a query like this select * from families where id = 'a456' and firstname = 'alowishus'; you can see from a query trace, that it will first scan the entire cluster based on the firstname, then look for the key within that. If it's not terribly invasive, I think this would be a valid use case to narrow down the results by key first. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
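The plan the ticket asks for can be sketched as follows; this is a toy model with assumed names, not Cassandra's read path. With the partition key fully specified, reading that one partition and filtering on the indexed column locally avoids the cluster-wide index scan:

```java
// Toy model of partition-first selection (assumed names, not Cassandra code).
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class IndexSelectionSketch {
    record Row(String id, String firstname) {}

    // Partition-first plan: fetch the single partition for the key,
    // then filter on the indexed column locally.
    static List<Row> queryPartitionFirst(Map<String, List<Row>> partitions,
                                         String id, String firstname) {
        List<Row> out = new ArrayList<>();
        for (Row r : partitions.getOrDefault(id, List.of()))
            if (r.firstname().equals(firstname))
                out.add(r);
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<Row>> partitions = Map.of(
            "a456", List.of(new Row("a456", "alowishus"), new Row("a456", "mary")));
        // Touches only the a456 partition, never the other customers.
        System.out.println(queryPartitionFirst(partitions, "a456", "alowishus").size());
    }
}
```

The point of the ticket is exactly this cost difference: the index-first plan scales with cluster size, while the partition-first plan scales only with the size of one partition.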
[jira] [Updated] (CASSANDRA-8252) dtests that involve topology changes should verify system.peers on all nodes
[ https://issues.apache.org/jira/browse/CASSANDRA-8252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-8252: Fix Version/s: 2.0.15 dtests that involve topology changes should verify system.peers on all nodes Key: CASSANDRA-8252 URL: https://issues.apache.org/jira/browse/CASSANDRA-8252 Project: Cassandra Issue Type: Test Components: Tests Reporter: Brandon Williams Assignee: Shawn Kumar Fix For: 2.0.15, 2.1.5 This is especially true for replace where I've discovered it's wrong in 1.2.19, which is sad because now it's too late to fix. We've had a lot of problems with incorrect/null system.peers, so after any topology change we should verify it on every live node when everything is finished. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-9182) NPE during startup
Andrey created CASSANDRA-9182: - Summary: NPE during startup Key: CASSANDRA-9182 URL: https://issues.apache.org/jira/browse/CASSANDRA-9182 Project: Cassandra Issue Type: Bug Components: Core Reporter: Andrey Got NPE during startup. Here is steps to reproduce (however not sure if that will be enough): * start single node cluster. fill it with data (replication factor 1) * start second node. * in second node's logs: {code} ERROR [Thread-3] 2015-04-13 07:22:58,558 CassandraDaemon.java - Exception in thread Thread[Thread-3,5,main] java.lang.NullPointerException: null at org.apache.cassandra.db.PagedRangeCommand$Serializer.deserialize(PagedRangeCommand.java:165) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.db.PagedRangeCommand$Serializer.deserialize(PagedRangeCommand.java:124) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:168) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:150) ~[apache-cassandra-2.1.3.jar:2.1.3] at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82) ~[apache-cassandra-2.1.3.jar:2.1.3] INFO [GossipStage:1] 2015-04-13 07:23:00,149 Gossiper.java - Node /172.30.0.86 is now part of the cluster ERROR [MigrationStage:1] 2015-04-13 07:23:00,176 MigrationTask.java - Can't send migration request: node /172.30.0.86 is down. ERROR [MigrationStage:1] 2015-04-13 07:23:00,178 MigrationTask.java - Can't send migration request: node /172.30.0.86 is down. 
INFO [HANDSHAKE-/172.30.0.86] 2015-04-13 07:23:00,184 OutboundTcpConnection.java - Handshaking version with /172.30.0.86 INFO [SharedPool-Worker-1] 2015-04-13 07:23:00,347 Gossiper.java - InetAddress /172.30.0.86 is now UP INFO [HANDSHAKE-/172.30.0.86] 2015-04-13 07:23:00,351 OutboundTcpConnection.java - Handshaking version with /172.30.0.86 INFO [SharedPool-Worker-1] 2015-04-13 07:23:00,509 Gossiper.java - InetAddress /172.30.0.86 is now UP {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9181) Improve index versus secondary index selection
[ https://issues.apache.org/jira/browse/CASSANDRA-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremy Hanna updated CASSANDRA-9181: Reproduced In: 2.0.7 Improve index versus secondary index selection -- Key: CASSANDRA-9181 URL: https://issues.apache.org/jira/browse/CASSANDRA-9181 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jeremy Hanna Labels: 2i Fix For: 3.0 There is a special case for secondary indexes if you always supply the partition key. For example, if you have a family with ID a456 which has 6 family members and I have a secondary index on first name. Currently, if I do a query like this select * from families where id = 'a456' and firstname = 'alowishus'; you can see from a query trace, that it will first scan the entire cluster based on the firstname, then look for the key within that. If it's not terribly invasive, I think this would be a valid use case to narrow down the results by key first. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8576) Primary Key Pushdown For Hadoop
[ https://issues.apache.org/jira/browse/CASSANDRA-8576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493117#comment-14493117 ] Alex Liu commented on CASSANDRA-8576: - It's been this way from the very beginning. Internally, URL decoding is used. I don't think there's an easy way around it here. Primary Key Pushdown For Hadoop --- Key: CASSANDRA-8576 URL: https://issues.apache.org/jira/browse/CASSANDRA-8576 Project: Cassandra Issue Type: Improvement Components: Hadoop Reporter: Russell Alexander Spitzer Assignee: Alex Liu Fix For: 2.1.5 Attachments: 8576-2.1-branch.txt, 8576-trunk.txt I've heard reports from several users that they would like to have predicate pushdown functionality for hadoop (Hive in particular) based services. Example usecase Table with wide partitions, one per customer Application team has HQL they would like to run on a single customer Currently time to complete scales with number of customers since Input Format can't pushdown primary key predicate Current implementation requires a full table scan (since it can't recognize that a single partition was specified) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9180) Failed bootstrap/replace attempts persist entries in system.peers
[ https://issues.apache.org/jira/browse/CASSANDRA-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-9180: Attachment: 9081.txt Simple fix to check if the node is a ring member before updating. We still need to bail on dead states first though due to replace. Failed bootstrap/replace attempts persist entries in system.peers - Key: CASSANDRA-9180 URL: https://issues.apache.org/jira/browse/CASSANDRA-9180 Project: Cassandra Issue Type: Bug Components: Core Reporter: Brandon Williams Assignee: Sylvain Lebresne Fix For: 2.0.15 Attachments: 9081.txt In working on CASSANDRA-8336, I discovered vanilla C* has this problem. Just start a bootstrap or replace and kill it during the ring info gathering phase. System.peers, the gift that keeps on giving. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
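The fix described in the comment can be sketched like this; the names and simplified state handling are illustrative, not the attached patch:

```java
// Sketch of the guard described above (assumed names, not the attached patch):
// bail out on dead states first (still needed for replace), then only persist
// the peer when the endpoint is already a ring member.
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class PeerPersistenceSketch {
    static void onNonStatusChange(String endpoint, boolean isDeadState,
                                  Set<String> ringMembers,
                                  Map<String, String> systemPeers, String value) {
        if (isDeadState)
            return; // dead state: never persist
        if (!ringMembers.contains(endpoint))
            return; // e.g. a failed bootstrap: never reaches system.peers
        systemPeers.put(endpoint, value);
    }

    public static void main(String[] args) {
        Map<String, String> peers = new HashMap<>();
        Set<String> ring = Set.of("10.0.0.1");
        onNonStatusChange("10.0.0.2", false, ring, peers, "2.0.15"); // bootstrapping node
        onNonStatusChange("10.0.0.1", false, ring, peers, "2.0.15"); // ring member
        System.out.println(peers.keySet()); // only the ring member was persisted
    }
}
```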
[jira] [Updated] (CASSANDRA-9180) Failed bootstrap/replace attempts persist entries in system.peers
[ https://issues.apache.org/jira/browse/CASSANDRA-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-9180: Reviewer: Sylvain Lebresne (was: Brandon Williams) Assignee: Brandon Williams (was: Sylvain Lebresne) Failed bootstrap/replace attempts persist entries in system.peers - Key: CASSANDRA-9180 URL: https://issues.apache.org/jira/browse/CASSANDRA-9180 Project: Cassandra Issue Type: Bug Components: Core Reporter: Brandon Williams Assignee: Brandon Williams Fix For: 2.0.15 Attachments: 9081.txt In working on CASSANDRA-8336, I discovered vanilla C* has this problem. Just start a bootstrap or replace and kill it during the ring info gathering phase. System.peers, the gift that keeps on giving. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8576) Primary Key Pushdown For Hadoop
[ https://issues.apache.org/jira/browse/CASSANDRA-8576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493122#comment-14493122 ] Alex Liu commented on CASSANDRA-8576: - Someone from Product Management should be able to answer it. Primary Key Pushdown For Hadoop --- Key: CASSANDRA-8576 URL: https://issues.apache.org/jira/browse/CASSANDRA-8576 Project: Cassandra Issue Type: Improvement Components: Hadoop Reporter: Russell Alexander Spitzer Assignee: Alex Liu Fix For: 2.1.5 Attachments: 8576-2.1-branch.txt, 8576-trunk.txt I've heard reports from several users that they would like to have predicate pushdown functionality for hadoop (Hive in particular) based services. Example usecase Table with wide partitions, one per customer Application team has HQL they would like to run on a single customer Currently time to complete scales with number of customers since Input Format can't pushdown primary key predicate Current implementation requires a full table scan (since it can't recognize that a single partition was specified) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9183) Failure detector should detect and ignore local pauses
[ https://issues.apache.org/jira/browse/CASSANDRA-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-9183: Issue Type: Improvement (was: Bug) Failure detector should detect and ignore local pauses -- Key: CASSANDRA-9183 URL: https://issues.apache.org/jira/browse/CASSANDRA-9183 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Brandon Williams Assignee: Brandon Williams Fix For: 3.0 A local node can be paused for many reasons such as GC, and if the pause is long enough when it recovers it will think all the other nodes are dead until it gossips, causing UAE to be thrown to clients trying to use it as a coordinator. Instead, the FD can track the current time, and if the gap there becomes too large, skip marking the nodes down (reset the FD data perhaps) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-9183) Failure detector should detect and ignore local pauses
Brandon Williams created CASSANDRA-9183: --- Summary: Failure detector should detect and ignore local pauses Key: CASSANDRA-9183 URL: https://issues.apache.org/jira/browse/CASSANDRA-9183 Project: Cassandra Issue Type: Bug Components: Core Reporter: Brandon Williams Assignee: Brandon Williams A local node can be paused for many reasons such as GC, and if the pause is long enough when it recovers it will think all the other nodes are dead until it gossips, causing UAE to be thrown to clients trying to use it as a coordinator. Instead, the FD can track the current time, and if the gap there becomes too large, skip marking the nodes down (reset the FD data perhaps) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9183) Failure detector should detect and ignore local pauses
[ https://issues.apache.org/jira/browse/CASSANDRA-9183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-9183: Fix Version/s: 3.0 Failure detector should detect and ignore local pauses -- Key: CASSANDRA-9183 URL: https://issues.apache.org/jira/browse/CASSANDRA-9183 Project: Cassandra Issue Type: Bug Components: Core Reporter: Brandon Williams Assignee: Brandon Williams Fix For: 3.0 A local node can be paused for many reasons such as GC, and if the pause is long enough when it recovers it will think all the other nodes are dead until it gossips, causing UAE to be thrown to clients trying to use it as a coordinator. Instead, the FD can track the current time, and if the gap there becomes too large, skip marking the nodes down (reset the FD data perhaps) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
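The mechanism proposed in CASSANDRA-9183 above — track the time of the last heartbeat-interpretation pass, and treat an unusually large gap as evidence that the local process (not its peers) was stalled — can be sketched as follows. The class, method names, and threshold are illustrative, not Cassandra's actual FailureDetector API:

```java
// Illustrative pause-aware conviction check, not Cassandra's actual FailureDetector.
class PauseAwareDetector {
    // If the gap between two checks exceeds this, assume *we* were paused
    // (GC, VM stall). The 5s threshold is a hypothetical choice.
    static final long MAX_LOCAL_PAUSE_NANOS = 5_000_000_000L;

    private long lastCheckNanos = System.nanoTime();

    /** Returns true if the peer should be marked down; false if this check
     *  must be skipped because the local node itself appears to have paused. */
    boolean shouldConvict(long nowNanos, boolean peerLooksDead) {
        long localGap = nowNanos - lastCheckNanos;
        lastCheckNanos = nowNanos;
        if (localGap > MAX_LOCAL_PAUSE_NANOS) {
            // We stopped sampling heartbeats, so every peer *looks* dead.
            // Skip conviction (and, in the ticket's terms, reset the FD data).
            return false;
        }
        return peerLooksDead;
    }
}
```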
[jira] [Commented] (CASSANDRA-8584) Add strerror output on failed trySkipCache calls
[ https://issues.apache.org/jira/browse/CASSANDRA-8584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493197#comment-14493197 ] Ariel Weisberg commented on CASSANDRA-8584: --- Created a branch for this using NoSpamLogger https://github.com/apache/cassandra/compare/trunk...aweisberg:C-8584?expand=1 Add strerror output on failed trySkipCache calls Key: CASSANDRA-8584 URL: https://issues.apache.org/jira/browse/CASSANDRA-8584 Project: Cassandra Issue Type: Improvement Reporter: Joshua McKenzie Assignee: Ariel Weisberg Priority: Trivial Fix For: 2.1.5 Attachments: 8584_v1.txt, NoSpamLogger.java, nospamlogger.txt Since trySkipCache returns an errno directly, rather than returning -1 and setting errno like our other CLibrary calls, it's thread-safe, and we could print more helpful information when we fail to prompt the kernel to skip the page cache. That system call should always succeed unless we have an invalid fd, as the kernel is free to ignore us. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
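The branch above builds on NoSpamLogger, a rate-limited logger that suppresses repeats of the same message within a time window. As an illustration of the idea only (the real NoSpamLogger API differs; the names here are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative rate-limited log gate in the spirit of NoSpamLogger.
class RateLimitedLog {
    private final long minIntervalNanos;
    private final Map<String, Long> lastLogged = new ConcurrentHashMap<>();

    RateLimitedLog(long minIntervalNanos) {
        this.minIntervalNanos = minIntervalNanos;
    }

    /** Returns true (and records the time) if this message key may be emitted
     *  now; false if it was logged too recently and should be suppressed. */
    boolean shouldLog(String key, long nowNanos) {
        Long prev = lastLogged.get(key);
        if (prev != null && nowNanos - prev < minIntervalNanos)
            return false; // suppressed: same message logged within the window
        lastLogged.put(key, nowNanos);
        return true;
    }
}
```

The point for this ticket: a failed trySkipCache can now log the strerror text without flooding the log when the call fails on every flush.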
[jira] [Commented] (CASSANDRA-9182) NPE during startup
[ https://issues.apache.org/jira/browse/CASSANDRA-9182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493175#comment-14493175 ] Andrey commented on CASSANDRA-9182: --- Right. Sorry for not being explicit. NPE during startup -- Key: CASSANDRA-9182 URL: https://issues.apache.org/jira/browse/CASSANDRA-9182 Project: Cassandra Issue Type: Bug Components: Core Reporter: Andrey Fix For: 2.1.5 Environment: * cassandra 2.1.3 Got an NPE during startup. Here are the steps to reproduce (however, not sure if that will be enough): * start a single-node cluster and fill it with data (replication factor 1) * start a second node * in the second node's logs: {code}
ERROR [Thread-3] 2015-04-13 07:22:58,558 CassandraDaemon.java - Exception in thread Thread[Thread-3,5,main]
java.lang.NullPointerException: null
at org.apache.cassandra.db.PagedRangeCommand$Serializer.deserialize(PagedRangeCommand.java:165) ~[apache-cassandra-2.1.3.jar:2.1.3]
at org.apache.cassandra.db.PagedRangeCommand$Serializer.deserialize(PagedRangeCommand.java:124) ~[apache-cassandra-2.1.3.jar:2.1.3]
at org.apache.cassandra.net.MessageIn.read(MessageIn.java:99) ~[apache-cassandra-2.1.3.jar:2.1.3]
at org.apache.cassandra.net.IncomingTcpConnection.receiveMessage(IncomingTcpConnection.java:168) ~[apache-cassandra-2.1.3.jar:2.1.3]
at org.apache.cassandra.net.IncomingTcpConnection.receiveMessages(IncomingTcpConnection.java:150) ~[apache-cassandra-2.1.3.jar:2.1.3]
at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:82) ~[apache-cassandra-2.1.3.jar:2.1.3]
INFO [GossipStage:1] 2015-04-13 07:23:00,149 Gossiper.java - Node /172.30.0.86 is now part of the cluster
ERROR [MigrationStage:1] 2015-04-13 07:23:00,176 MigrationTask.java - Can't send migration request: node /172.30.0.86 is down.
ERROR [MigrationStage:1] 2015-04-13 07:23:00,178 MigrationTask.java - Can't send migration request: node /172.30.0.86 is down.
INFO [HANDSHAKE-/172.30.0.86] 2015-04-13 07:23:00,184 OutboundTcpConnection.java - Handshaking version with /172.30.0.86
INFO [SharedPool-Worker-1] 2015-04-13 07:23:00,347 Gossiper.java - InetAddress /172.30.0.86 is now UP
INFO [HANDSHAKE-/172.30.0.86] 2015-04-13 07:23:00,351 OutboundTcpConnection.java - Handshaking version with /172.30.0.86
INFO [SharedPool-Worker-1] 2015-04-13 07:23:00,509 Gossiper.java - InetAddress /172.30.0.86 is now UP
{code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9181) Improve index versus secondary index selection
[ https://issues.apache.org/jira/browse/CASSANDRA-9181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493102#comment-14493102 ] Jeremy Hanna commented on CASSANDRA-9181: - This appears to be a regression, as it should already use the partition key first. To give some detail on how we observed the behavior: we were using interactive query tracing in DevCenter, which is hardcoded to use CL.ONE right now. The query trace with only the partition key shows the expected behavior of contacting a replica and fulfilling the query. The query trace with the partition key and the secondary index contacts multiple hosts for coverage. I'll see if I can get the output for inclusion on the ticket. Improve index versus secondary index selection -- Key: CASSANDRA-9181 URL: https://issues.apache.org/jira/browse/CASSANDRA-9181 Project: Cassandra Issue Type: Bug Components: Core Reporter: Jeremy Hanna Labels: 2i Fix For: 3.0 There is a special case for secondary indexes if you always supply the partition key. For example, suppose you have a family with ID a456 which has 6 family members, and I have a secondary index on first name. Currently, if I do a query like this select * from families where id = 'a456' and firstname = 'alowishus'; you can see from a query trace that it will first scan the entire cluster based on the firstname, then look for the key within that. If it's not terribly invasive, I think this would be a valid use case to narrow down the results by key first. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9148) Issue when modifying UDT
[ https://issues.apache.org/jira/browse/CASSANDRA-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-9148: -- Attachment: 9148.txt Issue when modifying UDT Key: CASSANDRA-9148 URL: https://issues.apache.org/jira/browse/CASSANDRA-9148 Project: Cassandra Issue Type: Bug Components: Core Reporter: Oskar Kjellin Assignee: Jeff Jirsa Fix For: 2.1.5 Attachments: 9148.txt I'm trying out the user defined types but ran into some issues when adding a column to an existing type. Unfortunately I had to scrap the entire cluster so I cannot access it any more. After creating the UDT I added two tables using it. One was just using frozen<type>. The other was using both frozen<type> and frozen<map<String, type>>. Then I realized I needed to add a new field to the user type. Then when I tried to write to either of the two tables (setting all fields of the UDT in the DataStax Java driver) I got this error message that I could not find anywhere else but in the Cassandra code: com.datastax.driver.core.exceptions.InvalidQueryException: Invalid remaining data after end of UDT value I had to scrap my keyspace in order to be able to use it again. Could not even drop one of the tables. I know that they are frozen so we cannot modify the value of individual fields once they are written, but we must be able to modify the schema, right? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-9148) Issue when modifying UDT
[ https://issues.apache.org/jira/browse/CASSANDRA-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14491978#comment-14491978 ] Jeff Jirsa edited comment on CASSANDRA-9148 at 4/13/15 11:01 PM: - Apologies to [~blerer], didn't notice he was already assigned. Perhaps he can review. https://github.com/jeffjirsa/cassandra/commit/3802b59b5df33d546832095df010287f3cebe0f5.diff Includes unit test. The assert in db/composites/CellNames.java is to throw an AssertionError rather than an NPE if there's another code path that somehow hits this same issue. was (Author: jjirsa): Apologies to [~blerer], didn't notice he was already assigned. Perhaps he can review. Adding patch. Should apply to both 2.1 and trunk. Also on github against trunk: https://github.com/jeffjirsa/cassandra/commit/2853aa4e01dd91f15b47a829bb53c499c729c5d8.diff Or against 2.1 (identical): https://github.com/jeffjirsa/cassandra/commit/3375ac6fc1dc15f414e8d594f854dee2676711fd.diff Includes unit test. The assert in db/composites/CellNames.java is to throw an AssertionError rather than an NPE if there's another code path that somehow hits this same issue. Issue when modifying UDT Key: CASSANDRA-9148 URL: https://issues.apache.org/jira/browse/CASSANDRA-9148 Project: Cassandra Issue Type: Bug Components: Core Reporter: Oskar Kjellin Assignee: Jeff Jirsa Fix For: 2.1.5 Attachments: 9148.txt I'm trying out the user defined types but ran into some issues when adding a column to an existing type. Unfortunately I had to scrap the entire cluster so I cannot access it any more. After creating the UDT I added two tables using it. One was just using frozen<type>. The other was using both frozen<type> and frozen<map<String, type>>. Then I realized I needed to add a new field to the user type.
Then when I tried to write to either of the two tables (setting all fields of the UDT in the DataStax Java driver) I got this error message that I could not find anywhere else but in the Cassandra code: com.datastax.driver.core.exceptions.InvalidQueryException: Invalid remaining data after end of UDT value I had to scrap my keyspace in order to be able to use it again. Could not even drop one of the tables. I know that they are frozen so we cannot modify the value of individual fields once they are written, but we must be able to modify the schema, right? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
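For context on the quoted error: a UDT value is serialized as a sequence of length-prefixed fields, and "Invalid remaining data after end of UDT value" means bytes were left over after the expected fields were consumed. The sketch below illustrates that layout and why a value written before an ALTER TYPE ... ADD (i.e. with fewer fields than the current schema) must still be readable, with the missing trailing fields treated as null. This is a simplified model, not Cassandra's actual serializer:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Simplified model of the UDT wire layout: a concatenation of 4-byte
// length-prefixed fields, with length -1 meaning null.
class UdtCodec {
    static ByteBuffer pack(List<byte[]> fields) {
        int size = 0;
        for (byte[] f : fields)
            size += 4 + (f == null ? 0 : f.length);
        ByteBuffer out = ByteBuffer.allocate(size);
        for (byte[] f : fields) {
            if (f == null) { out.putInt(-1); continue; }
            out.putInt(f.length);
            out.put(f);
        }
        out.flip();
        return out;
    }

    /** Reads up to fieldCount fields. A value written before an ALTER TYPE ... ADD
     *  legitimately contains fewer fields (the missing ones read as null); bytes
     *  left over after fieldCount fields indicate a real problem, matching the
     *  "Invalid remaining data after end of UDT value" error quoted above. */
    static List<byte[]> unpack(ByteBuffer in, int fieldCount) {
        List<byte[]> out = new ArrayList<>();
        for (int i = 0; i < fieldCount; i++) {
            if (!in.hasRemaining()) { out.add(null); continue; } // field added after write
            int len = in.getInt();
            if (len < 0) { out.add(null); continue; }
            byte[] f = new byte[len];
            in.get(f);
            out.add(f);
        }
        if (in.hasRemaining())
            throw new IllegalArgumentException("Invalid remaining data after end of UDT value");
        return out;
    }
}
```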
[jira] [Commented] (CASSANDRA-8336) Quarantine nodes after receiving the gossip shutdown message
[ https://issues.apache.org/jira/browse/CASSANDRA-8336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493345#comment-14493345 ] Richard Low commented on CASSANDRA-8336: +1, thanks Brandon! Quarantine nodes after receiving the gossip shutdown message Key: CASSANDRA-8336 URL: https://issues.apache.org/jira/browse/CASSANDRA-8336 Project: Cassandra Issue Type: Bug Components: Core Reporter: Brandon Williams Assignee: Brandon Williams Fix For: 2.0.15 Attachments: 8336-v2.txt, 8336-v3.txt, 8336-v4.txt, 8336.txt, 8366-v5.txt In CASSANDRA-3936 we added a gossip shutdown announcement. The problem here is that this isn't sufficient; you can still get TOEs and have to wait on the FD to figure things out. This happens due to gossip propagation time and variance; if node X shuts down and sends the message to Y, but Z has a greater gossip version than Y for X and has not yet received the message, it can initiate gossip with Y and thus mark X alive again. I propose quarantining to solve this, however I feel it should be a -D parameter you have to specify, so as not to destroy current dev and test practices, since this will mean a node that shuts down will not be able to restart until the quarantine expires. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
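The quarantine proposed in CASSANDRA-8336 above amounts to remembering, for a configurable window, that an endpoint announced shutdown, and ignoring any gossip that would resurrect it during that window — closing the race where a peer holding a stale, higher gossip version marks the node alive again. A minimal sketch (hypothetical names, not the actual Gossiper API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative quarantine bookkeeping for cleanly shut-down peers.
class ShutdownQuarantine {
    private final long quarantineMillis;
    private final Map<String, Long> quarantinedUntil = new ConcurrentHashMap<>();

    ShutdownQuarantine(long quarantineMillis) {
        this.quarantineMillis = quarantineMillis;
    }

    /** Called when an endpoint's gossip shutdown announcement arrives. */
    void onShutdown(String endpoint, long nowMillis) {
        quarantinedUntil.put(endpoint, nowMillis + quarantineMillis);
    }

    /** Gossip state about a quarantined endpoint is ignored until the window
     *  expires, so a peer with a stale, higher gossip version for that
     *  endpoint cannot mark it alive again. */
    boolean shouldIgnoreGossipAbout(String endpoint, long nowMillis) {
        Long until = quarantinedUntil.get(endpoint);
        if (until == null)
            return false;
        if (nowMillis >= until) { quarantinedUntil.remove(endpoint); return false; }
        return true;
    }
}
```

This also makes the ticket's caveat concrete: a node restarted inside the window is still quarantined by its peers, which is why the behavior was proposed behind a -D flag.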
[jira] [Updated] (CASSANDRA-9148) Issue when modifying UDT
[ https://issues.apache.org/jira/browse/CASSANDRA-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-9148: -- Attachment: (was: 9148-2.1.txt) Issue when modifying UDT Key: CASSANDRA-9148 URL: https://issues.apache.org/jira/browse/CASSANDRA-9148 Project: Cassandra Issue Type: Bug Components: Core Reporter: Oskar Kjellin Assignee: Jeff Jirsa Fix For: 2.1.5 Attachments: 9148.txt I'm trying out the user defined types but ran into some issues when adding a column to an existing type. Unfortunately I had to scrap the entire cluster so I cannot access it any more. After creating the UDT I added two tables using it. One was just using frozen<type>. The other was using both frozen<type> and frozen<map<String, type>>. Then I realized I needed to add a new field to the user type. Then when I tried to write to either of the two tables (setting all fields of the UDT in the DataStax Java driver) I got this error message that I could not find anywhere else but in the Cassandra code: com.datastax.driver.core.exceptions.InvalidQueryException: Invalid remaining data after end of UDT value I had to scrap my keyspace in order to be able to use it again. Could not even drop one of the tables. I know that they are frozen so we cannot modify the value of individual fields once they are written, but we must be able to modify the schema, right? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8717) Top-k queries with custom secondary indexes
[ https://issues.apache.org/jira/browse/CASSANDRA-8717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493297#comment-14493297 ] Aleksey Yeschenko commented on CASSANDRA-8717: -- I'll review shortly (we have a conference this week, so expect early next week most likely). In the meantime, can you format the patch to match the project's code style - https://wiki.apache.org/cassandra/CodeStyle ? Thanks Top-k queries with custom secondary indexes --- Key: CASSANDRA-8717 URL: https://issues.apache.org/jira/browse/CASSANDRA-8717 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Andrés de la Peña Assignee: Andrés de la Peña Priority: Minor Labels: 2i, secondary_index, sort, sorting, top-k Fix For: 3.0 Attachments: 0001-Add-support-for-top-k-queries-in-2i.patch, 0002-Add-support-for-top-k-queries-in-2i.patch As presented in [Cassandra Summit Europe 2014|https://www.youtube.com/watch?v=Hg5s-hXy_-M], secondary indexes can be modified to support general top-k queries with minimal changes to the Cassandra codebase. This way, custom 2i implementations could provide relevance search, sorting by columns, etc. Top-k queries retrieve the k best results for a certain query. That implies querying the k best rows in each token range and then sorting them in order to obtain the k globally best rows. For doing that, we propose two additional methods in class SecondaryIndexSearcher:
{code:java}
public boolean requiresFullScan(List<IndexExpression> clause)
{
    return false;
}

public List<Row> sort(List<IndexExpression> clause, List<Row> rows)
{
    return rows;
}
{code}
The first one indicates if a query performed against the index requires querying all the nodes in the ring. It is necessary in top-k queries because we do not know which nodes hold the best results. The second method specifies how to sort all the partial node results according to the query.
Then we add two similar methods to the class AbstractRangeCommand:
{code:java}
this.searcher = Keyspace.open(keyspace).getColumnFamilyStore(columnFamily).indexManager.searcher(rowFilter);

public boolean requiresFullScan()
{
    return searcher == null ? false : searcher.requiresFullScan(rowFilter);
}

public List<Row> combine(List<Row> rows)
{
    return searcher == null ? trim(rows) : trim(searcher.sort(rowFilter, rows));
}
{code}
Finally, we modify StorageProxy#getRangeSlice to use the previous methods, as shown in the attached patch. We think that the proposed approach provides very useful functionality with minimal impact on the current codebase. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
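The coordinator-side combine step described above — gather each token range's k best rows, re-sort them globally with the searcher's ordering, and trim to k — can be sketched generically. The types here are illustrative stand-ins for Row and the index's relevance comparator, not the patch's actual code:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of the global combine step for a top-k query: each token range
// returns its own k best rows; the coordinator re-sorts and trims to k.
class TopKCombiner {
    static <R> List<R> combine(List<List<R>> perRangeResults,
                               Comparator<R> relevance, int k) {
        List<R> all = new ArrayList<>();
        for (List<R> partial : perRangeResults)
            all.addAll(partial);               // union of per-range bests
        all.sort(relevance);                   // global ordering
        return all.subList(0, Math.min(k, all.size()));
    }
}
```

This is why requiresFullScan must return true for such queries: every range has to be consulted, because any range might hold one of the globally best rows.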
[jira] [Updated] (CASSANDRA-9184) sstable.CorruptSSTableException
[ https://issues.apache.org/jira/browse/CASSANDRA-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Relish Chackochan updated CASSANDRA-9184: - Reviewer: Jonathan Ellis sstable.CorruptSSTableException --- Key: CASSANDRA-9184 URL: https://issues.apache.org/jira/browse/CASSANDRA-9184 Project: Cassandra Issue Type: Bug Components: Core Environment: Cassandra 1.2.16 on RHEL 6.5 Reporter: Relish Chackochan We have an 8-node Cassandra cluster on version 1.2.16 (RHEL 6.5 64-bit) on a VMware ESXi server, and we are frequently seeing SSTable corruption errors on multiple column families. Using nodetool scrub I am able to resolve the issue. I would like to know why this is happening frequently: is it related to any configuration parameters or a VMware-related issue? Can someone help with this? org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: dataSize of 3691036590893839668 starting at 362204813 would be larger than file /opt/lib/cassandra/data/X/XX/-X-ic-1144-Data.db length 486205378 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8558) deleted row still can be selected out
[ https://issues.apache.org/jira/browse/CASSANDRA-8558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493572#comment-14493572 ] Itay Adler commented on CASSANDRA-8558: --- I still experience this issue in version 2.0.14. To make sure I experience the same issue, here's what happens to me: I have a column family events with a partition key user_id and a clustering key created_at (with ordering by created_at). I deleted the rows for some partition key, and when I query with that partition key I get no results back, but when I query with the partition key and a range on created_at, I still get back the data. deleted row still can be selected out - Key: CASSANDRA-8558 URL: https://issues.apache.org/jira/browse/CASSANDRA-8558 Project: Cassandra Issue Type: Bug Components: Core Environment: 2.1.2 java version 1.7.0_55 Reporter: zhaoyan Assignee: Sylvain Lebresne Priority: Blocker Labels: qa-resolved Fix For: 2.0.12, 2.1.3 Attachments: 8558-v2_2.0.txt, 8558-v2_2.1.txt, 8558.txt First:
{code}
CREATE KEYSPACE space1 WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
CREATE TABLE space1.table3(a int, b int, c text, primary key(a,b));
CREATE KEYSPACE space2 WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
{code}
Second:
{code}
CREATE TABLE space2.table1(a int, b int, c int, primary key(a,b));
CREATE TABLE space2.table2(a int, b int, c int, primary key(a,b));
INSERT INTO space1.table3(a,b,c) VALUES(1,1,'1');
drop table space2.table1;
DELETE FROM space1.table3 where a=1 and b=1;
drop table space2.table2;
select * from space1.table3 where a=1 and b=1;
{code}
You will find that the row (a=1 and b=1) in space1.table3 is not deleted. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9184) sstable.CorruptSSTableException
[ https://issues.apache.org/jira/browse/CASSANDRA-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Relish Chackochan updated CASSANDRA-9184: - Environment: Cassandra 1.2.16 on RHEL 6.5 (was: Cassandra 1.2.16 on RHEL 6.5 64-bit) sstable.CorruptSSTableException --- Key: CASSANDRA-9184 URL: https://issues.apache.org/jira/browse/CASSANDRA-9184 Project: Cassandra Issue Type: Bug Components: Core Environment: Cassandra 1.2.16 on RHEL 6.5 Reporter: Relish Chackochan We have an 8-node Cassandra cluster on version 1.2.16 (RHEL 6.5 64-bit) on a VMware ESXi server, and we are frequently seeing SSTable corruption errors on multiple column families. Using nodetool scrub I am able to resolve the issue. I would like to know why this is happening frequently: is it related to any configuration parameters or a VMware-related issue? Can someone help with this? org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: dataSize of 3691036590893839668 starting at 362204813 would be larger than file /opt/lib/cassandra/data/X/XX/-X-ic-1144-Data.db length 486205378 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9184) sstable.CorruptSSTableException
[ https://issues.apache.org/jira/browse/CASSANDRA-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Relish Chackochan updated CASSANDRA-9184: - Environment: Cassandra 1.2.16 on RHEL 6.5 64-bit (was: RHEL 6.5 64-bit on VMWare ESXi) sstable.CorruptSSTableException --- Key: CASSANDRA-9184 URL: https://issues.apache.org/jira/browse/CASSANDRA-9184 Project: Cassandra Issue Type: Bug Components: Core Environment: Cassandra 1.2.16 on RHEL 6.5 64-bit Reporter: Relish Chackochan We have an 8-node Cassandra cluster on version 1.2.16 (RHEL 6.5 64-bit) on a VMware ESXi server, and we are frequently seeing SSTable corruption errors on multiple column families. Using nodetool scrub I am able to resolve the issue. I would like to know why this is happening frequently: is it related to any configuration parameters or a VMware-related issue? Can someone help with this? org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: dataSize of 3691036590893839668 starting at 362204813 would be larger than file /opt/lib/cassandra/data/X/XX/-X-ic-1144-Data.db length 486205378 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-9184) sstable.CorruptSSTableException
Relish Chackochan created CASSANDRA-9184: Summary: sstable.CorruptSSTableException Key: CASSANDRA-9184 URL: https://issues.apache.org/jira/browse/CASSANDRA-9184 Project: Cassandra Issue Type: Bug Components: Core Environment: RHEL 6.5 64-bit on VMWare ESXi Reporter: Relish Chackochan We have an 8-node Cassandra cluster on version 1.2.16 (RHEL 6.5 64-bit) on a VMware ESXi server, and we are frequently seeing SSTable corruption errors on multiple column families. Using nodetool scrub I am able to resolve the issue. I would like to know why this is happening frequently: is it related to any configuration parameters or a VMware-related issue? Can someone help with this? org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: dataSize of 3691036590893839668 starting at 362204813 would be larger than file /opt/lib/cassandra/data/X/XX/-X-ic-1144-Data.db length 486205378 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
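For context, the exception quoted above comes from a sanity check on a row's declared length: a corrupt on-disk size prefix is rejected before Cassandra tries to read that many bytes. A minimal sketch of such a guard (names are illustrative, not the actual SSTable reader code):

```java
// Sketch of the guard behind the quoted error: a bogus length prefix read
// from a corrupt file is caught before any attempt to read that many bytes.
class SSTableGuard {
    static void checkDataSize(long dataSize, long position, long fileLength, String path) {
        if (dataSize < 0 || position + dataSize > fileLength)
            throw new IllegalStateException(
                "dataSize of " + dataSize + " starting at " + position
                + " would be larger than file " + path + " length " + fileLength);
    }
}
```

nodetool scrub "fixes" the symptom because it rewrites the SSTable, skipping rows that fail such checks; it does not explain the underlying corruption.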
[jira] [Commented] (CASSANDRA-9023) 2.0.13 write timeouts on driver
[ https://issues.apache.org/jira/browse/CASSANDRA-9023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14493511#comment-14493511 ] anishek commented on CASSANDRA-9023: I had tested with CassandraTables#insertWithoutTTL. You are probably testing CassandraTables#readWriteOperationsWithUpdatesHavingAPercentage; this test has a prerequisite: the insert test should be run manually first (we wanted to change partition keys / entries per partition etc., so we did not explicitly add a dependency between them) to prepare Cassandra with some data and simulate the runtime use case of update/insert with some percentage (here 70%) of operations being updates. Since you would have run the test in isolation, you are getting the null pointer, as there is no data in the db. 2.0.13 write timeouts on driver --- Key: CASSANDRA-9023 URL: https://issues.apache.org/jira/browse/CASSANDRA-9023 Project: Cassandra Issue Type: Bug Environment: For testing using only Single node hardware configuration as follows: cpu : CPU(s):16 On-line CPU(s) list: 0-15 Thread(s) per core:2 Core(s) per socket:8 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU MHz: 2000.174 L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 20480K NUMA node0 CPU(s): 0-15 OS: Linux version 2.6.32-504.8.1.el6.x86_64 (mockbu...@c6b9.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) Disk: There is only a single disk in RAID; I think space is 500 GB, used is 5 GB Reporter: anishek Assignee: Ariel Weisberg Fix For: 2.0.15 Attachments: out_system.log Initially asked @ http://www.mail-archive.com/user@cassandra.apache.org/msg41621.html Was suggested to post here. If any more details are required please let me know -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9184) sstable.CorruptSSTableException
[ https://issues.apache.org/jira/browse/CASSANDRA-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Relish Chackochan updated CASSANDRA-9184: - Environment: Apache Cassandra 1.2.16 on RHEL 6.5 (was: Cassandra 1.2.16 on RHEL 6.5) sstable.CorruptSSTableException --- Key: CASSANDRA-9184 URL: https://issues.apache.org/jira/browse/CASSANDRA-9184 Project: Cassandra Issue Type: Bug Components: Core Environment: Apache Cassandra 1.2.16 on RHEL 6.5 Reporter: Relish Chackochan We have an 8-node Cassandra cluster on version 1.2.16 (RHEL 6.5 64-bit) on a VMware ESXi server, and we are frequently seeing SSTable corruption errors on multiple column families. Using nodetool scrub I am able to resolve the issue. I would like to know why this is happening frequently: is it related to any configuration parameters or a VMware-related issue? Can someone help with this? org.apache.cassandra.io.sstable.CorruptSSTableException: java.io.IOException: dataSize of 3691036590893839668 starting at 362204813 would be larger than file /opt/lib/cassandra/data/X/XX/-X-ic-1144-Data.db length 486205378 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-9046) Allow Cassandra config to be updated to restart Daemon without unloading classes
[ https://issues.apache.org/jira/browse/CASSANDRA-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492800#comment-14492800 ] Ariel Weisberg commented on CASSANDRA-9046: --- I am +1, will try and find a committer. Allow Cassandra config to be updated to restart Daemon without unloading classes Key: CASSANDRA-9046 URL: https://issues.apache.org/jira/browse/CASSANDRA-9046 Project: Cassandra Issue Type: Improvement Components: Config Reporter: Emmanuel Hugonnet Fix For: 3.0 Attachments: 0001-CASSANDRA-9046-Allow-Cassandra-config-to-be-updated-.patch Make applyConfig public in DatabaseDescriptor so that if we embed C* we can restart it after some configuration change without having to stop the whole application to unload the class which is configured once and for all in a static block. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9046) Allow Cassandra config to be updated to restart Daemon without unloading classes
[ https://issues.apache.org/jira/browse/CASSANDRA-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Emmanuel Hugonnet updated CASSANDRA-9046: - Attachment: 0001-CASSANDRA-9046-Allow-Cassandra-config-to-be-updated-.patch Fixing the whitespace issue Allow Cassandra config to be updated to restart Daemon without unloading classes Key: CASSANDRA-9046 URL: https://issues.apache.org/jira/browse/CASSANDRA-9046 Project: Cassandra Issue Type: Improvement Components: Config Reporter: Emmanuel Hugonnet Fix For: 3.0 Attachments: 0001-CASSANDRA-9046-Allow-Cassandra-config-to-be-updated-.patch Make applyConfig public in DatabaseDescriptor so that if we embed C* we can restart it after some configuration change without having to stop the whole application to unload the class which is configured once and for all in a static block. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9046) Allow Cassandra config to be updated to restart Daemon without unloading classes
[ https://issues.apache.org/jira/browse/CASSANDRA-9046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Emmanuel Hugonnet updated CASSANDRA-9046: - Attachment: (was: 0001-CASSANDRA-9046-Making-applyConfig-public-so-it-may-b.patch) Allow Cassandra config to be updated to restart Daemon without unloading classes Key: CASSANDRA-9046 URL: https://issues.apache.org/jira/browse/CASSANDRA-9046 Project: Cassandra Issue Type: Improvement Components: Config Reporter: Emmanuel Hugonnet Fix For: 3.0 Attachments: 0001-CASSANDRA-9046-Allow-Cassandra-config-to-be-updated-.patch Make applyConfig public in DatabaseDescriptor so that if we embed C* we can restart it after some configuration change without having to stop the whole application to unload the class which is configured once and for all in a static block. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8940) Inconsistent select count and select distinct
[ https://issues.apache.org/jira/browse/CASSANDRA-8940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492700#comment-14492700 ] Frens Jan Rumph commented on CASSANDRA-8940: [~blerer], sorry for the delay ... been a bit busy the past few weeks. I've whipped up a script which should reproduce my problems:
{code}
import cassandra.cluster
import cassandra.concurrent
import string
import sys

def setup_schema(session):
    print("setting up schema")
    session.execute("CREATE KEYSPACE IF NOT EXISTS count_test WITH replication = "
                    "{'class': 'SimpleStrategy', 'replication_factor': 1};")
    session.set_keyspace("count_test")
    session.execute("""
        CREATE TABLE IF NOT EXISTS tbl (
            id text,
            bucket bigint,
            offset int,
            value double,
            PRIMARY KEY ((id, bucket), offset)
        )
    """)

def insert_test_data(session):
    # setup parameters for the inserts
    ids = string.lowercase[:5]
    bucket_count = 10
    offset_count = 1000
    print('inserting data for %s ids, %s buckets and %s offsets'
          % (len(ids), bucket_count, offset_count))

    # clear the table
    session.execute("TRUNCATE tbl;")

    # prepare the insert
    insert = session.prepare("INSERT INTO tbl (id, bucket, offset, value) VALUES (?, ?, ?, ?)")

    # insert a CQL row for each tag, bucket and offset
    inserts = [
        (insert, (t, b, o, 0))
        for t in ids
        for b in xrange(bucket_count)
        for o in xrange(offset_count)
    ]
    _ = cassandra.concurrent.execute_concurrent(session, inserts)

    return len(inserts)

if __name__ == '__main__':
    contact_points = ['cas-1', 'cas-2', 'cas-3']
    session = cassandra.cluster.Cluster(contact_points).connect()
    try:
        setup_schema(session)
        inserted = insert_test_data(session)
        print("inserted %s rows" % inserted)
        for count in (session.execute("SELECT count(*) FROM tbl") for _ in range(10)):
            print('queried count was %s%s'
                  % (count[0].count, '' if count[0].count == inserted else ' (fail)'))
    finally:
        session.shutdown()
{code}
In my setup this yields (on a particular run):
{code}
setting up schema
inserting data for 5 ids, 10 buckets and 1000 offsets
inserted 50000 rows
queried count was 50000
queried count was 49396 (fail)
queried count was 49918 (fail)
queried count was 50000
queried count was 50000
queried count was 50000
queried count was 49993 (fail)
queried count was 48997 (fail)
queried count was 49772 (fail)
queried count was 49551 (fail)
{code}
As you can see, the counts vary. The number of failures seems to be correlated with the number of rows in the cluster; e.g. with only 1000 rows there are no wrong counts. As for my set-up: I'm using a three node cluster (cas-1, cas-2 and cas-3) which runs on Vagrant + LXC. I planned on writing a script using CCM to be portable, but I wasn't able to reproduce the results with CCM! I've tried both Cassandra 2.1.2 and 2.1.4 with CCM. That was rather disappointing. Or, looking at it differently, it might be considered a clue to where things go wrong ... Any of this ring a bell? Do you perhaps have pointers for me to dig deeper? Inconsistent select count and select distinct - Key: CASSANDRA-8940 URL: https://issues.apache.org/jira/browse/CASSANDRA-8940 Project: Cassandra Issue Type: Bug Components: Core Environment: 2.1.2 Reporter: Frens Jan Rumph Assignee: Benjamin Lerer When performing {{select count( * ) from ...}} I expect the results to be consistent over multiple query executions if the table at hand is not written to / deleted from in the meantime. However, in my set-up it is not. The counts returned vary considerably (several percent). The same holds for {{select distinct partition-key-columns from ...}}. I have a table in a keyspace with replication_factor = 1 which is something like:
{code}
CREATE TABLE tbl (
    id frozen<id_type>,
    bucket bigint,
    offset int,
    value double,
    PRIMARY KEY ((id, bucket), offset)
)
{code}
The frozen udt is:
{code}
CREATE TYPE id_type (
    tags map<text, text>
);
{code}
The table contains around 35k rows (I'm not trying to be funny here ...). The consistency level for the queries was ONE. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
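For reference, the expected insert count in the script above follows directly from its parameters (one CQL row per id, bucket, and offset combination); a quick arithmetic check, separate from the original script:

```python
# Expected number of CQL rows inserted by the reproduction script:
# one row per (id, bucket, offset) combination.
ids = 5          # len(string.lowercase[:5])
buckets = 10
offsets = 1000

expected = ids * buckets * offsets
print(expected)  # 50000
```

Any non-failing count printed by the script must equal this number, since the script flags every count that differs from the number of rows it inserted.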
[jira] [Comment Edited] (CASSANDRA-8576) Primary Key Pushdown For Hadoop
[ https://issues.apache.org/jira/browse/CASSANDRA-8576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492709#comment-14492709 ] Piotr Kołaczkowski edited comment on CASSANDRA-8576 at 4/13/15 5:37 PM: {noformat} @@ -79,6 +90,7 @@ public class ColumnFamilySplit extends InputSplit implements Writable, org.apach { out.writeUTF(startToken); out.writeUTF(endToken); +out.writeBoolean(partitionKeyEqQuery); out.writeInt(dataNodes.length); {noformat} This is going to break mixed-version clusters. Hadoop tasks will error out in weird ways on a cluster with some nodes 2.1.4 and some 2.1.5. This is actually very unfortunate that split serialization doesn't write a length or version header first, so we could detect it properly on the clients. Are you sure we want to merge this feature in the middle of 2.1.x? was (Author: pkolaczk): {noformat} @@ -79,6 +90,7 @@ public class ColumnFamilySplit extends InputSplit implements Writable, org.apach { out.writeUTF(startToken); out.writeUTF(endToken); +out.writeBoolean(partitionKeyEqQuery); out.writeInt(dataNodes.length); {noformat} This is going to break mixed-version clusters. Hadoop tasks will error out in weird ways on a cluster with some nodes 2.1.4 and some 2.1.5. This is actually very unfortunate that split serialization doesn't write a length or version header first, so we could detect it properly on the clients. Are you sure we want to merge this feature in the middle of 2.1.x? Are we Primary Key Pushdown For Hadoop --- Key: CASSANDRA-8576 URL: https://issues.apache.org/jira/browse/CASSANDRA-8576 Project: Cassandra Issue Type: Improvement Components: Hadoop Reporter: Russell Alexander Spitzer Assignee: Alex Liu Fix For: 2.1.5 Attachments: 8576-2.1-branch.txt, 8576-trunk.txt I've heard reports from several users that they would like to have predicate pushdown functionality for hadoop (Hive in particular) based services. 
Example use case:
* Table with wide partitions, one per customer
* Application team has HQL they would like to run on a single customer
* Currently time to complete scales with the number of customers since the Input Format can't push down the primary key predicate
* Current implementation requires a full table scan (since it can't recognize that a single partition was specified)

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8576) Primary Key Pushdown For Hadoop
[ https://issues.apache.org/jira/browse/CASSANDRA-8576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492709#comment-14492709 ] Piotr Kołaczkowski commented on CASSANDRA-8576: --- {noformat} @@ -79,6 +90,7 @@ public class ColumnFamilySplit extends InputSplit implements Writable, org.apach { out.writeUTF(startToken); out.writeUTF(endToken); +out.writeBoolean(partitionKeyEqQuery); out.writeInt(dataNodes.length); {noformat} This is going to break mixed-version clusters. Hadoop tasks will error out in weird ways on a cluster with some nodes 2.1.4 and some 2.1.5. This is actually very unfortunate that split serialization doesn't write a length or version header first, so we could detect it properly on the clients. Are you sure we want to merge this feature in the middle of 2.1.x? Are we Primary Key Pushdown For Hadoop --- Key: CASSANDRA-8576 URL: https://issues.apache.org/jira/browse/CASSANDRA-8576 Project: Cassandra Issue Type: Improvement Components: Hadoop Reporter: Russell Alexander Spitzer Assignee: Alex Liu Fix For: 2.1.5 Attachments: 8576-2.1-branch.txt, 8576-trunk.txt I've heard reports from several users that they would like to have predicate pushdown functionality for hadoop (Hive in particular) based services. Example usecase Table with wide partitions, one per customer Application team has HQL they would like to run on a single customer Currently time to complete scales with number of customers since Input Format can't pushdown primary key predicate Current implementation requires a full table scan (since it can't recognize that a single partition was specified) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
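The compatibility concern raised above can be illustrated outside of Hadoop: without a length or version header, an old reader has no way to tell a new-format split from a corrupt one. A hypothetical Python sketch of a version-prefixed layout (names and wire format are illustrative, not the actual ColumnFamilySplit encoding):

```python
import struct

def serialize_split_v2(start_token, end_token, pk_eq_query):
    # Hypothetical versioned layout: a leading version byte lets an old
    # reader detect a split it cannot parse, instead of silently
    # misinterpreting the new boolean field as part of the next field.
    payload = struct.pack(">B", 2)  # version header
    for token in (start_token, end_token):
        data = token.encode("utf-8")
        payload += struct.pack(">H", len(data)) + data
    payload += struct.pack(">?", pk_eq_query)
    return payload

def read_version(buf):
    # An old reader checks the version first and can fail fast with a
    # clear error rather than a confusing deserialization failure.
    (version,) = struct.unpack_from(">B", buf, 0)
    return version

blob = serialize_split_v2("0", "100", True)
print(read_version(blob))  # 2
```

This is exactly what the quoted diff cannot do retroactively: since existing 2.1.4 splits carry no header, a 2.1.5 reader has nothing to branch on, which is why inserting a field mid-stream breaks mixed-version clusters.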
[jira] [Updated] (CASSANDRA-9148) Issue when modifying UDT
[ https://issues.apache.org/jira/browse/CASSANDRA-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-9148: -- Attachment: 9148-2.1.txt Apologies to [~blerer], I didn't notice he was already assigned. Perhaps he can review. Adding patch. Should apply to both 2.1 and trunk. Also on github against trunk: https://github.com/jeffjirsa/cassandra/commit/2853aa4e01dd91f15b47a829bb53c499c729c5d8.diff Or against 2.1 (identical): https://github.com/jeffjirsa/cassandra/commit/3375ac6fc1dc15f414e8d594f854dee2676711fd.diff Includes unit test. The assert in db/composites/CellNames.java is there to throw an AssertionError rather than an NPE if another code path somehow hits this same issue. Issue when modifying UDT Key: CASSANDRA-9148 URL: https://issues.apache.org/jira/browse/CASSANDRA-9148 Project: Cassandra Issue Type: Bug Components: Core Reporter: Oskar Kjellin Assignee: Benjamin Lerer Fix For: 2.1.5 Attachments: 9148-2.1.txt I'm trying out the user defined types but ran into some issues when adding a column to an existing type. Unfortunately I had to scrap the entire cluster so I cannot access it any more. After creating the UDT I added two tables using it. One was just using frozen<type>. The other was using both frozen<type> and a frozen map<String, type>. Then I realized I needed to add a new field to the user type. Then when I tried to write to either of the two tables (setting all fields of the UDT in the DataStax java driver) I got this error message that I could not find anywhere else but in the cassandra code: com.datastax.driver.core.exceptions.InvalidQueryException: Invalid remaining data after end of UDT value I had to scrap my keyspace in order to be able to use it again. I could not even drop one of the tables. I know that they are frozen, so we cannot modify the value of individual fields once they are written, but we must be able to modify the schema, right? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-7304) Ability to distinguish between NULL and UNSET values in Prepared Statements
[ https://issues.apache.org/jira/browse/CASSANDRA-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oded Peer updated CASSANDRA-7304: - Attachment: 7304-06.patch Added a new patch 7304-06.patch {quote} There are a few use cases that are not tested and that seem to not work properly: List marker: SELECT * FROM %s WHERE k IN ? or SELECT * FROM %s WHERE k = ? AND i IN ? {quote} Fixed. bq. Tuple marker: SELECT * FROM %s WHERE k = ? AND (i, j) = ? Fixed. bq. Collection marker containing an unset value. For example: INSERT INTO %s (k, m) VALUES (10, ?) where the value for the map m will be map(k, unset()) The value of the map entry can be an unset value only in testing and internal code; it can't happen from client code. Unset applies only to bind variables in a CQL query. A CQL client cannot create an unset ByteBuffer as a map value since it is not a bound value. bq. Queries with CONTAINS or CONTAINS KEY conditions Added tests. {quote} There are also a few use cases that are not tested and that I have not tried: Secondary index queries on collection key or value with unset values {quote} Added tests. bq. UPDATE or DELETE queries with unset values in the WHERE clause Added tests. {quote} Nested tuple with unset values. It looks like you missed the following remark from Sylvain: In ModificationStatement.executeInternal, the body of the for loop should just be replaced by mutation.apply(). {quote} I didn't replace the for loop body since the {{apply()}} method is not in the {{IMutation}} interface. The {{apply()}} method signature is different in {{Mutation}} and {{CounterMutation}}: one is {{void}} while the other returns a {{Mutation}} instance. I chose to leave it as-is. bq. I do not understand your change in FunctionCall. We cannot know if some function can accept null or not as somebody can create a UDF for which null is a valid input. For unset values, we need to block them in FunctionCall as the existing functions will break otherwise. 
Since functions do not accept bind variables as input, only column identifiers, and a column value cannot be an unset value, I added a comment to {{FunctionCall}} stating why there is no need to check for unset variables in functions. {quote} The error messages for tuples and UDTs do not provide enough information if you have multiple of them in the query (e.g. SELECT * FROM myTable WHERE a = 0 AND (b, c) = (?, ?) AND (d, e) = (?, ?)). I am not sure how we could provide a better message but you might be able to find a way? At least for UDTs we should provide the type name in the error message. {quote} I added more information to the bind marker position, and added a test. bq. In Sets.Adder.doAdd, Lists.Appender.doAppend and Maps.Appender.doAppend there are some unused variables. Done. bq. In Sets, Lists, Maps and Constants there are several places where you use an unnecessary else. The if either ends with a return or by throwing an exception. I think it's a matter of taste. I changed it and removed the unnecessary elses. bq. I would be in favor of putting the tests in the corresponding unit tests rather than in a new one. For example I would put the collection tests in CollectionsTest. I believe that it will help people if all the tests for collections, for example, are together. It can serve as a form of documentation. Done. I moved the tests to CollectionsTest, TupleTypeTest and UserTypesTest. bq. You should add the feature to NEWS.txt Done. bq. There are still a lot of whitespace errors in your patch. Fixed. 
Ability to distinguish between NULL and UNSET values in Prepared Statements --- Key: CASSANDRA-7304 URL: https://issues.apache.org/jira/browse/CASSANDRA-7304 Project: Cassandra Issue Type: Sub-task Reporter: Drew Kutcharian Assignee: Oded Peer Labels: cql, protocolv4 Fix For: 3.0 Attachments: 7304-03.patch, 7304-04.patch, 7304-05.patch, 7304-06.patch, 7304-2.patch, 7304.patch Currently Cassandra inserts tombstones when a value of a column is bound to NULL in a prepared statement. At higher insert rates managing all these tombstones becomes an unnecessary overhead. This limits the usefulness of the prepared statements since developers have to either create multiple prepared statements (each with a different combination of column names, which at times is just unfeasible because of the sheer number of possible combinations) or fall back to using regular (non-prepared) statements. This JIRA is here to explore the possibility of either: A. Have a flag on prepared statements that once set, tells Cassandra to ignore null columns or B. Have an UNSET value which makes Cassandra skip the null columns and not tombstone them Basically, in the context
[jira] [Commented] (CASSANDRA-9126) java.lang.RuntimeException: Last written key DecoratedKey >= current key DecoratedKey
[ https://issues.apache.org/jira/browse/CASSANDRA-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492122#comment-14492122 ] Benedict commented on CASSANDRA-9126: - [~yukim]? java.lang.RuntimeException: Last written key DecoratedKey >= current key DecoratedKey - Key: CASSANDRA-9126 URL: https://issues.apache.org/jira/browse/CASSANDRA-9126 Project: Cassandra Issue Type: Bug Components: Core Reporter: srinivasu gottipati Priority: Critical Fix For: 2.0.15 Cassandra V: 2.0.14. Getting the following exceptions while trying to compact (I see this issue was raised in earlier versions and marked as closed; however, it still appears in 2.0.14). In our case, compaction is not succeeding and keeps failing with this error: {code}java.lang.RuntimeException: Last written key DecoratedKey(3462767860784856708, 354038323137333038305f3330325f31355f474d4543454f) >= current key DecoratedKey(3462334604624154281, 354036333036353334315f3336315f31355f474d4543454f) writing into {code} ... 
Stacktrace:
{code}
at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:143)
at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:166)
at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:167)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
Any help is greatly appreciated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
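The exception comes from the ordering invariant enforced when appending rows to an sstable: decorated keys must arrive in strictly increasing order. A minimal Python model of that guard (illustrative only; the real check lives in SSTableWriter.beforeAppend, and real keys are token-ordered DecoratedKeys rather than plain integers):

```python
class SSTableWriterSketch:
    """Illustrative model of the ordering guard behind the
    'Last written key >= current key' error: each appended key must
    compare strictly greater than the previously written one."""

    def __init__(self):
        self.last_written_key = None

    def append(self, decorated_key):
        # Reject any key that is not strictly greater than the last one.
        if self.last_written_key is not None and self.last_written_key >= decorated_key:
            raise RuntimeError(
                "Last written key %r >= current key %r"
                % (self.last_written_key, decorated_key))
        self.last_written_key = decorated_key

w = SSTableWriterSketch()
w.append(1)
w.append(2)
try:
    w.append(2)  # equal (hence not strictly greater): triggers the guard
except RuntimeError:
    print("rejected")
```

When compaction hits this, it means the merged input handed the writer keys out of token order, which is why the question of scrub errors and compaction strategy comes up in the follow-up comment.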
[jira] [Commented] (CASSANDRA-9126) java.lang.RuntimeException: Last written key DecoratedKey >= current key DecoratedKey
[ https://issues.apache.org/jira/browse/CASSANDRA-9126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492278#comment-14492278 ] Marcus Eriksson commented on CASSANDRA-9126: [~sgottipati] Did you get any errors while scrubbing? Are you using LCS? Anything special about your workload? java.lang.RuntimeException: Last written key DecoratedKey >= current key DecoratedKey - Key: CASSANDRA-9126 URL: https://issues.apache.org/jira/browse/CASSANDRA-9126 Project: Cassandra Issue Type: Bug Components: Core Reporter: srinivasu gottipati Priority: Critical Fix For: 2.0.15 Cassandra V: 2.0.14. Getting the following exceptions while trying to compact (I see this issue was raised in earlier versions and marked as closed; however, it still appears in 2.0.14). In our case, compaction is not succeeding and keeps failing with this error: {code}java.lang.RuntimeException: Last written key DecoratedKey(3462767860784856708, 354038323137333038305f3330325f31355f474d4543454f) >= current key DecoratedKey(3462334604624154281, 354036333036353334315f3336315f31355f474d4543454f) writing into {code} ... 
Stacktrace:
{code}
at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:143)
at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:166)
at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:167)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:198)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
Any help is greatly appreciated. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9148) Issue when modifying UDT
[ https://issues.apache.org/jira/browse/CASSANDRA-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-9148: -- Assignee: Jeff Jirsa (was: Benjamin Lerer) Issue when modifying UDT Key: CASSANDRA-9148 URL: https://issues.apache.org/jira/browse/CASSANDRA-9148 Project: Cassandra Issue Type: Bug Components: Core Reporter: Oskar Kjellin Assignee: Jeff Jirsa Fix For: 2.1.5 Attachments: 9148-2.1.txt I'm trying out the user defined types but ran into some issues when adding a column to an existing type. Unfortunately I had to scrap the entire cluster so I cannot access it any more. After creating the UDT I added two tables using it. One was just using frozen<type>. The other was using both frozen<type> and a frozen map<String, type>. Then I realized I needed to add a new field to the user type. Then when I tried to write to either of the two tables (setting all fields of the UDT in the DataStax java driver) I got this error message that I could not find anywhere else but in the cassandra code: com.datastax.driver.core.exceptions.InvalidQueryException: Invalid remaining data after end of UDT value I had to scrap my keyspace in order to be able to use it again. I could not even drop one of the tables. I know that they are frozen, so we cannot modify the value of individual fields once they are written, but we must be able to modify the schema, right? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9148) Issue when modifying UDT
[ https://issues.apache.org/jira/browse/CASSANDRA-9148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-9148: -- Reviewer: Benjamin Lerer Issue when modifying UDT Key: CASSANDRA-9148 URL: https://issues.apache.org/jira/browse/CASSANDRA-9148 Project: Cassandra Issue Type: Bug Components: Core Reporter: Oskar Kjellin Assignee: Jeff Jirsa Fix For: 2.1.5 Attachments: 9148-2.1.txt I'm trying out the user defined types but ran into some issues when adding a column to an existing type. Unfortunately I had to scrap the entire cluster so I cannot access it any more. After creating the UDT I added two tables using it. One was just using frozen<type>. The other was using both frozen<type> and a frozen map<String, type>. Then I realized I needed to add a new field to the user type. Then when I tried to write to either of the two tables (setting all fields of the UDT in the DataStax java driver) I got this error message that I could not find anywhere else but in the cassandra code: com.datastax.driver.core.exceptions.InvalidQueryException: Invalid remaining data after end of UDT value I had to scrap my keyspace in order to be able to use it again. I could not even drop one of the tables. I know that they are frozen, so we cannot modify the value of individual fields once they are written, but we must be able to modify the schema, right? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-7555) Support copy and link for commitlog archiving without forking the jvm
[ https://issues.apache.org/jira/browse/CASSANDRA-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492463#comment-14492463 ] Branimir Lambov commented on CASSANDRA-7555: Is this ready for review? If it targets 2.1, I think the hard and soft link options should be removed. I am not sure that the soft link makes sense for 3.0 either as the commit log will delete the file if archiving was successful. Support copy and link for commitlog archiving without forking the jvm - Key: CASSANDRA-7555 URL: https://issues.apache.org/jira/browse/CASSANDRA-7555 Project: Cassandra Issue Type: Improvement Reporter: Nick Bailey Assignee: Joshua McKenzie Priority: Minor Fix For: 2.1.5 Right now for commitlog archiving the user specifies a command to run and c* forks the jvm to run that command. The most common operations will be either copy or link (hard or soft). Since we can do all of these operations without forking the jvm, which is very expensive, we should have special cases for those. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
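The in-process equivalents of the common archive commands are straightforward; a hedged Python sketch (the helper and its mode parameter are illustrative, not the proposed Cassandra API):

```python
import os
import shutil
import tempfile

def archive_segment(src, dst, mode="copy"):
    # In-process equivalents of the usual archive_command choices,
    # avoiding the cost of forking a child process per segment.
    # This helper and its modes are illustrative, not Cassandra's API.
    if mode == "copy":
        shutil.copyfile(src, dst)
    elif mode == "hardlink":
        os.link(src, dst)
    elif mode == "symlink":
        os.symlink(src, dst)
    else:
        raise ValueError("unknown archive mode: %s" % mode)

d = tempfile.mkdtemp()
src = os.path.join(d, "CommitLog-5-1.log")
with open(src, "w") as f:
    f.write("segment data")
archive_segment(src, os.path.join(d, "archived.log"), "hardlink")
print(os.path.exists(os.path.join(d, "archived.log")))
```

The sketch also illustrates the review comment above: a symlink merely points at the source path, so if the commit log deletes the segment after successful archiving, the symlinked "archive" dangles, whereas a hard link or copy keeps the data alive.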
[jira] [Commented] (CASSANDRA-9095) Compressed commit log should measure compressed space used
[ https://issues.apache.org/jira/browse/CASSANDRA-9095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492429#comment-14492429 ] Branimir Lambov commented on CASSANDRA-9095: Not really, i.e. a fix for this will not help the test failure. This bug is about the internal CL size tracking, which influences when a flush is initiated. AFAICS the test is failing because the actual file on disk has a size that is different from expected. The real size is not that easy to predict as it depends on the contents of the file. Using a fixed seed for the stress call could help there, but I don't know if/how this can be done. Compressed commit log should measure compressed space used -- Key: CASSANDRA-9095 URL: https://issues.apache.org/jira/browse/CASSANDRA-9095 Project: Cassandra Issue Type: Improvement Reporter: Branimir Lambov Assignee: Branimir Lambov Priority: Minor The commit log compression option introduced in CASSANDRA-6809 does not change the way space use is measured by the commitlog, meaning that it still measures the amount of space taken by the log segments before compression. It should measure the space taken on disk, which in this case is the compressed amount. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
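The distinction at issue can be shown in miniature: for a compressible segment, the pre-compression (logical) size and the on-disk size differ substantially, so accounting based on the former misjudges actual disk usage. An illustrative comparison using Python's zlib (the specific codec is irrelevant to the point):

```python
import zlib

# Illustrative only: the commit log should track on-disk (compressed)
# size, not the pre-compression logical size of its segments.
segment = b"mutation " * 4096        # highly compressible payload
compressed = zlib.compress(segment)

logical_size = len(segment)          # what the current accounting sees
on_disk_size = len(compressed)       # what actually occupies disk space

print(on_disk_size < logical_size)   # True
```

Tracking the logical size overstates disk usage for compressible workloads, which skews any decision keyed to the configured space limit.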
[jira] [Commented] (CASSANDRA-9104) Unit test failures, trunk + Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-9104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492342#comment-14492342 ] Branimir Lambov commented on CASSANDRA-9104: {{KeyCacheTest 180}}: This is too brittle; the isWindows() check and choice in the static initialization section in SSTableRewriter should be moved to DatabaseDescriptor.applyConfig() so that getSSTablePreempiveOpenIntervalInMB() returns the OS-adjusted value. {{SSTableRewriter 283}}: Is this the right bail-out check? What happens if preemptive open is enabled, but has a large interval that wasn't reached yet? Could we also add the Linux test you did, using SSTableRewriter.overrideOpenInterval? {{CommitLogSegmentManager 471}}: Is this change necessary? bq. The recover() path was sneaking in between creation of a CommitLogSegment and addition of that segment to the CLQ inside CLSM, so when we got the list of files in the folder and attempted to filter them to just unmanaged files, that segment was considered unmanaged. We go through recovery, we try to delete the file, Windows barfs since it's memory-mapped. This sounds like a real bug, and it should show up (in another form) on Linux as well. The file should not be deleted; otherwise subsequent writes to the commit log will break. During normal commitlog startup the CLSM.createReserveSegments flag is used specifically to avoid this. During tests CL.resetUnsafe() does not clear it, and even calls CLSM.wakeManager() which makes the problem much more likely. The proper fix is to clear the flag at the start of CLSM.stopUnsafe() and only call CLSM.enableReserveSegmentCreation() if the reset is not to be followed by CL.recover() (which finishes with that call). In either case CLSM.wakeManager() shouldn't be called directly. 
Unit test failures, trunk + Windows --- Key: CASSANDRA-9104 URL: https://issues.apache.org/jira/browse/CASSANDRA-9104 Project: Cassandra Issue Type: Test Reporter: Joshua McKenzie Assignee: Joshua McKenzie Labels: Windows Fix For: 3.0 Attachments: 9104_CFSTest.txt, 9104_KeyCache.txt, 9104_RecoveryManager.txt, 9104_ScrubTest.txt A variety of different test failures have cropped up over the past 2-3 weeks: h6. -org.apache.cassandra.cql3.UFTest FAILED (timeout)- // No longer failing / timing out h6. testLoadNewSSTablesAvoidsOverwrites(org.apache.cassandra.db.ColumnFamilyStoreTest): FAILED
{noformat}
12 SSTables unexpectedly exist
junit.framework.AssertionFailedError: 12 SSTables unexpectedly exist
at org.apache.cassandra.db.ColumnFamilyStoreTest.testLoadNewSSTablesAvoidsOverwrites(ColumnFamilyStoreTest.java:1896)
{noformat}
h6. org.apache.cassandra.db.KeyCacheTest FAILED
{noformat}
expected:<4> but was:<2>
junit.framework.AssertionFailedError: expected:<4> but was:<2>
at org.apache.cassandra.db.KeyCacheTest.assertKeyCacheSize(KeyCacheTest.java:221)
at org.apache.cassandra.db.KeyCacheTest.testKeyCache(KeyCacheTest.java:181)
{noformat}
h6. 
RecoveryManagerTest:
{noformat}
org.apache.cassandra.db.RecoveryManagerTest FAILED
org.apache.cassandra.db.RecoveryManager2Test FAILED
org.apache.cassandra.db.RecoveryManager3Test FAILED
org.apache.cassandra.db.RecoveryManagerTruncateTest FAILED

All are the following:
java.nio.file.AccessDeniedException: build\test\cassandra\commitlog;0\CommitLog-5-1427995105229.log
FSWriteError in build\test\cassandra\commitlog;0\CommitLog-5-1427995105229.log
at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:128)
at org.apache.cassandra.db.commitlog.CommitLogSegmentManager.recycleSegment(CommitLogSegmentManager.java:360)
at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:156)
at org.apache.cassandra.db.RecoveryManagerTest.testNothingToRecover(RecoveryManagerTest.java:75)
Caused by: java.nio.file.AccessDeniedException: build\test\cassandra\commitlog;0\CommitLog-5-1427995105229.log
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
at java.nio.file.Files.delete(Files.java:1079)
at org.apache.cassandra.io.util.FileUtils.deleteWithConfirm(FileUtils.java:124)
{noformat}
h6. testScrubCorruptedCounterRow(org.apache.cassandra.db.ScrubTest): FAILED {noformat} Expecting new size
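The CommitLogSegmentManager race behind these RecoveryManager failures comes down to a segment-creation guard of the form availableSegments.isEmpty() && (activeSegments.isEmpty() || createReserveSegments): when both collections are empty (fresh start, or a unit-test reset), the condition holds regardless of the reserve flag, so the manager can hand recover() a brand-new, unmanaged segment. A minimal sketch of that logic (illustrative Python; names follow the discussion, not the actual Java):

```python
def creates_segment(available_segments, active_segments, create_reserve_segments):
    # Sketch of the segment-creation guard: with both lists empty the
    # condition is true even when the reserve flag is cleared, which is
    # how a new segment can race the recover() file scan.
    return not available_segments and (not active_segments or create_reserve_segments)

# Fresh start or unit-test reset, reserve creation disabled:
print(creates_segment([], [], False))  # True: a segment is created anyway
```

The proposed fix keeps reserve creation disabled across the reset/recover window and signals recovery explicitly when a segment becomes available, rather than relying on startup timing.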
[jira] [Comment Edited] (CASSANDRA-7304) Ability to distinguish between NULL and UNSET values in Prepared Statements
[ https://issues.apache.org/jira/browse/CASSANDRA-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492338#comment-14492338 ] Benjamin Lerer edited comment on CASSANDRA-7304 at 4/13/15 2:05 PM: {quote} Since functions do not accept bind variables as input, only column identifiers, and A column value can not be an unset value. I added a comment to FunctionCall stating why there is no need in checking for unset variables in functions. {quote} Functions do accept bind variables. You can check in the {{Cql.g}} file or with the java driver. was (Author: blerer): {quote} Since functions do not accept bind variables as input, only column identifiers, and A column value can not be an unset value. I added a comment to FunctionCall stating why there is no need in checking for unset variables in functions. {quote} Function do accept bind variables. You can check in the {{Cql.g}} file or with the java driver. Ability to distinguish between NULL and UNSET values in Prepared Statements --- Key: CASSANDRA-7304 URL: https://issues.apache.org/jira/browse/CASSANDRA-7304 Project: Cassandra Issue Type: Sub-task Reporter: Drew Kutcharian Assignee: Oded Peer Labels: cql, protocolv4 Fix For: 3.0 Attachments: 7304-03.patch, 7304-04.patch, 7304-05.patch, 7304-06.patch, 7304-2.patch, 7304.patch Currently Cassandra inserts tombstones when a value of a column is bound to NULL in a prepared statement. At higher insert rates managing all these tombstones becomes an unnecessary overhead. This limits the usefulness of the prepared statements since developers have to either create multiple prepared statements (each with a different combination of column names, which at times is just unfeasible because of the sheer number of possible combinations) or fall back to using regular (non-prepared) statements. This JIRA is here to explore the possibility of either: A. Have a flag on prepared statements that once set, tells Cassandra to ignore null columns or B. 
Have an UNSET value which makes Cassandra skip the null columns and not tombstone them Basically, in the context of a prepared statement, a null value means delete, but we don’t have anything that means ignore (besides creating a new prepared statement without the ignored column). Please refer to the original conversation on DataStax Java Driver mailing list for more background: https://groups.google.com/a/lists.datastax.com/d/topic/java-driver-user/cHE3OOSIXBU/discussion *EDIT 18/12/14 - [~odpeer] Implementation Notes:* The motivation hasn't changed. Protocol version 4 specifies that bind variables do not require having a value when executing a statement. Bind variables without a value are called 'unset'. The 'unset' bind variable is serialized as the int value '-2' without following bytes. \\ \\ * An unset bind variable in an EXECUTE or BATCH request ** On a {{value}} does not modify the value and does not create a tombstone ** On the {{ttl}} clause is treated as 'unlimited' ** On the {{timestamp}} clause is treated as 'now' ** On a map key or a list index throws {{InvalidRequestException}} ** On a {{counter}} increment or decrement operation does not change the counter value, e.g. {{UPDATE my_tab SET c = c - ? WHERE k = 1}} does not change the value of counter {{c}} ** On a tuple field or UDT field throws {{InvalidRequestException}} * An unset bind variable in a QUERY request ** On a partition column, clustering column or index column in the {{WHERE}} clause throws {{InvalidRequestException}} ** On the {{limit}} clause is treated as 'unlimited' -- This message was sent by Atlassian JIRA (v6.3.4#6332)
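The wire encoding described in the implementation notes is easy to sketch: protocol v4 marks an unset bind variable with the int length -2 and no payload, alongside the existing convention of -1 for null. An illustrative Python serializer (the UNSET sentinel here stands in for a driver-provided constant and is not any specific driver's API):

```python
import struct

UNSET = object()  # illustrative sentinel for an 'unset' bind variable

def serialize_bind_value(value):
    # Per the notes above: protocol v4 writes an unset bind variable as
    # the int value -2 with no following bytes; null is -1; everything
    # else is a signed 32-bit length prefix followed by the bytes.
    if value is UNSET:
        return struct.pack(">i", -2)
    if value is None:
        return struct.pack(">i", -1)
    data = value if isinstance(value, bytes) else str(value).encode("utf-8")
    return struct.pack(">i", len(data)) + data

print(struct.unpack(">i", serialize_bind_value(UNSET))[0])  # -2
```

Because -2 carries no payload, a server receiving it can distinguish "leave this column alone" (unset) from "delete this column" (null) without any change to the rest of the message layout.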
[jira] [Commented] (CASSANDRA-8984) Introduce Transactional API for behaviours that can corrupt system state
[ https://issues.apache.org/jira/browse/CASSANDRA-8984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492366#comment-14492366 ] Joshua McKenzie commented on CASSANDRA-8984: Your responses to "prove it" are solid. I'm +1 w/nits above addressed. Glad to hear that you appreciate the "prove it" style of back-and-forth - I think all of us could benefit from some more of that. :) Introduce Transactional API for behaviours that can corrupt system state Key: CASSANDRA-8984 URL: https://issues.apache.org/jira/browse/CASSANDRA-8984 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Benedict Fix For: 2.1.5 Attachments: 8984_windows_timeout.txt As a penultimate (and probably final for 2.1, if we agree to introduce it there) round of changes to the internals managing sstable writing, I've introduced a new API called Transactional that I hope will make it much easier to write correct behaviour. As things stand we conflate a lot of behaviours into methods like close - the recent changes unpicked some of these, but didn't go far enough. My proposal here introduces an interface designed to support four actions (on top of their normal function): * prepareToCommit * commit * abort * cleanup In normal operation, once we have finished constructing a state change we call prepareToCommit; once all such state changes are prepared, we call commit. If at any point anything fails, abort is called. In _either_ case, cleanup is called at the very last. These transactional objects are all AutoCloseable, with the behaviour being to rollback any changes unless commit has completed successfully. The changes are actually less invasive than it might sound, since we did recently introduce abort in some places, as well as have commit like methods. This simply formalises the behaviour, and makes it consistent between all objects that interact in this way. 
Much of the code change is boilerplate, such as moving an object into a try-declaration, although the change is still non-trivial. What it _does_ do is eliminate a _lot_ of special casing that we have had since 2.1 was released. The data tracker API changes and compaction leftover cleanups should finish the job of making this much easier to reason about, but this change I think is worth considering for 2.1, since we've just overhauled this entire area (and not released these changes), and this change is essentially just the finishing touches, so the risk is minimal and the potential gains reasonably significant. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
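The prepareToCommit/commit/abort/cleanup lifecycle described above can be sketched as a small abstract class. This is a hypothetical illustration of the idea only, not Cassandra's actual Transactional API; all names here are invented:

```java
import java.util.ArrayList;
import java.util.List;

public class TxnDemo {
    // Illustrative lifecycle: commit must complete, otherwise close() rolls back.
    static abstract class Txn implements AutoCloseable {
        private boolean committed;
        protected abstract void doCommit();
        protected abstract void doAbort();
        protected abstract void doCleanup();
        public void prepareToCommit() { /* flush, sync, validate... */ }
        public final void commit() { doCommit(); committed = true; }
        // try-with-resources guarantees: abort unless commit completed,
        // then cleanup runs at the very end in either case.
        @Override public final void close() {
            if (!committed) doAbort();
            doCleanup();
        }
    }

    static final List<String> log = new ArrayList<>();
    static class DemoTxn extends Txn {
        protected void doCommit()  { log.add("commit"); }
        protected void doAbort()   { log.add("abort"); }
        protected void doCleanup() { log.add("cleanup"); }
    }

    public static void main(String[] args) {
        // Happy path: prepare, commit; close() only cleans up.
        try (DemoTxn t = new DemoTxn()) { t.prepareToCommit(); t.commit(); }
        // Failure path: commit never reached, so close() aborts then cleans up.
        try (DemoTxn t = new DemoTxn()) { t.prepareToCommit(); }
        System.out.println(log); // [commit, cleanup, abort, cleanup]
    }
}
```

The try-declaration move mentioned above is exactly what makes the rollback automatic: leaving the block by any path, including an exception, drives the abort/cleanup sequence.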
[jira] [Commented] (CASSANDRA-7304) Ability to distinguish between NULL and UNSET values in Prepared Statements
[ https://issues.apache.org/jira/browse/CASSANDRA-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492338#comment-14492338 ] Benjamin Lerer commented on CASSANDRA-7304: --- {quote} Since functions do not accept bind variables as input, only column identifiers, a column value can not be an unset value. I added a comment to FunctionCall stating why there is no need in checking for unset variables in functions. {quote} Functions do accept bind variables. You can check in the {{Cql.g}} file or with the java driver. Ability to distinguish between NULL and UNSET values in Prepared Statements --- Key: CASSANDRA-7304 URL: https://issues.apache.org/jira/browse/CASSANDRA-7304 Project: Cassandra Issue Type: Sub-task Reporter: Drew Kutcharian Assignee: Oded Peer Labels: cql, protocolv4 Fix For: 3.0 Attachments: 7304-03.patch, 7304-04.patch, 7304-05.patch, 7304-06.patch, 7304-2.patch, 7304.patch Currently Cassandra inserts tombstones when a value of a column is bound to NULL in a prepared statement. At higher insert rates managing all these tombstones becomes an unnecessary overhead. This limits the usefulness of the prepared statements since developers have to either create multiple prepared statements (each with a different combination of column names, which at times is just unfeasible because of the sheer number of possible combinations) or fall back to using regular (non-prepared) statements. This JIRA is here to explore the possibility of either: A. Have a flag on prepared statements that once set, tells Cassandra to ignore null columns or B. Have an UNSET value which makes Cassandra skip the null columns and not tombstone them Basically, in the context of a prepared statement, a null value means delete, but we don’t have anything that means ignore (besides creating a new prepared statement without the ignored column). 
Please refer to the original conversation on DataStax Java Driver mailing list for more background: https://groups.google.com/a/lists.datastax.com/d/topic/java-driver-user/cHE3OOSIXBU/discussion *EDIT 18/12/14 - [~odpeer] Implementation Notes:* The motivation hasn't changed. Protocol version 4 specifies that bind variables do not require having a value when executing a statement. Bind variables without a value are called 'unset'. The 'unset' bind variable is serialized as the int value '-2' without following bytes. \\ \\ * An unset bind variable in an EXECUTE or BATCH request ** On a {{value}} does not modify the value and does not create a tombstone ** On the {{ttl}} clause is treated as 'unlimited' ** On the {{timestamp}} clause is treated as 'now' ** On a map key or a list index throws {{InvalidRequestException}} ** On a {{counter}} increment or decrement operation does not change the counter value, e.g. {{UPDATE my_tab SET c = c - ? WHERE k = 1}} does not change the value of counter {{c}} ** On a tuple field or UDT field throws {{InvalidRequestException}} * An unset bind variable in a QUERY request ** On a partition column, clustering column or index column in the {{WHERE}} clause throws {{InvalidRequestException}} ** On the {{limit}} clause is treated as 'unlimited' -- This message was sent by Atlassian JIRA (v6.3.4#6332)
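The wire framing of unset values described in the notes above ("the int value '-2' without following bytes") can be sketched as follows. This is hypothetical helper code, not the protocol implementation; only the length markers (-1 for null, -2 for unset) come from the notes:

```java
import java.nio.ByteBuffer;

public class BindValueFraming {
    // [bytes] encoding: a 4-byte signed length, then that many payload bytes.
    static ByteBuffer set(byte[] value) {
        ByteBuffer b = ByteBuffer.allocate(4 + value.length);
        b.putInt(value.length).put(value).flip();
        return b;
    }
    // Length -1, no payload: an explicit null (tombstones the column).
    static ByteBuffer nul()   { return marker(-1); }
    // Length -2, no payload: unset (the column is left untouched).
    static ByteBuffer unset() { return marker(-2); }
    private static ByteBuffer marker(int len) {
        ByteBuffer b = ByteBuffer.allocate(4);
        b.putInt(len).flip();
        return b;
    }

    public static void main(String[] args) {
        System.out.println(set(new byte[]{7}).remaining()); // 5: length + 1 byte
        System.out.println(nul().getInt());                 // -1
        System.out.println(unset().getInt());               // -2, no trailing bytes
    }
}
```

The distinction matters on the server side: a -1 frame means "delete" (tombstone), while a -2 frame means "ignore this column entirely", which is what lets one prepared statement serve every combination of present and absent columns.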
[jira] [Commented] (CASSANDRA-9023) 2.0.13 write timeouts on driver
[ https://issues.apache.org/jira/browse/CASSANDRA-9023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492398#comment-14492398 ] Philip Thompson commented on CASSANDRA-9023: No, this bug is still Open. 2.0.15 is the targeted fix version. 2.0.13 write timeouts on driver --- Key: CASSANDRA-9023 URL: https://issues.apache.org/jira/browse/CASSANDRA-9023 Project: Cassandra Issue Type: Bug Environment: For testing using only Single node hardware configuration as follows: cpu : CPU(s):16 On-line CPU(s) list: 0-15 Thread(s) per core:2 Core(s) per socket:8 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU MHz: 2000.174 L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 20480K NUMA node0 CPU(s): 0-15 OS: Linux version 2.6.32-504.8.1.el6.x86_64 (mockbu...@c6b9.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) Disk: There only single disk in Raid i think space is 500 GB used is 5 GB Reporter: anishek Assignee: Ariel Weisberg Fix For: 2.0.15 Attachments: out_system.log Initially asked @ http://www.mail-archive.com/user@cassandra.apache.org/msg41621.html Was suggested to post here. If any more details are required please let me know -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-9178) Test exposed JMX methods
Carl Yeksigian created CASSANDRA-9178: - Summary: Test exposed JMX methods Key: CASSANDRA-9178 URL: https://issues.apache.org/jira/browse/CASSANDRA-9178 Project: Cassandra Issue Type: Test Reporter: Carl Yeksigian [~thobbs] added support for JMX testing in dtests, and we have seen issues related to nodetool testing in various different stages of execution. Tests which exercise the different methods which nodetool calls should be added to catch those issues early. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9132) resumable_bootstrap_test can hang
[ https://issues.apache.org/jira/browse/CASSANDRA-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-9132: -- Since Version: 2.0.0 Fix Version/s: 2.0.15 resumable_bootstrap_test can hang - Key: CASSANDRA-9132 URL: https://issues.apache.org/jira/browse/CASSANDRA-9132 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Tyler Hobbs Assignee: Yuki Morishita Fix For: 2.0.15 Attachments: 9132-2.0.txt The {{bootstrap_test.TestBootstrap.resumable_bootstrap_test}} can hang sometimes. It looks like the following line never completes: {noformat} node3.watch_log_for("Listening for thrift clients...") {noformat} I'm not familiar enough with the recent bootstrap changes to know why that's not happening. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9132) resumable_bootstrap_test can hang
[ https://issues.apache.org/jira/browse/CASSANDRA-9132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-9132: -- Attachment: 9132-2.0.txt Streaming can get stuck retrying to fetch a file from a downed node. The attached patch stops retrying when reading the file from the socket fails. This goes back to 2.0, so the attached patch is for the cassandra-2.0 branch. resumable_bootstrap_test can hang - Key: CASSANDRA-9132 URL: https://issues.apache.org/jira/browse/CASSANDRA-9132 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Tyler Hobbs Assignee: Yuki Morishita Fix For: 2.0.15 Attachments: 9132-2.0.txt The {{bootstrap_test.TestBootstrap.resumable_bootstrap_test}} can hang sometimes. It looks like the following line never completes: {noformat} node3.watch_log_for("Listening for thrift clients...") {noformat} I'm not familiar enough with the recent bootstrap changes to know why that's not happening. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
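The behavioural change can be sketched in miniature. This is a hypothetical shape of the policy only (the real fix is in the attached 9132-2.0.txt patch to the streaming code): once a read from the peer's socket has failed, retrying can only re-contact a downed node, so the fetch should give up rather than loop:

```java
import java.io.IOException;

public class FetchPolicy {
    interface Fetch { void read() throws IOException; }

    static int attempts;

    // Old shape: blindly retry, which can spin or hang against a downed peer.
    static boolean fetchWithRetry(Fetch f, int maxRetries) {
        for (int i = 0; i <= maxRetries; i++) {
            attempts++;
            try { f.read(); return true; }
            catch (IOException e) { /* retry the transfer */ }
        }
        return false;
    }

    // Patched shape: a failed socket read is terminal for this session.
    static boolean fetchNoRetry(Fetch f) {
        attempts++;
        try { f.read(); return true; }
        catch (IOException e) { return false; }
    }

    public static void main(String[] args) {
        Fetch downedNode = () -> { throw new IOException("connection reset"); };
        attempts = 0;
        fetchWithRetry(downedNode, 3);
        System.out.println(attempts); // 4 attempts against a dead peer
        attempts = 0;
        fetchNoRetry(downedNode);
        System.out.println(attempts); // 1: fail fast, let the session end
    }
}
```

Failing fast here is what lets the bootstrap surface the error instead of the dtest's watch_log_for blocking forever.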
[jira] [Commented] (CASSANDRA-7555) Support copy and link for commitlog archiving without forking the jvm
[ https://issues.apache.org/jira/browse/CASSANDRA-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14492497#comment-14492497 ] Joshua McKenzie commented on CASSANDRA-7555: Cancelled patch and targeted 3.0. I'll remove the soft-linking; I agree that it doesn't make a lot of sense in retrospect. Support copy and link for commitlog archiving without forking the jvm - Key: CASSANDRA-7555 URL: https://issues.apache.org/jira/browse/CASSANDRA-7555 Project: Cassandra Issue Type: Improvement Reporter: Nick Bailey Assignee: Joshua McKenzie Priority: Minor Fix For: 3.0 Right now for commitlog archiving the user specifies a command to run and c* forks the jvm to run that command. The most common operations will be either copy or link (hard or soft). Since we can do all of these operations without forking the jvm, which is very expensive, we should have special cases for those. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
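Both special cases the ticket asks for are single NIO calls, which is what makes skipping the fork cheap. A sketch under assumed names (hypothetical helpers, not the eventual patch):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ArchiveOps {
    // Copy the segment into the archive directory, replacing any stale copy.
    static Path archiveByCopy(Path segment, Path archiveDir) throws IOException {
        return Files.copy(segment, archiveDir.resolve(segment.getFileName()),
                          StandardCopyOption.REPLACE_EXISTING);
    }

    // Hard-link: O(1), no data copied, but link and target must share a filesystem.
    static Path archiveByLink(Path segment, Path archiveDir) throws IOException {
        return Files.createLink(archiveDir.resolve(segment.getFileName()), segment);
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("commitlog");
        Path copyDir = Files.createTempDirectory("archive-copy");
        Path linkDir = Files.createTempDirectory("archive-link");
        Path seg = Files.write(src.resolve("CommitLog-1.log"), new byte[]{1, 2, 3});
        System.out.println(Files.size(archiveByCopy(seg, copyDir))); // 3
        System.out.println(Files.size(archiveByLink(seg, linkDir))); // 3
    }
}
```

Dropping the soft-link case, as agreed above, also sidesteps the dangling-symlink problem when the original segment is later recycled or deleted.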
[jira] [Created] (CASSANDRA-9179) Unable to point in time restore if keyspace has been created
Jon Moses created CASSANDRA-9179: Summary: Unable to point in time restore if keyspace has been created Key: CASSANDRA-9179 URL: https://issues.apache.org/jira/browse/CASSANDRA-9179 Project: Cassandra Issue Type: Bug Reporter: Jon Moses With Cassandra 2.1, and the addition of the CF UUID, the ability to do a point in time restore by restoring a snapshot and replaying commitlogs is lost if the keyspace has been dropped and recreated. When the keyspace is recreated, the cf_id changes, and the commitlog replay mechanism skips the desired mutations as the cf_id no longer matches what's present in the schema. There should exist a way to inform the replay that you want the mutations replayed even if the cf_id doesn't match. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-9179) Unable to point in time restore if table/cf has been recreated
[ https://issues.apache.org/jira/browse/CASSANDRA-9179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jon Moses updated CASSANDRA-9179: - Description: With Cassandra 2.1, and the addition of the CF UUID, the ability to do a point in time restore by restoring a snapshot and replaying commitlogs is lost if the table has been dropped and recreated. When the table is recreated, the cf_id changes, and the commitlog replay mechanism skips the desired mutations as the cf_id no longer matches what's present in the schema. There should exist a way to inform the replay that you want the mutations replayed even if the cf_id doesn't match. was: With Cassandra 2.1, and the addition of the CF UUID, the ability to do a point in time restore by restoring a snapshot and replaying commitlogs is lost if the keyspace has been dropped and recreated. When the keyspace is recreated, the cf_id changes, and the commitlog replay mechanism skips the desired mutations as the cf_id no longer matches what's present in the schema. There should exist a way to inform the replay that you want the mutations replayed even if the cf_id doesn't match. Summary: Unable to point in time restore if table/cf has been recreated (was: Unable to point in time restore if keyspace has been created) Unable to point in time restore if table/cf has been recreated Key: CASSANDRA-9179 URL: https://issues.apache.org/jira/browse/CASSANDRA-9179 Project: Cassandra Issue Type: Bug Reporter: Jon Moses With Cassandra 2.1, and the addition of the CF UUID, the ability to do a point in time restore by restoring a snapshot and replaying commitlogs is lost if the table has been dropped and recreated. When the table is recreated, the cf_id changes, and the commitlog replay mechanism skips the desired mutations as the cf_id no longer matches what's present in the schema. There should exist a way to inform the replay that you want the mutations replayed even if the cf_id doesn't match. 
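The skip the ticket describes boils down to an id comparison during commitlog replay: a recreated table gets a fresh cf_id, so old mutations stop matching even though the table name is unchanged. A hypothetical sketch of that filter (illustrative names only, not Cassandra's replay code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class ReplayFilter {
    // cf_id recorded inside the serialized mutation vs. the ids the current
    // schema knows about. No match means the mutation is silently dropped --
    // the behaviour the ticket wants a way to override for point-in-time restore.
    static boolean shouldReplay(UUID mutationCfId, Map<UUID, String> liveTables) {
        return liveTables.containsKey(mutationCfId);
    }

    public static void main(String[] args) {
        UUID oldId = UUID.randomUUID(); // cf_id at snapshot time
        UUID newId = UUID.randomUUID(); // cf_id after DROP + CREATE
        Map<UUID, String> schema = new HashMap<>();
        schema.put(newId, "ks.my_table");
        System.out.println(shouldReplay(oldId, schema)); // false: mutation skipped
        System.out.println(shouldReplay(newId, schema)); // true
    }
}
```

An override of the kind requested would presumably map the old id to the new one (by table name) instead of dropping the mutation outright.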
-- This message was sent by Atlassian JIRA (v6.3.4#6332)