[jira] [Commented] (CASSANDRA-12662) OOM when using SASI index
[ https://issues.apache.org/jira/browse/CASSANDRA-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15822705#comment-15822705 ]

Caleb Rackliffe commented on CASSANDRA-12662:
---------------------------------------------

[~mkrupits_jb] Have you tried attaching {{strace}} to the running Cassandra process to track down the specific reason for the mapping failure? For instance, the {{ENOMEM}} error code could just mean Cassandra is mapping too many files. (The limit is usually enforced by the kernel in {{/proc/sys/vm/max_map_count}}.)

> OOM when using SASI index
> -------------------------
>
>                 Key: CASSANDRA-12662
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12662
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Linux, 4 CPU cores, 16Gb RAM; Cassandra process utilizes ~8Gb, of which ~4Gb is Java heap
>            Reporter: Maxim Podkolzine
>            Priority: Critical
>             Fix For: 3.x
>
>         Attachments: memory-dump.png
>
>
> 2.8Gb of the heap is taken by the index data, pending flush (see the screenshot). As a result the node fails with OOM.
> Questions:
> - Why can't Cassandra keep up with the inserted data and flush it?
> - What resources/configuration should be changed to improve the performance?

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
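The {{max_map_count}} check suggested in the comment can be done mechanically. The sketch below is illustrative only (Linux-specific, and it inspects its own process where in practice you would pass the Cassandra PID): it counts the entries in {{/proc/<pid>/maps}} and compares them against the kernel ceiling that makes {{mmap()}} fail with {{ENOMEM}}.

```python
import os

# Hedged sketch (Linux-only): compare a process's current mapping count with
# the kernel ceiling that makes mmap() fail with ENOMEM. It inspects its own
# PID for demonstration; for the ticket's scenario pass the Cassandra PID.

def count_memory_mappings(pid):
    """One line per mapped region in /proc/<pid>/maps."""
    with open("/proc/%d/maps" % pid) as f:
        return sum(1 for _ in f)

def max_map_count():
    """Kernel's per-process mapping limit."""
    with open("/proc/sys/vm/max_map_count") as f:
        return int(f.read())

mappings = count_memory_mappings(os.getpid())
limit = max_map_count()
headroom = limit - mappings  # headroom near zero => mapping failures likely
```

If the headroom is small, either raise {{/proc/sys/vm/max_map_count}} or reduce the number of files Cassandra maps.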
[jira] [Updated] (CASSANDRA-13015) improved compactions metrics
[ https://issues.apache.org/jira/browse/CASSANDRA-13015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jon Haddad updated CASSANDRA-13015:
-----------------------------------
    Reviewer: Jason Brown

> improved compactions metrics
> ----------------------------
>
>                 Key: CASSANDRA-13015
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13015
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Jon Haddad
>            Assignee: Jon Haddad
>             Fix For: 4.0
>
>
> Compaction stats are missing some useful metrics:
> * Number of sstables dropped out of compaction due to disk space
> * Number of compactions that had to drop sstables
> * Compactions aborted (due to reduced scope failure)
> There will be more; I'll edit the description with the list as I figure it out.
[jira] [Updated] (CASSANDRA-11983) Migration task failed to complete
[ https://issues.apache.org/jira/browse/CASSANDRA-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeff Jirsa updated CASSANDRA-11983:
-----------------------------------
    Fix Version/s: 3.x
                   3.0.x

> Migration task failed to complete
> ---------------------------------
>
>                 Key: CASSANDRA-11983
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11983
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Lifecycle
>         Environment: Docker / Kubernetes running
> Linux cassandra-21 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-1 (2016-03-06) x86_64 GNU/Linux
> openjdk version "1.8.0_91"
> OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-1~bpo8+1-b14)
> OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
> Cassandra 3.5 installed from
> deb-src http://www.apache.org/dist/cassandra/debian 35x main
>            Reporter: Chris Love
>            Assignee: Jeff Jirsa
>             Fix For: 3.0.x, 3.x
>
>         Attachments: cass.log
>
>
> When nodes are bootstrapping I am getting multiple errors: "Migration task failed to complete", from MigrationManager.java
> The errors increase as more nodes are added to the ring, as I am creating a ring of 1k nodes.
> Cassandra yaml is here:
> https://github.com/k8s-for-greeks/gpmr/blob/3d50ff91a139b9c4a7a26eda0fb4dcf9a008fbed/pet-race-devops/docker/cassandra-debian/files/cassandra.yaml
[jira] [Updated] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Shuler updated CASSANDRA-12617:
---------------------------------------
    Fix Version/s:     (was: 3.10)
                   3.x

> dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> -------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-12617
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Sean McCarthy
>            Assignee: Carl Yeksigian
>              Labels: dtest, test-failure
>             Fix For: 3.x
>
>         Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
>     testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in sstableofflinerelevel_test
>     self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
>     self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
>     raise self.failureException(msg)
> "1 not greater than 1
> {code}
[jira] [Updated] (CASSANDRA-12617) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Shuler updated CASSANDRA-12617:
---------------------------------------
    Priority: Major  (was: Blocker)

> dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
> -------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-12617
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12617
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Sean McCarthy
>            Assignee: Carl Yeksigian
>              Labels: dtest, test-failure
>             Fix For: 3.x
>
>         Attachments: node1_debug.log, node1_gc.log, node1.log
>
>
> example failure:
> http://cassci.datastax.com/job/trunk_offheap_dtest/391/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test/
> {code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
>     testMethod()
>   File "/home/automaton/cassandra-dtest/offline_tools_test.py", line 212, in sstableofflinerelevel_test
>     self.assertGreater(max(final_levels), 1)
>   File "/usr/lib/python2.7/unittest/case.py", line 942, in assertGreater
>     self.fail(self._formatMessage(msg, standardMsg))
>   File "/usr/lib/python2.7/unittest/case.py", line 410, in fail
>     raise self.failureException(msg)
> "1 not greater than 1
> {code}
[jira] [Updated] (CASSANDRA-13025) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x.static_columns_with_distinct_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Shuler updated CASSANDRA-13025:
---------------------------------------
    Fix Version/s:     (was: 3.0.11)
                       (was: 3.10)
                   3.x
                   3.0.x

> dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x.static_columns_with_distinct_test
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-13025
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13025
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Testing
>            Reporter: Sean McCarthy
>            Assignee: Alex Petrov
>              Labels: dtest, test-failure
>             Fix For: 3.0.x, 3.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_dtest_upgrade/28/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x/static_columns_with_distinct_test
> {code}
> Error Message
>
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
>     testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 46, in wrapped
>     f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 4010, in static_columns_with_distinct_test
>     rows = list(cursor.execute("SELECT DISTINCT k, s1 FROM test2"))
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 1998, in execute
>     return self.execute_async(query, parameters, trace, custom_payload, timeout, execution_profile, paging_state).result()
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 3784, in result
>     raise self._final_exception
> {code}{code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git
> git:7eac22dd41cb09e6d64fb5ac48b2cca3c8840cc8
> Unexpected error in node2 log, error:
> ERROR [Native-Transport-Requests-2] 2016-12-08 03:20:04,861 Message.java:617 - Unexpected exception during request; channel = [id: 0xf4c13f2c, L:/127.0.0.2:9042 - R:/127.0.0.1:52112]
> java.io.IOError: java.io.IOException: Corrupt empty row found in unfiltered partition
> 	at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:224) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:212) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.statements.SelectStatement.processPartition(SelectStatement.java:779) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:741) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:408) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:273) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:232) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:188) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:219) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:204) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513) [apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407) [apache-cassandra-3.9.jar:3.9]
> 	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.39.Final.jar:4.0.39.Final]
> 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366) [netty-all-4.0.39.Final.jar:4.0.39.Final]
> 	at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java
[jira] [Updated] (CASSANDRA-13025) dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x.static_columns_with_distinct_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Shuler updated CASSANDRA-13025:
---------------------------------------
    Priority: Major  (was: Blocker)

> dtest failure in upgrade_tests.cql_tests.TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x.static_columns_with_distinct_test
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-13025
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13025
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Testing
>            Reporter: Sean McCarthy
>            Assignee: Alex Petrov
>              Labels: dtest, test-failure
>             Fix For: 3.0.x, 3.x
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_dtest_upgrade/28/testReport/upgrade_tests.cql_tests/TestCQLNodes3RF3_Upgrade_current_3_x_To_indev_3_x/static_columns_with_distinct_test
> {code}
> Error Message
>
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
>     testMethod()
>   File "/home/automaton/cassandra-dtest/tools/decorators.py", line 46, in wrapped
>     f(obj)
>   File "/home/automaton/cassandra-dtest/upgrade_tests/cql_tests.py", line 4010, in static_columns_with_distinct_test
>     rows = list(cursor.execute("SELECT DISTINCT k, s1 FROM test2"))
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 1998, in execute
>     return self.execute_async(query, parameters, trace, custom_payload, timeout, execution_profile, paging_state).result()
>   File "/home/automaton/src/cassandra-driver/cassandra/cluster.py", line 3784, in result
>     raise self._final_exception
> {code}{code}
> Standard Output
> http://git-wip-us.apache.org/repos/asf/cassandra.git
> git:7eac22dd41cb09e6d64fb5ac48b2cca3c8840cc8
> Unexpected error in node2 log, error:
> ERROR [Native-Transport-Requests-2] 2016-12-08 03:20:04,861 Message.java:617 - Unexpected exception during request; channel = [id: 0xf4c13f2c, L:/127.0.0.2:9042 - R:/127.0.0.1:52112]
> java.io.IOError: java.io.IOException: Corrupt empty row found in unfiltered partition
> 	at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:224) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer$1.computeNext(UnfilteredRowIteratorSerializer.java:212) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:133) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.statements.SelectStatement.processPartition(SelectStatement.java:779) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:741) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:408) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:273) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:232) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:76) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.QueryProcessor.processStatement(QueryProcessor.java:188) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:219) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.cql3.QueryProcessor.process(QueryProcessor.java:204) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.transport.messages.QueryMessage.execute(QueryMessage.java:115) ~[apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:513) [apache-cassandra-3.9.jar:3.9]
> 	at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:407) [apache-cassandra-3.9.jar:3.9]
> 	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.39.Final.jar:4.0.39.Final]
> 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366) [netty-all-4.0.39.Final.jar:4.0.39.Final]
> 	at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35) [netty-all-4.0.39.Final.jar:4.0.39.Final]
> 	at io.netty.channel.Abs
[jira] [Updated] (CASSANDRA-13058) dtest failure in hintedhandoff_test.TestHintedHandoff.hintedhandoff_decom_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Shuler updated CASSANDRA-13058:
---------------------------------------
    Priority: Major  (was: Blocker)

> dtest failure in hintedhandoff_test.TestHintedHandoff.hintedhandoff_decom_test
> ------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-13058
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13058
>             Project: Cassandra
>          Issue Type: Test
>          Components: Testing
>            Reporter: Sean McCarthy
>            Assignee: Stefan Podkowinski
>              Labels: dtest, test-failure
>             Fix For: 3.x
>
>         Attachments: 13058-3.x.patch, node1_debug.log, node1_gc.log, node1.log, node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, node3.log, node4_debug.log, node4_gc.log, node4.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_novnode_dtest/16/testReport/hintedhandoff_test/TestHintedHandoff/hintedhandoff_decom_test/
> {code}
> Error Message
> Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', ['decommission']] exited with non-zero status; exit status: 2;
> stderr: error: Error while decommissioning node: Failed to transfer all hints to 59f20b4f-0215-4e18-be1b-7e00f2901629
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
>     testMethod()
>   File "/home/automaton/cassandra-dtest/hintedhandoff_test.py", line 167, in hintedhandoff_decom_test
>     node1.decommission()
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1314, in decommission
>     self.nodetool("decommission")
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 783, in nodetool
>     return handle_external_tool_process(p, ['nodetool', '-h', 'localhost', '-p', str(self.jmx_port), cmd.split()])
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1993, in handle_external_tool_process
>     raise ToolError(cmd_args, rc, out, err)
> {code}{code}
> java.lang.RuntimeException: Error while decommissioning node: Failed to transfer all hints to 59f20b4f-0215-4e18-be1b-7e00f2901629
> 	at org.apache.cassandra.service.StorageService.decommission(StorageService.java:3924)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
> 	at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> 	at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> 	at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> 	at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> 	at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> 	at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> 	at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1466)
> 	at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> 	at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
> 	at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
> 	at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:828)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
> 	at sun.rmi.transport.Transport$1.run(Transport.java:200)
> 	at sun.rmi.transport.Transport$1.run(Transport.java:197)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at sun.rmi.
[jira] [Updated] (CASSANDRA-13058) dtest failure in hintedhandoff_test.TestHintedHandoff.hintedhandoff_decom_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Shuler updated CASSANDRA-13058:
---------------------------------------
    Fix Version/s:     (was: 3.10)
                   3.x

> dtest failure in hintedhandoff_test.TestHintedHandoff.hintedhandoff_decom_test
> ------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-13058
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13058
>             Project: Cassandra
>          Issue Type: Test
>          Components: Testing
>            Reporter: Sean McCarthy
>            Assignee: Stefan Podkowinski
>              Labels: dtest, test-failure
>             Fix For: 3.x
>
>         Attachments: 13058-3.x.patch, node1_debug.log, node1_gc.log, node1.log, node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, node3.log, node4_debug.log, node4_gc.log, node4.log
>
>
> example failure:
> http://cassci.datastax.com/job/cassandra-3.X_novnode_dtest/16/testReport/hintedhandoff_test/TestHintedHandoff/hintedhandoff_decom_test/
> {code}
> Error Message
> Subprocess ['nodetool', '-h', 'localhost', '-p', '7100', ['decommission']] exited with non-zero status; exit status: 2;
> stderr: error: Error while decommissioning node: Failed to transfer all hints to 59f20b4f-0215-4e18-be1b-7e00f2901629
> {code}{code}
> Stacktrace
>   File "/usr/lib/python2.7/unittest/case.py", line 329, in run
>     testMethod()
>   File "/home/automaton/cassandra-dtest/hintedhandoff_test.py", line 167, in hintedhandoff_decom_test
>     node1.decommission()
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1314, in decommission
>     self.nodetool("decommission")
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 783, in nodetool
>     return handle_external_tool_process(p, ['nodetool', '-h', 'localhost', '-p', str(self.jmx_port), cmd.split()])
>   File "/usr/local/lib/python2.7/dist-packages/ccmlib/node.py", line 1993, in handle_external_tool_process
>     raise ToolError(cmd_args, rc, out, err)
> {code}{code}
> java.lang.RuntimeException: Error while decommissioning node: Failed to transfer all hints to 59f20b4f-0215-4e18-be1b-7e00f2901629
> 	at org.apache.cassandra.service.StorageService.decommission(StorageService.java:3924)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
> 	at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
> 	at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
> 	at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
> 	at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
> 	at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
> 	at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
> 	at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
> 	at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1466)
> 	at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
> 	at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
> 	at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
> 	at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:828)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:323)
> 	at sun.rmi.transport.Transport$1.run(Transport.java:200)
> 	at sun.rmi.transport.Transport$1.run(Transport.java:197)
> 	at java.security.AccessController.doPrivileged(Native Metho
[cassandra] Git Push Summary
Repository: cassandra

Updated Tags:  refs/tags/3.10-tentative [created] 9c2ab2555
[jira] [Updated] (CASSANDRA-6719) redesign loadnewsstables
[ https://issues.apache.org/jira/browse/CASSANDRA-6719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jason Brown updated CASSANDRA-6719:
-----------------------------------
    Assignee: Bhaskar Muppana

> redesign loadnewsstables
> ------------------------
>
>                 Key: CASSANDRA-6719
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6719
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Tools
>            Reporter: Jonathan Ellis
>            Assignee: Bhaskar Muppana
>            Priority: Minor
>              Labels: lhf
>             Fix For: 3.x
>
>         Attachments: 6719.patch
>
>
> CFSMBean.loadNewSSTables scans data directories for new sstables dropped there by an external agent. This is dangerous because of possible filename conflicts with existing or newly generated sstables.
> Instead, we should support leaving the new sstables in a separate directory (specified by a parameter, or configured as a new location in yaml) and take care of renaming as necessary automagically.
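The safer import flow the ticket proposes can be sketched in miniature. Everything below is hypothetical naming (in particular the simplified {{ks-<gen>-Data.db}} pattern, which is not Cassandra's real sstable naming); the point is only that renaming on import under a freshly chosen generation number removes the collision risk.

```python
import os
import shutil
import tempfile

# Hedged sketch of the proposed import flow; "ks-<gen>-Data.db" is a
# deliberate simplification, not Cassandra's actual sstable file naming.

def next_generation(data_dir):
    """Pick a generation number strictly above anything already in data_dir."""
    gens = [0]
    for name in os.listdir(data_dir):
        parts = name.split("-")
        if len(parts) >= 2 and parts[1].isdigit():
            gens.append(int(parts[1]))
    return max(gens) + 1

def import_staged(staging_dir, data_dir):
    """Move staged sstables into data_dir, renaming so names never collide."""
    moved = []
    for name in sorted(os.listdir(staging_dir)):
        new_name = "ks-%d-Data.db" % next_generation(data_dir)
        shutil.move(os.path.join(staging_dir, name),
                    os.path.join(data_dir, new_name))
        moved.append(new_name)
    return moved

# Throwaway directories standing in for the staging and data locations.
staging = tempfile.mkdtemp()
data = tempfile.mkdtemp()
open(os.path.join(data, "ks-1-Data.db"), "w").close()      # pre-existing sstable
open(os.path.join(staging, "incoming-a.db"), "w").close()  # dropped by external agent
open(os.path.join(staging, "incoming-b.db"), "w").close()
moved = import_staged(staging, data)  # ["ks-2-Data.db", "ks-3-Data.db"]
```

Because the destination name is chosen against the live contents of the data directory, an externally dropped file can never shadow an existing or newly generated sstable.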
[jira] [Commented] (CASSANDRA-11936) Add support for + and - operations on dates
[ https://issues.apache.org/jira/browse/CASSANDRA-11936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15822484#comment-15822484 ]

Andy Tolbert commented on CASSANDRA-11936:
------------------------------------------

A couple questions about this from experimenting with the branch:

1. Should it also work for timeuuid?
2. Does {{WHERE reading_time < now() - 2h}} work yet, or will that be in another ticket? The now() function is documented as generating a timeuuid, although maybe it's contextual by type? When I try it I get an error {{the '-' operation is not supported between now() and 2h}}.

> Add support for + and - operations on dates
> -------------------------------------------
>
>                 Key: CASSANDRA-11936
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11936
>             Project: Cassandra
>          Issue Type: Sub-task
>          Components: CQL
>            Reporter: Benjamin Lerer
>            Assignee: Benjamin Lerer
>             Fix For: 3.x
>
>
> For time series it can be interesting to allow queries with {{WHERE}} clause like: {{... WHERE reading_time < now() - 2h}}
> In order to do that we need to add support for the {{+}} and {{-}} operations with dates.
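For reference, the semantics the ticket asks for is ordinary timestamp-minus-duration arithmetic. A plain-Python illustration (not CQL, and with a fixed timestamp standing in for {{now()}} so the example is deterministic):

```python
from datetime import datetime, timedelta

# A fixed "now" keeps the example deterministic; in CQL, now() supplies this.
now = datetime(2017, 1, 16, 12, 0, 0)

def minus_duration(ts, hours=0, minutes=0, seconds=0):
    """Evaluate ts - <duration>, the arithmetic the ticket adds to CQL."""
    return ts - timedelta(hours=hours, minutes=minutes, seconds=seconds)

cutoff = minus_duration(now, hours=2)  # analogue of "reading_time < now() - 2h"
```

A time-series filter would then keep rows whose timestamp is below {{cutoff}}.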
[jira] [Commented] (CASSANDRA-12203) AssertionError on compaction after upgrade (2.1.9 -> 3.7)
[ https://issues.apache.org/jira/browse/CASSANDRA-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15822422#comment-15822422 ]

Yuki Morishita commented on CASSANDRA-12203:
--------------------------------------------

Thanks for taking a look at this. Changed assertion as suggested.

||branch||testall||dtest||
|[12203-3.0|https://github.com/yukim/cassandra/tree/12203-3.0]|[testall|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-12203-3.0-testall/lastCompletedBuild/testReport/]|[dtest|http://cassci.datastax.com/view/Dev/view/yukim/job/yukim-12203-3.0-dtest/lastCompletedBuild/testReport/]|

Tests seem good.

> AssertionError on compaction after upgrade (2.1.9 -> 3.7)
> ----------------------------------------------------------
>
>                 Key: CASSANDRA-12203
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12203
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Cassandra 3.7 (upgrade from 2.1.9)
> Java version "1.8.0_91"
> Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-83-generic x86_64)
>            Reporter: Roman S. Borschel
>            Assignee: Yuki Morishita
>            Priority: Critical
>             Fix For: 3.0.x, 3.x
>
>
> After upgrading a Cassandra cluster from 2.1.9 to 3.7, one column family (using SizeTieredCompaction) repeatedly and continuously failed compaction (and thus also repair) across the cluster, with all nodes producing the following errors in the logs:
> {noformat}
> 2016-07-14T09:29:47.96855 |srv=cassandra|ERROR: Exception in thread Thread[CompactionExecutor:3,1,main]
> 2016-07-14T09:29:47.96858 |srv=cassandra|java.lang.AssertionError: null
> 2016-07-14T09:29:47.96859 |srv=cassandra| at org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$TombstoneTracker.openNew(UnfilteredDeserializer.java:650) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra| at org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer$UnfilteredIterator.hasNext(UnfilteredDeserializer.java:423) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra| at org.apache.cassandra.db.UnfilteredDeserializer$OldFormatDeserializer.hasNext(UnfilteredDeserializer.java:298) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96860 |srv=cassandra| at org.apache.cassandra.io.sstable.SSTableSimpleIterator$OldFormatIterator.readStaticRow(SSTableSimpleIterator.java:133) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96861 |srv=cassandra| at org.apache.cassandra.io.sstable.SSTableIdentityIterator.(SSTableIdentityIterator.java:57) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96861 |srv=cassandra| at org.apache.cassandra.io.sstable.format.big.BigTableScanner$KeyScanningIterator$1.initializeIterator(BigTableScanner.java:334) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96862 |srv=cassandra| at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.maybeInit(LazilyInitializedUnfilteredRowIterator.java:48) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96862 |srv=cassandra| at org.apache.cassandra.db.rows.LazilyInitializedUnfilteredRowIterator.isReverseOrder(LazilyInitializedUnfilteredRowIterator.java:70) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96863 |srv=cassandra| at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:109) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96863 |srv=cassandra| at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$1.reduce(UnfilteredPartitionIterators.java:100) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96864 |srv=cassandra| at org.apache.cassandra.utils.MergeIterator$Candidate.consume(MergeIterator.java:408) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96864 |srv=cassandra| at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:203) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96865 |srv=cassandra| at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:156) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96865 |srv=cassandra| at org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96866 |srv=cassandra| at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$2.hasNext(UnfilteredPartitionIterators.java:150) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96866 |srv=cassandra| at org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:72) ~[apache-cassandra-3.7.jar:3.7]
> 2016-07-14T09:29:47.96867 |srv=cassandra| at org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226) ~[apac
[jira] [Commented] (CASSANDRA-12850) Add the duration type to the protocol specifications
[ https://issues.apache.org/jira/browse/CASSANDRA-12850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15822277#comment-15822277 ] Sandeep Tamhankar commented on CASSANDRA-12850: --- Has the work to return the new type in v5 actually been done, and does this ticket just cover updating the v5 spec file? If so, what is the duration type's enum value? From the v4 spec, I'd guess 0x15, but wanted to confirm. > Add the duration type to the protocol specifications > > > Key: CASSANDRA-12850 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12850 > Project: Cassandra > Issue Type: Sub-task > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > Fix For: 3.x > > > The Duration type needs to be added to the protocol specifications. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-13122) reduce compaction complexity by having the CompactionManager pick the sstables rather than the task
Jon Haddad created CASSANDRA-13122: -- Summary: reduce compaction complexity by having the CompactionManager pick the sstables rather than the task Key: CASSANDRA-13122 URL: https://issues.apache.org/jira/browse/CASSANDRA-13122 Project: Cassandra Issue Type: Improvement Components: Compaction Reporter: Jon Haddad Fix For: 4.x CompactionTasks currently pick their own sstables. This imposes the requirement of coding against a mutable list of sstables involved in a compaction, which adds unnecessary complexity. I propose we move this logic to the CompactionManager, which would hand off the sstable list to the CompactionTask. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
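A minimal sketch of the proposed split, in plain Python rather than the actual Cassandra classes (the `CompactionManager`/`CompactionTask` names mirror the ticket, but the selection logic and data shapes here are invented for illustration): the manager does the selection once and hands the task an immutable snapshot, so the task never codes against a mutable candidate list.

```python
# Hypothetical sketch of CASSANDRA-13122's proposal (not the real Cassandra code):
# selection happens in the manager; the task receives a frozen selection.

class CompactionTask:
    def __init__(self, sstables):
        # The task gets a fixed, immutable tuple and never re-selects.
        self.sstables = tuple(sstables)

    def run(self):
        return f"compacting {len(self.sstables)} sstables"


class CompactionManager:
    def __init__(self, live_sstables):
        self.live_sstables = list(live_sstables)

    def submit_background_task(self):
        # Selection logic lives in one place, here, instead of inside the task.
        picked = [s for s in self.live_sstables if not s.get("compacting")]
        for s in picked:
            s["compacting"] = True  # mark as in-flight before handing off
        return CompactionTask(picked)


manager = CompactionManager([{"gen": 1}, {"gen": 2}, {"gen": 3, "compacting": True}])
task = manager.submit_background_task()
print(task.run())  # -> compacting 2 sstables
```

The design point is the `tuple(...)` hand-off: once the task holds an immutable selection, concurrent changes to the live sstable set cannot invalidate it mid-compaction.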
[jira] [Commented] (CASSANDRA-13121) repair progress message breaks legacy JMX support
[ https://issues.apache.org/jira/browse/CASSANDRA-13121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15822076#comment-15822076 ] Scott Bale commented on CASSANDRA-13121: Note also the comment elsewhere in {{RepairRunnable}}: {code} public void onFailure(Throwable t) { /** * If the failure message below is modified, it must also be updated on * {@link org.apache.cassandra.utils.progress.jmx.LegacyJMXProgressSupport} * for backward-compatibility support. */ String message = String.format("Repair session %s for range %s failed with error %s", session.getId(), session.getRange().toString(), t.getMessage()); {code} https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/repair/RepairRunnable.java#L269 > repair progress message breaks legacy JMX support > - > > Key: CASSANDRA-13121 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13121 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Scott Bale >Priority: Minor > > The error progress message in {{RepairRunnable}} is not compliant with the > {{LegacyJMXProgressSupport}} class, which uses a regex to match on the text > of a progress event. Therefore, actual failures slip through as successes if > using legacy JMX for repairs. 
> In {{RepairRunnable}} > {code} > protected void fireErrorAndComplete(String tag, int progressCount, int > totalProgress, String message) > { > fireProgressEvent(tag, new ProgressEvent(ProgressEventType.ERROR, > progressCount, totalProgress, message)); > fireProgressEvent(tag, new ProgressEvent(ProgressEventType.COMPLETE, > progressCount, totalProgress, String.format("Repair command #%d finished with > error", cmd))); > } > {code} > Note the {{"Repair command #%d finished with error"}} > See > https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/repair/RepairRunnable.java#L109 > In {{LegacyJMXProgressSupport}}: > {code} > protected static final Pattern SESSION_FAILED_MATCHER = > Pattern.compile("Repair session .* for range .* failed with error .*"); > protected static final Pattern SESSION_SUCCESS_MATCHER = > Pattern.compile("Repair session .* for range .* finished"); > {code} > See > https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/progress/jmx/LegacyJMXProgressSupport.java#L38 > Legacy JMX support was introduced for CASSANDRA-11430 (version 2.2.6) and the > bug was introduced as part of CASSANDRA-12279 (version 2.2.8). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-13121) repair progress message breaks legacy JMX support
Scott Bale created CASSANDRA-13121: -- Summary: repair progress message breaks legacy JMX support Key: CASSANDRA-13121 URL: https://issues.apache.org/jira/browse/CASSANDRA-13121 Project: Cassandra Issue Type: Bug Components: Streaming and Messaging Reporter: Scott Bale Priority: Minor The error progress message in {{RepairRunnable}} is not compliant with the {{LegacyJMXProgressSupport}} class, which uses a regex to match on the text of a progress event. Therefore, actual failures slip through as successes if using legacy JMX for repairs. In {{RepairRunnable}} {code} protected void fireErrorAndComplete(String tag, int progressCount, int totalProgress, String message) { fireProgressEvent(tag, new ProgressEvent(ProgressEventType.ERROR, progressCount, totalProgress, message)); fireProgressEvent(tag, new ProgressEvent(ProgressEventType.COMPLETE, progressCount, totalProgress, String.format("Repair command #%d finished with error", cmd))); } {code} Note the {{"Repair command #%d finished with error"}} See https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/repair/RepairRunnable.java#L109 In {{LegacyJMXProgressSupport}}: {code} protected static final Pattern SESSION_FAILED_MATCHER = Pattern.compile("Repair session .* for range .* failed with error .*"); protected static final Pattern SESSION_SUCCESS_MATCHER = Pattern.compile("Repair session .* for range .* finished"); {code} See https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/progress/jmx/LegacyJMXProgressSupport.java#L38 Legacy JMX support was introduced for CASSANDRA-11430 (version 2.2.6) and the bug was introduced as part of CASSANDRA-12279 (version 2.2.8). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
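The mismatch is easy to reproduce outside Cassandra. The two patterns below are copied from the quoted {{LegacyJMXProgressSupport}} snippet; they use only regex syntax that behaves identically in Java and Python, so this small Python sketch mirrors the Java behavior:

```python
import re

# Patterns quoted from LegacyJMXProgressSupport (same semantics in Java and Python).
SESSION_FAILED_MATCHER = re.compile(r"Repair session .* for range .* failed with error .*")
SESSION_SUCCESS_MATCHER = re.compile(r"Repair session .* for range .* finished")

# Message emitted by fireErrorAndComplete for the COMPLETE progress event:
message = "Repair command #1 finished with error"

# Neither legacy matcher recognizes it, so the failure is not surfaced
# to consumers relying on the legacy JMX notifications.
print(bool(SESSION_FAILED_MATCHER.match(message)))   # False
print(bool(SESSION_SUCCESS_MATCHER.match(message)))  # False

# The shape the matchers do expect (example message invented for illustration):
ok = "Repair session 42 for range (0,100] failed with error timeout"
print(bool(SESSION_FAILED_MATCHER.match(ok)))        # True
```

Since both matchers fail on the actual message, the legacy code path classifies the repair neither as failed nor finished-with-session, which is exactly how failures slip through as successes.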
[jira] [Assigned] (CASSANDRA-11983) Migration task failed to complete
[ https://issues.apache.org/jira/browse/CASSANDRA-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa reassigned CASSANDRA-11983: -- Assignee: Jeff Jirsa > Migration task failed to complete > - > > Key: CASSANDRA-11983 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11983 > Project: Cassandra > Issue Type: Bug > Components: Lifecycle > Environment: Docker / Kubernetes running > Linux cassandra-21 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-1 (2016-03-06) > x86_64 GNU/Linux > openjdk version "1.8.0_91" > OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-1~bpo8+1-b14) > OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode) > Cassandra 3.5 installed from > deb-src http://www.apache.org/dist/cassandra/debian 35x main >Reporter: Chris Love >Assignee: Jeff Jirsa > Attachments: cass.log > > > When nodes are bootstrapping I am getting multiple errors: "Migration task > failed to complete", from MigrationManager.java > The errors increase as more nodes are added to the ring, as I am creating a > ring of 1k nodes. > The Cassandra yaml is here: > https://github.com/k8s-for-greeks/gpmr/blob/3d50ff91a139b9c4a7a26eda0fb4dcf9a008fbed/pet-race-devops/docker/cassandra-debian/files/cassandra.yaml -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12443) Remove alter type support
[ https://issues.apache.org/jira/browse/CASSANDRA-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-12443: --- Issue Type: Bug (was: Improvement) > Remove alter type support > - > > Key: CASSANDRA-12443 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12443 > Project: Cassandra > Issue Type: Bug >Reporter: Carl Yeksigian >Assignee: Carl Yeksigian > Fix For: 3.x > > > Currently, we allow altering of types. However, because we no longer store > the length for all types, switching from a fixed-width to a > variable-width type causes issues: commitlog playback can break startup, > queries currently in flight can get back bad results, and special casing is > required to handle the changes. In addition, this would solve > CASSANDRA-10309, as there is no possibility of the types changing while an > SSTableReader is open. > For fixed-length, compatible types, the alter also doesn't add much over a > cast, so users could use a cast instead to retrieve the altered type. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-13120) Trace and Histogram output misleading
Adam Hattrell created CASSANDRA-13120: - Summary: Trace and Histogram output misleading Key: CASSANDRA-13120 URL: https://issues.apache.org/jira/browse/CASSANDRA-13120 Project: Cassandra Issue Type: Bug Components: Core Reporter: Adam Hattrell Priority: Minor If we look at the following output: {noformat} [centos@cassandra-c-3]$ nodetool getsstables -- keyspace table 60ea4399-6b9f-4419-9ccb-ff2e6742de10 /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647146-big-Data.db /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647147-big-Data.db /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647145-big-Data.db /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647152-big-Data.db /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-647157-big-Data.db /mnt/cassandra/data/data/keyspace/table-62f30431acf411e69a4ed7dd11246f8a/mc-648137-big-Data.db {noformat} We can see that this key value appears in just 6 sstables. 
However, when we run a select against the table and key we get: {noformat} Tracing session: a6c81330-d670-11e6-b00b-c1d403fd6e84
activity | timestamp | source | source_elapsed
---------+-----------+--------+---------------
Execute CQL3 query | 2017-01-09 13:36:40.419000 | 10.200.254.141 | 0
Parsing SELECT * FROM keyspace.table WHERE id = 60ea4399-6b9f-4419-9ccb-ff2e6742de10; [SharedPool-Worker-2] | 2017-01-09 13:36:40.419000 | 10.200.254.141 | 104
Preparing statement [SharedPool-Worker-2] | 2017-01-09 13:36:40.419000 | 10.200.254.141 | 220
Executing single-partition query on table [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 10.200.254.141 | 450
Acquiring sstable references [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 10.200.254.141 | 477
Bloom filter allows skipping sstable 648146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419000 | 10.200.254.141 | 496
Bloom filter allows skipping sstable 648145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 10.200.254.141 | 503
Key cache hit for sstable 648140 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 10.200.254.141 | 513
Bloom filter allows skipping sstable 648135 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 10.200.254.141 | 520
Bloom filter allows skipping sstable 648130 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 10.200.254.141 | 526
Bloom filter allows skipping sstable 648048 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 10.200.254.141 | 530
Bloom filter allows skipping sstable 647749 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 10.200.254.141 | 535
Bloom filter allows skipping sstable 647404 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 10.200.254.141 | 540
Key cache hit for sstable 647145 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 10.200.254.141 | 548
Key cache hit for sstable 647146 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419001 | 10.200.254.141 | 556
Key cache hit for sstable 647147 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419002 | 10.200.254.141 | 564
Bloom filter allows skipping sstable 647148 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419002 | 10.200.254.141 | 570
Bloom filter allows skipping sstable 647149 [SharedPool-Worker-1] | 2017-01-09 13:36:40.419002 | 10.200.254.141 | 575
Bloom filter allows s
[jira] [Updated] (CASSANDRA-11887) Duplicate rows after a 2.2.5 to 3.0.4 migration
[ https://issues.apache.org/jira/browse/CASSANDRA-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christian Spriegel updated CASSANDRA-11887: --- Attachment: christianspriegel_schema.txt christianspriegel_query_trace.txt > Duplicate rows after a 2.2.5 to 3.0.4 migration > --- > > Key: CASSANDRA-11887 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11887 > Project: Cassandra > Issue Type: Bug >Reporter: Julien Anguenot >Priority: Blocker > Attachments: christianspriegel_query_trace.txt, > christianspriegel_schema.txt, > post_3.0.9_upgrade_sstabledump_showing_duplicate_row.txt > > > After migrating from 2.2.5 to 3.0.4, some tables seem to carry duplicate > primary keys. > Below an example. Note, repair / scrub of such table do not seem to fix nor > indicate any issues. > *Table definition*: > {code} > CREATE TABLE core.edge_ipsec_vpn_service ( > edge_uuid text PRIMARY KEY, > enabled boolean, > endpoints set>, > tunnels set> > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > {code} > *UDTs:* > {code} > CREATE TYPE core.edge_ipsec_vpn_endpoint ( > network text, > public_ip text > ); > CREATE TYPE core.edge_ipsec_vpn_tunnel ( > name text, > description text, > peer_ip_address text, > peer_id text, > local_ip_address text, > local_id text, > local_subnets frozen>>, > peer_subnets frozen>>, > shared_secret text, > 
shared_secret_encrypted boolean, > encryption_protocol text, > mtu int, > enabled boolean, > operational boolean, > error_details text, > vpn_peer frozen > ); > CREATE TYPE core.edge_ipsec_vpn_subnet ( > name text, > gateway text, > netmask text > ); > CREATE TYPE core.edge_ipsec_vpn_peer ( > type text, > id text, > name text, > vcd_url text, > vcd_org text, > vcd_username text > ); > {code} > sstabledump extract (IP addressees hidden as well as secrets) > {code} > [...] > { > "partition" : { > "key" : [ "84d567cc-0165-4e64-ab97-3a9d06370ba9" ], > "position" : 131146 > }, > "rows" : [ > { > "type" : "row", > "position" : 131236, > "liveness_info" : { "tstamp" : "2016-05-06T17:07:15.416003Z" }, > "cells" : [ > { "name" : "enabled", "value" : "true" }, > { "name" : "tunnels", "path" : [ > “XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:false::third > party\\:1.2.3.4\\:\\:\\:\\:” ], "value" : "" } > ] > }, > { > "type" : "row", > "position" : 131597, > "cells" : [ > { "name" : "endpoints", "path" : [ “XXX” ], "value" : "", "tstamp" > : "2016-03-29T08:05:38.297015Z" }, > { "name" : "tunnels", "path" : [ > “XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:true::third > party\\:1.2.3.4\\:\\:\\:\\:” ], "value" : "", "tstamp" : > "2016-03-29T08:05:38.297015Z" }, > { "name" : "tunnels", "path" : [ > “XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:false::third > party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : > "2016-03-14T18:05:07.262001Z" }, > { "name" : "tunnels", "path" : [ > “XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4XXX:XXX:false:AES256:1500:true:true::third > party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : > "2016-03-29T08:05:38.297015Z" } > ] > }, > { > "type" : "row", > "position" : 133644, > "cells" : [ > { "name" : "tunnels", "path" : [ > “XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:true::third > party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : > 
"2016-03-29T07:05:27.213013Z" }, > { "name" : "tunnels", "path" : [ > “XXX::1.2.3.4.7:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:true::third > party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : > "2016-03-29T07:05:27.213013Z" } > ] > } > ] > }, > [...] > [...] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11887) Duplicate rows after a 2.2.5 to 3.0.4 migration
[ https://issues.apache.org/jira/browse/CASSANDRA-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15821732#comment-15821732 ] Christian Spriegel commented on CASSANDRA-11887: We got the same issue when upgrading from 2.2.x to 3.0.10. I think what is special about this table is the primary key definition 'PRIMARY KEY ("id")'. Perhaps this issue is caused by not having a "column-name"? I attached a query trace and the schema as files. > Duplicate rows after a 2.2.5 to 3.0.4 migration > --- > > Key: CASSANDRA-11887 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11887 > Project: Cassandra > Issue Type: Bug >Reporter: Julien Anguenot >Priority: Blocker > Attachments: post_3.0.9_upgrade_sstabledump_showing_duplicate_row.txt > > > After migrating from 2.2.5 to 3.0.4, some tables seem to carry duplicate > primary keys. > Below an example. Note, repair / scrub of such table do not seem to fix nor > indicate any issues. > *Table definition*: > {code} > CREATE TABLE core.edge_ipsec_vpn_service ( > edge_uuid text PRIMARY KEY, > enabled boolean, > endpoints set>, > tunnels set> > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > {code} > *UDTs:* > {code} > CREATE TYPE core.edge_ipsec_vpn_endpoint ( > network text, > public_ip text > ); > CREATE TYPE core.edge_ipsec_vpn_tunnel ( > name text, > description 
text, > peer_ip_address text, > peer_id text, > local_ip_address text, > local_id text, > local_subnets frozen>>, > peer_subnets frozen>>, > shared_secret text, > shared_secret_encrypted boolean, > encryption_protocol text, > mtu int, > enabled boolean, > operational boolean, > error_details text, > vpn_peer frozen > ); > CREATE TYPE core.edge_ipsec_vpn_subnet ( > name text, > gateway text, > netmask text > ); > CREATE TYPE core.edge_ipsec_vpn_peer ( > type text, > id text, > name text, > vcd_url text, > vcd_org text, > vcd_username text > ); > {code} > sstabledump extract (IP addressees hidden as well as secrets) > {code} > [...] > { > "partition" : { > "key" : [ "84d567cc-0165-4e64-ab97-3a9d06370ba9" ], > "position" : 131146 > }, > "rows" : [ > { > "type" : "row", > "position" : 131236, > "liveness_info" : { "tstamp" : "2016-05-06T17:07:15.416003Z" }, > "cells" : [ > { "name" : "enabled", "value" : "true" }, > { "name" : "tunnels", "path" : [ > “XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:false::third > party\\:1.2.3.4\\:\\:\\:\\:” ], "value" : "" } > ] > }, > { > "type" : "row", > "position" : 131597, > "cells" : [ > { "name" : "endpoints", "path" : [ “XXX” ], "value" : "", "tstamp" > : "2016-03-29T08:05:38.297015Z" }, > { "name" : "tunnels", "path" : [ > “XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:true::third > party\\:1.2.3.4\\:\\:\\:\\:” ], "value" : "", "tstamp" : > "2016-03-29T08:05:38.297015Z" }, > { "name" : "tunnels", "path" : [ > “XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:false::third > party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : > "2016-03-14T18:05:07.262001Z" }, > { "name" : "tunnels", "path" : [ > “XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4XXX:XXX:false:AES256:1500:true:true::third > party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : > "2016-03-29T08:05:38.297015Z" } > ] > }, > { > "type" : "row", > "position" : 133644, > "cells" : [ > { "name" : "tunnels", "path" 
: [ > “XXX::1.2.3.4:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:true::third > party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : > "2016-03-29T07:05:27.213013Z" }, > { "name" : "tunnels", "path" : [ > “XXX::1.2.3.4.7:1.2.3.4:1.2.3.4:1.2.3.4:XXX:XXX:false:AES256:1500:true:true::third > party\\:1.2.3.4\\:\\:\\:\\:" ], "value" : "", "tstamp" : > "2016-03-29T07:05:
[jira] [Reopened] (CASSANDRA-8675) COPY TO/FROM broken for newline characters
[ https://issues.apache.org/jira/browse/CASSANDRA-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhaoYang reopened CASSANDRA-8675: - Hi, this problem is causing trouble while we migrate some tables in production, so I am reopening this ticket as a bug instead of the duplicate improvement ticket (https://issues.apache.org/jira/browse/CASSANDRA-8790). Any suggestions are welcome. > COPY TO/FROM broken for newline characters > -- > > Key: CASSANDRA-8675 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8675 > Project: Cassandra > Issue Type: Bug > Environment: [cqlsh 5.0.1 | Cassandra 2.1.2 | CQL spec 3.2.0 | Native > protocol v3] > Ubuntu 14.04 64-bit >Reporter: Lex Lythius > Labels: cqlsh > Fix For: 2.1.3 > > Attachments: copytest.csv > > > Exporting/importing does not preserve contents when texts containing newline > (and possibly other) characters are involved: > {code:sql} > cqlsh:test> create table if not exists copytest (id int primary key, t text); > cqlsh:test> insert into copytest (id, t) values (1, 'This has a newline > ... character'); > cqlsh:test> insert into copytest (id, t) values (2, 'This has a quote " > character'); > cqlsh:test> insert into copytest (id, t) values (3, 'This has a fake tab \t > character (typed backslash, t)'); > cqlsh:test> select * from copytest; > id | t > +- > 1 | This has a newline\ncharacter > 2 |This has a quote " character > 3 | This has a fake tab \t character (entered slash-t text) > (3 rows) > cqlsh:test> copy copytest to '/tmp/copytest.csv'; > 3 rows exported in 0.034 seconds. > cqlsh:test> copy copytest from '/tmp/copytest.csv'; > 3 rows imported in 0.005 seconds. 
> cqlsh:test> select * from copytest; > id | t > +--- > 1 | This has a newlinencharacter > 2 | This has a quote " character > 3 | This has a fake tab \t character (typed backslash, t) > (3 rows) > {code} > I tried replacing \n in the CSV file with \\n, which just expands to \n in > the table; and with an actual newline character, which fails with an error since > it prematurely terminates the record. > It seems backslashes are only used to take the following character as a > literal. > Until this is fixed, what would be the best way to refactor an old table with > a new, incompatible structure maintaining its content and name, since we > can't rename tables? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
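For comparison, standard CSV quoting carries embedded newlines and quote characters losslessly. The sketch below uses Python's stdlib csv module (not cqlsh's COPY implementation) to show the round trip that the affected cqlsh versions fail to perform:

```python
import csv
import io

# The three values from the bug report, including the embedded newline.
rows = [
    (1, "This has a newline\ncharacter"),
    (2, 'This has a quote " character'),
    (3, "This has a fake tab \\t character (typed backslash, t)"),
]

# Write: the csv module quotes any field containing a newline or quote char.
buf = io.StringIO()
csv.writer(buf).writerows(rows)

# Read back: quoted newlines survive the round trip intact.
buf.seek(0)
restored = [(int(i), t) for i, t in csv.reader(buf)]
assert restored == rows  # lossless, unlike the COPY behavior shown above
```

The key difference is that RFC 4180-style quoting wraps the whole field in double quotes instead of backslash-escaping individual characters, so a literal newline inside a field never terminates the record.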
[jira] [Comment Edited] (CASSANDRA-13020) Stuck in LEAVING state (Transferring all hints to null)
[ https://issues.apache.org/jira/browse/CASSANDRA-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15821557#comment-15821557 ] Stefan Podkowinski edited comment on CASSANDRA-13020 at 1/13/17 10:19 AM: -- The issue seems to have been introduced in CASSANDRA-10198 and is caused by not using the broadcast IP for retrieving the host ID from TokenMetadata. See [StorageService.getPreferredHintsStreamTarget()|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L3972]. /cc [~krummas] was (Author: spo...@gmail.com): The issue seems to be caused by not using the broadcast IP for retrieving the host ID from TokenMetadata. See [StorageService.getPreferredHintsStreamTarget()|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L3972]. /cc [~krummas] > Stuck in LEAVING state (Transferring all hints to null) > --- > > Key: CASSANDRA-13020 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13020 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging > Environment: v3.0.9 >Reporter: Aleksandr Ivanov > Labels: decommission, hints > > I tried to decommission one node. > The node sent all data to another node and got stuck in the LEAVING state. > The log shows an exception in the HintsDispatcher thread. > Could this be the reason it is stuck in the LEAVING state? 
> command output: > {noformat} > root@cas-node6:~# time nodetool decommission > error: null > -- StackTrace -- > java.lang.NullPointerException > at > java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106) > at > java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097) > at > org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:203) > at > java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) > at > java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) > at > java.util.concurrent.ConcurrentHashMap$ValueSpliterator.forEachRemaining(ConcurrentHashMap.java:3566) > at > java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) > at > java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) > at > java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) > at > java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) > at > java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) > at > java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) > at > org.apache.cassandra.hints.HintsDispatchExecutor$TransferHintsTask.transfer(HintsDispatchExecutor.java:168) > at > org.apache.cassandra.hints.HintsDispatchExecutor$TransferHintsTask.run(HintsDispatchExecutor.java:141) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > real147m7.483s > user0m17.388s > sys 0m1.968s > {noformat} > nodetool netstats: > {noformat} > root@cas-node6:~# nodetool netstats > Mode: LEAVING > Not sending any streams. 
> Read Repair Statistics: > Attempted: 35082 > Mismatch (Blocking): 18 > Mismatch (Background): 0 > Pool NameActive Pending Completed Dropped > Large messages n/a 1 0 0 > Small messages n/a 0 16109860 112 > Gossip messages n/a 0 287074 0 > {noformat} > Log: > {noformat} > INFO [RMI TCP Connection(58)-127.0.0.1] 2016-12-07 12:52:59,467 > StorageService.java:1170 - LEAVING: sleeping 3 ms for batch processing > and pending range setup > INFO [RMI TCP Connection(58)-127.0.0.1] 2016-12-07 12:53:39,455 > StorageService.java:1170 - LEAVING: replaying batch log and streaming data to > other nodes > INFO [RMI TCP Connection(58)-127.0.0.1] 2016-12-07 12:53:39,910 > StreamResultFuture.java:87 - [Stream #2cc874c0-bc7c-11e6-b0df-e7f1ecd3dcfb] > Executing streaming plan for Unbootstrap > INFO [StreamConnectionEstablisher:1] 2016-12-07 12:53:39,911 > StreamSession.java:239 - [Stream #2cc874c0-bc7c-11e6-b0df-e7f1ecd3dcfb] > Starting streaming to /10.10.10.17 > INFO [StreamConnectionEstablisher:2] 2016-12-07 12:53:39,911 > StreamSession.java:232 - [Stream #2cc8
[jira] [Commented] (CASSANDRA-13020) Stuck in LEAVING state (Transferring all hints to null)
[ https://issues.apache.org/jira/browse/CASSANDRA-13020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15821557#comment-15821557 ] Stefan Podkowinski commented on CASSANDRA-13020: The issue seems to be caused by not using the broadcast IP for retrieving the host ID from TokenMetadata. See [StorageService.getPreferredHintsStreamTarget()|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StorageService.java#L3972]. /cc [~krummas] > Stuck in LEAVING state (Transferring all hints to null) > --- > > Key: CASSANDRA-13020 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13020 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging > Environment: v3.0.9 >Reporter: Aleksandr Ivanov > Labels: decommission, hints > > I tried to decommission one node. > The node sent all data to another node and got stuck in the LEAVING state. > The log shows an exception in the HintsDispatcher thread. > Could this be the reason it is stuck in the LEAVING state? 
> command output: > {noformat} > root@cas-node6:~# time nodetool decommission > error: null > -- StackTrace -- > java.lang.NullPointerException > at > java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106) > at > java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097) > at > org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:203) > at > java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184) > at > java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) > at > java.util.concurrent.ConcurrentHashMap$ValueSpliterator.forEachRemaining(ConcurrentHashMap.java:3566) > at > java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) > at > java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) > at > java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151) > at > java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174) > at > java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) > at > java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418) > at > org.apache.cassandra.hints.HintsDispatchExecutor$TransferHintsTask.transfer(HintsDispatchExecutor.java:168) > at > org.apache.cassandra.hints.HintsDispatchExecutor$TransferHintsTask.run(HintsDispatchExecutor.java:141) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > real147m7.483s > user0m17.388s > sys 0m1.968s > {noformat} > nodetool netstats: > {noformat} > root@cas-node6:~# nodetool netstats > Mode: LEAVING > Not sending any streams. 
> Read Repair Statistics: > Attempted: 35082 > Mismatch (Blocking): 18 > Mismatch (Background): 0 > Pool NameActive Pending Completed Dropped > Large messages n/a 1 0 0 > Small messages n/a 0 16109860 112 > Gossip messages n/a 0 287074 0 > {noformat} > Log: > {noformat} > INFO [RMI TCP Connection(58)-127.0.0.1] 2016-12-07 12:52:59,467 > StorageService.java:1170 - LEAVING: sleeping 3 ms for batch processing > and pending range setup > INFO [RMI TCP Connection(58)-127.0.0.1] 2016-12-07 12:53:39,455 > StorageService.java:1170 - LEAVING: replaying batch log and streaming data to > other nodes > INFO [RMI TCP Connection(58)-127.0.0.1] 2016-12-07 12:53:39,910 > StreamResultFuture.java:87 - [Stream #2cc874c0-bc7c-11e6-b0df-e7f1ecd3dcfb] > Executing streaming plan for Unbootstrap > INFO [StreamConnectionEstablisher:1] 2016-12-07 12:53:39,911 > StreamSession.java:239 - [Stream #2cc874c0-bc7c-11e6-b0df-e7f1ecd3dcfb] > Starting streaming to /10.10.10.17 > INFO [StreamConnectionEstablisher:2] 2016-12-07 12:53:39,911 > StreamSession.java:232 - [Stream #2cc874c0-bc7c-11e6-b0df-e7f1ecd3dcfb] > Session does not have any tasks. > INFO [StreamConnectionEstablisher:3] 2016-12-07 12:53:39,912 > StreamSession.java:232 - [Stream #2cc874c0-bc7c-11e6-b0df-e7f1ecd3dcfb] > Session does not have any tasks. > INFO [StreamConnectionEstablisher:4] 2016-12-07 12:53:39,912 > StreamSession.java:232 - [Stream #2cc874c0-bc7c-11e6-b0df-e7f1ecd3dcfb] > Session does not have