[jira] [Comment Edited] (CASSANDRA-13313) Compaction leftovers not removed on upgrade 2.1/2.2 -> 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-13313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939531#comment-15939531 ] Stefania edited comment on CASSANDRA-13313 at 3/24/17 1:26 AM: --- > My expectation was the only consistency issue is counters Counters created before 2.1, after 2.1 it is no longer a concern. > there are other resource issues (disk space, memory on reads if there are a > lot of tombstones, etc). Correct, during the 7066 discussions, I remember this was mentioned, that at worst it would be a resource issue and only affecting users who would attempt to upgrade without a clean shutdown. > I'm not sure how vital it is - I marked it minor and it's low on my queue, > but if consensus is it's truly a won't-fix I'm not sure I'll fight that. CASSANDRA-12212 was a won't fix mostly because the benefit was very low. However, if there is a patch, I am not at all opposed to it. was (Author: stefania): > My expectation was the only consistency issue is counters Counters created before 2.1, after 2.1 it is no longer a concern. > there are other resource issues (disk space, memory on reads if there are a > lot of tombstones, etc). Correct, during the 7066 discussions, I remember this was mentioned, that at worst it would be a resource issue and only affecting users who would attempt to upgrade without a clean shutdown. > I'm not sure how vital it is - I marked it minor and it's low on my queue, > but if consensus is it's truly a won't-fix I'm not sure I'll fight that. CASSANDRA-12212 was a won't fixed mostly because the benefit was very low. However, if there is a patch, I am not at all opposed to it. > Compaction leftovers not removed on upgrade 2.1/2.2 -> 3.0 > -- > > Key: CASSANDRA-13313 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13313 > Project: Cassandra > Issue Type: Bug > Components: Compaction >Reporter: Jeff Jirsa >Assignee: Jeff Jirsa >Priority: Minor > > Before 3.0 we used sstable ancestors to figure out if an sstable was left > over after a compaction. In 3.0 the ancestors are ignored and instead we use > LogTransaction files to figure it out. 3.0 should still clean up 2.1/2.2 > compaction leftovers using the on-disk sstable ancestors when available. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13313) Compaction leftovers not removed on upgrade 2.1/2.2 -> 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-13313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939531#comment-15939531 ] Stefania commented on CASSANDRA-13313: -- > My expectation was the only consistency issue is counters Only counters created before 2.1; after 2.1 it is no longer a concern. > there are other resource issues (disk space, memory on reads if there are a > lot of tombstones, etc). Correct; I remember this being mentioned during the CASSANDRA-7066 discussions: at worst it would be a resource issue, and it would only affect users who attempt to upgrade without a clean shutdown. > I'm not sure how vital it is - I marked it minor and it's low on my queue, > but if consensus is it's truly a won't-fix I'm not sure I'll fight that. CASSANDRA-12212 was a won't-fix mostly because the benefit was very low. However, if there is a patch, I am not at all opposed to it. > Compaction leftovers not removed on upgrade 2.1/2.2 -> 3.0 > -- > > Key: CASSANDRA-13313 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13313 > Project: Cassandra > Issue Type: Bug > Components: Compaction >Reporter: Jeff Jirsa >Assignee: Jeff Jirsa >Priority: Minor > > Before 3.0 we used sstable ancestors to figure out if an sstable was left > over after a compaction. In 3.0 the ancestors are ignored and instead we use > LogTransaction files to figure it out. 3.0 should still clean up 2.1/2.2 > compaction leftovers using the on-disk sstable ancestors when available. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13313) Compaction leftovers not removed on upgrade 2.1/2.2 -> 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-13313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939519#comment-15939519 ] Jeff Jirsa commented on CASSANDRA-13313: Oh, I hadn't seen 12212. I have a patch where it's implemented with {{compactions_in_progress}} and re-added (but deprecated) ancestors to metadata, but I hadn't finished the dtest. My expectation was that the only consistency issue is counters, but there are other resource issues (disk space, memory on reads if there are a lot of tombstones, etc). I'm not sure how vital it is - I marked it minor and it's low on my queue, but if consensus is it's truly a won't-fix I'm not sure I'll fight that. > Compaction leftovers not removed on upgrade 2.1/2.2 -> 3.0 > -- > > Key: CASSANDRA-13313 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13313 > Project: Cassandra > Issue Type: Bug > Components: Compaction >Reporter: Jeff Jirsa >Assignee: Jeff Jirsa >Priority: Minor > > Before 3.0 we used sstable ancestors to figure out if an sstable was left > over after a compaction. In 3.0 the ancestors are ignored and instead we use > LogTransaction files to figure it out. 3.0 should still clean up 2.1/2.2 > compaction leftovers using the on-disk sstable ancestors when available. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
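For readers following the ancestor-based approach discussed above: pre-3.0 sstable stats metadata records the generation numbers of the sstables each file was compacted from, so a startup scan can flag leftover candidates from that information. The sketch below only illustrates the idea; the generation and ancestor maps are assumed to have been read from sstable metadata elsewhere, and none of this is Jeff's patch or an actual Cassandra API.

{code}
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AncestorLeftoverScan
{
    /**
     * Collect every generation referenced as an ancestor by a live sstable, then
     * flag live sstables whose own generation appears in that set. Under pre-3.0
     * semantics a completed compaction deletes its ancestors, so any ancestor
     * still on disk (or the output that references it) is a leftover candidate.
     */
    public static Set<Path> findLeftoverCandidates(Map<Path, Integer> generationByFile,
                                                   Map<Path, Set<Integer>> ancestorsByFile)
    {
        Set<Integer> referencedAncestors = new HashSet<>();
        for (Set<Integer> ancestors : ancestorsByFile.values())
            referencedAncestors.addAll(ancestors);

        Set<Path> candidates = new HashSet<>();
        for (Map.Entry<Path, Integer> entry : generationByFile.entrySet())
            if (referencedAncestors.contains(entry.getValue()))
                candidates.add(entry.getKey());
        return candidates;
    }
}
{code}

Ancestors alone cannot always tell whether the unfinished step was writing the new sstable or deleting the old inputs, which is presumably why the patch mentioned above also consults {{compactions_in_progress}}.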
[jira] [Updated] (CASSANDRA-13374) Minor doc update: Replaced non-ASCII dash in command line
[ https://issues.apache.org/jira/browse/CASSANDRA-13374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jay Zhuang updated CASSANDRA-13374: --- Fix Version/s: 3.11.x Status: Patch Available (was: Open) > Minor doc update: Replaced non-ASCII dash in command line > - > > Key: CASSANDRA-13374 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13374 > Project: Cassandra > Issue Type: Bug >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Trivial > Fix For: 3.11.x > > Attachments: 13374-3.11.patch > > > Minor doc update to replace non-ascii code, for copy-paste. > Not sure if it's the right way to report it, or should I use GitHub PR? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13374) Minor doc update: Replaced non-ASCII dash in command line
[ https://issues.apache.org/jira/browse/CASSANDRA-13374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jay Zhuang updated CASSANDRA-13374: --- Attachment: 13374-3.11.patch > Minor doc update: Replaced non-ASCII dash in command line > - > > Key: CASSANDRA-13374 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13374 > Project: Cassandra > Issue Type: Bug >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Trivial > Attachments: 13374-3.11.patch > > > Minor doc update to replace non-ascii code, for copy-paste. > Not sure if it's the right way to report it, or should I use GitHub PR? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (CASSANDRA-13374) Minor doc update: Replaced non-ASCII dash in command line
Jay Zhuang created CASSANDRA-13374: -- Summary: Minor doc update: Replaced non-ASCII dash in command line Key: CASSANDRA-13374 URL: https://issues.apache.org/jira/browse/CASSANDRA-13374 Project: Cassandra Issue Type: Bug Reporter: Jay Zhuang Assignee: Jay Zhuang Priority: Trivial Minor doc update to replace non-ascii code, for copy-paste. Not sure if it's the right way to report it, or should I use GitHub PR? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
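The change itself is mechanical; for illustration, a one-off cleanup along these lines would do it. This is only a hedged sketch with a placeholder file path, not the attached 13374-3.11.patch:

{code}
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ReplaceNonAsciiDashes
{
    public static void main(String[] args) throws Exception
    {
        Path doc = Paths.get("doc/source/example.rst"); // placeholder path, not the file from the patch
        String text = new String(Files.readAllBytes(doc), StandardCharsets.UTF_8);
        // Replace en dashes and em dashes with a plain ASCII hyphen so command lines copy-paste cleanly.
        String cleaned = text.replace('\u2013', '-').replace('\u2014', '-');
        Files.write(doc, cleaned.getBytes(StandardCharsets.UTF_8));
    }
}
{code}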
[jira] [Commented] (CASSANDRA-13313) Compaction leftovers not removed on upgrade 2.1/2.2 -> 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-13313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939503#comment-15939503 ] Stefania commented on CASSANDRA-13313: -- There was a similar discussion in CASSANDRA-12212 - albeit focused on using {{compactions_in_progress}} rather than ancestors. Other than counters created before 2.1, are you aware of any other possible consistency issues? > Compaction leftovers not removed on upgrade 2.1/2.2 -> 3.0 > -- > > Key: CASSANDRA-13313 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13313 > Project: Cassandra > Issue Type: Bug > Components: Compaction >Reporter: Jeff Jirsa >Assignee: Jeff Jirsa >Priority: Minor > > Before 3.0 we used sstable ancestors to figure out if an sstable was left > over after a compaction. In 3.0 the ancestors are ignored and instead we use > LogTransaction files to figure it out. 3.0 should still clean up 2.1/2.2 > compaction leftovers using the on-disk sstable ancestors when available. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13373) Provide additional speculative retry statistics
[ https://issues.apache.org/jira/browse/CASSANDRA-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939432#comment-15939432 ] Ariel Weisberg commented on CASSANDRA-13373: ||Code|utests|dtests|| |[trunk|https://github.com/apache/cassandra/compare/trunk...aweisberg:cassandra-13373-trunk?expand=1]|[utests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13373-trunk-testall/1/]|[dtests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13373-trunk-dtest/1/]| > Provide additional speculative retry statistics > --- > > Key: CASSANDRA-13373 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13373 > Project: Cassandra > Issue Type: Improvement > Components: Observability >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg > Fix For: 4.x > > > Right now there is a single metric for speculative retry on reads that is the > number of speculative retries attempted. You can't tell how many of those > actually succeeded in salvaging the read. > The metric is also per table and there is no keyspace level rollup as there > is for several other metrics. > Add a metric that counts reads that attempt to speculate but fail to complete > before the timeout (ignoring read errors). > Add a rollup metric for the current count of speculation attempts as well as > the count of failed speculations. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Created] (CASSANDRA-13373) Provide additional speculative retry statistics
Ariel Weisberg created CASSANDRA-13373: -- Summary: Provide additional speculative retry statistics Key: CASSANDRA-13373 URL: https://issues.apache.org/jira/browse/CASSANDRA-13373 Project: Cassandra Issue Type: Improvement Components: Observability Reporter: Ariel Weisberg Assignee: Ariel Weisberg Fix For: 4.x Right now there is a single metric for speculative retry on reads that is the number of speculative retries attempted. You can't tell how many of those actually succeeded in salvaging the read. The metric is also per table and there is no keyspace level rollup as there is for several other metrics. Add a metric that counts reads that attempt to speculate but fail to complete before the timeout (ignoring read errors). Add a rollup metric for the current count of speculation attempts as well as the count of failed speculations. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
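As a rough illustration of the proposed counters (Cassandra's metrics are built on the Dropwizard/Codahale library), a per-table counter for failed speculations plus a keyspace-level rollup might look like the sketch below. The metric names and wiring are assumptions for illustration, not the linked patch.

{code}
import com.codahale.metrics.Counter;
import com.codahale.metrics.MetricRegistry;

public class SpeculativeRetryMetricsSketch
{
    private final Counter speculativeRetries;        // attempts, as exposed per table today
    private final Counter speculativeFailedRetries;  // proposed: speculated but still timed out
    private final Counter keyspaceFailedRetries;     // proposed: keyspace-level rollup

    public SpeculativeRetryMetricsSketch(MetricRegistry registry, String keyspace, String table)
    {
        speculativeRetries = registry.counter(MetricRegistry.name("Table", keyspace, table, "SpeculativeRetries"));
        speculativeFailedRetries = registry.counter(MetricRegistry.name("Table", keyspace, table, "SpeculativeFailedRetries"));
        keyspaceFailedRetries = registry.counter(MetricRegistry.name("Keyspace", keyspace, "SpeculativeFailedRetries"));
    }

    public void onSpeculation()
    {
        speculativeRetries.inc();
    }

    public void onSpeculationFailed()
    {
        // A read that speculated but still did not complete before the timeout.
        speculativeFailedRetries.inc();
        keyspaceFailedRetries.inc();
    }
}
{code}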
[jira] [Updated] (CASSANDRA-13324) Outbound TCP connections ignore internode authenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13324: --- Status: Patch Available (was: Open) > Outbound TCP connections ignore internode authenticator > --- > > Key: CASSANDRA-13324 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13324 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg > > When creating an outbound connection pool and connecting from within > an OutboundTcpConnection, it doesn't check whether the internode authenticator will > allow the connection. In practice this can cause a bunch of orphaned threads > perpetually attempting to reconnect to an endpoint that will never accept > the connection. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13324) Outbound TCP connections ignore internode authenticator
[ https://issues.apache.org/jira/browse/CASSANDRA-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939297#comment-15939297 ] Ariel Weisberg commented on CASSANDRA-13324: [~krummas] can you review the last set of changes? > Outbound TCP connections ignore internode authenticator > --- > > Key: CASSANDRA-13324 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13324 > Project: Cassandra > Issue Type: Bug > Components: Streaming and Messaging >Reporter: Ariel Weisberg >Assignee: Ariel Weisberg > > When creating an outbound connection pool and connecting from within > an OutboundTcpConnection, it doesn't check whether the internode authenticator will > allow the connection. In practice this can cause a bunch of orphaned threads > perpetually attempting to reconnect to an endpoint that will never accept > the connection. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
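For context, the fix under review presumably amounts to a pre-connect check against the configured internode authenticator, so that a rejected endpoint stops the retry loop instead of leaving a thread spinning. This is a minimal sketch with a stand-in authenticator interface, not Cassandra's actual {{IInternodeAuthenticator}} wiring or the patch itself:

{code}
import java.net.InetAddress;

public class OutboundConnectSketch
{
    /** Stand-in for the internode authenticator abstraction. */
    interface EndpointAuthenticator
    {
        boolean authenticate(InetAddress remote, int port);
    }

    private final EndpointAuthenticator authenticator;

    public OutboundConnectSketch(EndpointAuthenticator authenticator)
    {
        this.authenticator = authenticator;
    }

    /**
     * Bail out (no retries) when the authenticator rejects the endpoint;
     * otherwise fall through to the normal connect/retry logic.
     */
    public boolean tryConnect(InetAddress remote, int port)
    {
        if (!authenticator.authenticate(remote, port))
        {
            // Without this check, orphaned threads keep reconnecting to an
            // endpoint that will never accept the connection.
            return false;
        }
        return connectWithRetries(remote, port);
    }

    private boolean connectWithRetries(InetAddress remote, int port)
    {
        // Socket setup and retry loop elided in this sketch.
        return true;
    }
}
{code}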
[jira] [Updated] (CASSANDRA-13113) test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-13113: Reviewer: Sam Tunnicliffe Status: Open (was: Patch Available) > test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test > --- > > Key: CASSANDRA-13113 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13113 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Sean McCarthy >Assignee: Alex Petrov > Labels: dtest, test-failure > Attachments: node1_debug.log, node1_gc.log, node1.log, > node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, > node3.log > > > example failure: > http://cassci.datastax.com/job/trunk_dtest/1466/testReport/auth_test/TestAuth/system_auth_ks_is_alterable_test > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 358, in run > self.tearDown() > File "/home/automaton/cassandra-dtest/dtest.py", line 582, in tearDown > raise AssertionError('Unexpected error in log, see stdout') > {code}{code} > Standard Output > Unexpected error in node2 log, error: > ERROR [Native-Transport-Requests-1] 2017-01-08 21:10:55,056 Message.java:623 > - Unexpected exception during request; channel = [id: 0xf39c6dae, > L:/127.0.0.2:9042 - R:/127.0.0.1:43640] > java.lang.RuntimeException: > org.apache.cassandra.exceptions.UnavailableException: Cannot achieve > consistency level QUORUM > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:503) > ~[main/:na] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:310) > ~[main/:na] > at org.apache.cassandra.service.ClientState.login(ClientState.java:271) > ~[main/:na] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80) > ~[main/:na] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517) > [main/:na] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410) > [main/:na] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_45] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) > [main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > Caused by: org.apache.cassandra.exceptions.UnavailableException: Cannot > achieve consistency level QUORUM > at > org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334) > ~[main/:na] > at > org.apache.cassandra.service.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:162) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.(StorageProxy.java:1734) > ~[main/:na] > at > 
org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1696) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1642) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1557) > ~[main/:na] > at > org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:964) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:282) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:252) > ~[main/:na] > at > org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:511) > ~[main/:na] > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:493) > ~[main/:na] > ... 13 common frames omitted > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13113) test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-13113: Status: Patch Available (was: Open) > test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test > --- > > Key: CASSANDRA-13113 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13113 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Sean McCarthy >Assignee: Alex Petrov > Labels: dtest, test-failure > Attachments: node1_debug.log, node1_gc.log, node1.log, > node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, > node3.log > > > example failure: > http://cassci.datastax.com/job/trunk_dtest/1466/testReport/auth_test/TestAuth/system_auth_ks_is_alterable_test > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 358, in run > self.tearDown() > File "/home/automaton/cassandra-dtest/dtest.py", line 582, in tearDown > raise AssertionError('Unexpected error in log, see stdout') > {code}{code} > Standard Output > Unexpected error in node2 log, error: > ERROR [Native-Transport-Requests-1] 2017-01-08 21:10:55,056 Message.java:623 > - Unexpected exception during request; channel = [id: 0xf39c6dae, > L:/127.0.0.2:9042 - R:/127.0.0.1:43640] > java.lang.RuntimeException: > org.apache.cassandra.exceptions.UnavailableException: Cannot achieve > consistency level QUORUM > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:503) > ~[main/:na] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:310) > ~[main/:na] > at org.apache.cassandra.service.ClientState.login(ClientState.java:271) > ~[main/:na] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80) > ~[main/:na] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517) > [main/:na] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410) > [main/:na] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_45] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) > [main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > Caused by: org.apache.cassandra.exceptions.UnavailableException: Cannot > achieve consistency level QUORUM > at > org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334) > ~[main/:na] > at > org.apache.cassandra.service.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:162) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.(StorageProxy.java:1734) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1696) > ~[main/:na] > at > 
org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1642) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1557) > ~[main/:na] > at > org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:964) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:282) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:252) > ~[main/:na] > at > org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:511) > ~[main/:na] > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:493) > ~[main/:na] > ... 13 common frames omitted > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13113) test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939174#comment-15939174 ] Alex Petrov commented on CASSANDRA-13113: - As [~beobal] noted, we do not need wrapping exceptions at all now, and can get rid of {{ExecutionException}} in {{AuthCache#get}}. I've changed the patch according to his suggestions. > test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test > --- > > Key: CASSANDRA-13113 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13113 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Sean McCarthy >Assignee: Alex Petrov > Labels: dtest, test-failure > Attachments: node1_debug.log, node1_gc.log, node1.log, > node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, > node3.log > > > example failure: > http://cassci.datastax.com/job/trunk_dtest/1466/testReport/auth_test/TestAuth/system_auth_ks_is_alterable_test > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 358, in run > self.tearDown() > File "/home/automaton/cassandra-dtest/dtest.py", line 582, in tearDown > raise AssertionError('Unexpected error in log, see stdout') > {code}{code} > Standard Output > Unexpected error in node2 log, error: > ERROR [Native-Transport-Requests-1] 2017-01-08 21:10:55,056 Message.java:623 > - Unexpected exception during request; channel = [id: 0xf39c6dae, > L:/127.0.0.2:9042 - R:/127.0.0.1:43640] > java.lang.RuntimeException: > org.apache.cassandra.exceptions.UnavailableException: Cannot achieve > consistency level QUORUM > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:503) > ~[main/:na] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:310) > ~[main/:na] > at org.apache.cassandra.service.ClientState.login(ClientState.java:271) > ~[main/:na] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80) > ~[main/:na] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517) > [main/:na] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410) > [main/:na] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_45] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) > [main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > Caused by: org.apache.cassandra.exceptions.UnavailableException: Cannot > achieve consistency level QUORUM > at > org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334) > ~[main/:na] > at > org.apache.cassandra.service.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:162) > ~[main/:na] > at > 
org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.(StorageProxy.java:1734) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1696) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1642) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1557) > ~[main/:na] > at > org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:964) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:282) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:252) > ~[main/:na] > at > org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:511) > ~[main/:na] > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:493) > ~[main/:na] > ... 13 common frames omitted > {code} -- This m
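Roughly, the shape of that change: instead of letting the loading cache wrap the loader's failure (for example the {{UnavailableException}} in the log above) in an {{ExecutionException}} that every caller must unwrap, the cache accessor can rethrow the original runtime exception directly. A hedged sketch using a Guava {{LoadingCache}}, which may not match the actual {{AuthCache}} internals or the committed patch:

{code}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.UncheckedExecutionException;

public class UnwrappingCacheSketch<K, V>
{
    private final LoadingCache<K, V> cache;

    public UnwrappingCacheSketch(CacheLoader<K, V> loader)
    {
        this.cache = CacheBuilder.newBuilder().maximumSize(1000).build(loader);
    }

    public V get(K key)
    {
        try
        {
            return cache.getUnchecked(key);
        }
        catch (UncheckedExecutionException e)
        {
            // Surface the loader's own failure rather than the wrapped one,
            // so callers see e.g. the original UnavailableException.
            if (e.getCause() instanceof RuntimeException)
                throw (RuntimeException) e.getCause();
            throw e;
        }
    }
}
{code}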
[jira] [Assigned] (CASSANDRA-13229) dtest failure in topology_test.TestTopology.size_estimates_multidc_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov reassigned CASSANDRA-13229: --- Assignee: Alex Petrov > dtest failure in topology_test.TestTopology.size_estimates_multidc_test > --- > > Key: CASSANDRA-13229 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13229 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Sean McCarthy >Assignee: Alex Petrov > Labels: dtest, test-failure > Fix For: 4.0 > > Attachments: node1_debug.log, node1_gc.log, node1.log, > node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, > node3.log > > > example failure: > http://cassci.datastax.com/job/trunk_novnode_dtest/508/testReport/topology_test/TestTopology/size_estimates_multidc_test > {code} > Standard Output > Unexpected error in node1 log, error: > ERROR [MemtablePostFlush:1] 2017-02-15 16:07:33,837 CassandraDaemon.java:211 > - Exception in thread Thread[MemtablePostFlush:1,5,main] > java.lang.IndexOutOfBoundsException: Index: 3, Size: 3 > at java.util.ArrayList.rangeCheck(ArrayList.java:653) ~[na:1.8.0_45] > at java.util.ArrayList.get(ArrayList.java:429) ~[na:1.8.0_45] > at > org.apache.cassandra.dht.Splitter.splitOwnedRangesNoPartialRanges(Splitter.java:92) > ~[main/:na] > at org.apache.cassandra.dht.Splitter.splitOwnedRanges(Splitter.java:59) > ~[main/:na] > at > org.apache.cassandra.service.StorageService.getDiskBoundaries(StorageService.java:5180) > ~[main/:na] > at > org.apache.cassandra.db.Memtable.createFlushRunnables(Memtable.java:312) > ~[main/:na] > at org.apache.cassandra.db.Memtable.flushRunnables(Memtable.java:304) > ~[main/:na] > at > org.apache.cassandra.db.ColumnFamilyStore$Flush.flushMemtable(ColumnFamilyStore.java:1150) > ~[main/:na] > at > org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1115) > ~[main/:na] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > ~[na:1.8.0_45] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_45] > at > org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$290(NamedThreadFactory.java:81) > [main/:na] > at > org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$5/1321203216.run(Unknown > Source) [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > Unexpected error in node1 log, error: > ERROR [MigrationStage:1] 2017-02-15 16:07:33,853 CassandraDaemon.java:211 - > Exception in thread Thread[MigrationStage:1,5,main] > java.lang.RuntimeException: java.util.concurrent.ExecutionException: > java.lang.IndexOutOfBoundsException: Index: 3, Size: 3 > at > org.apache.cassandra.utils.FBUtilities.waitOnFuture(FBUtilities.java:401) > ~[main/:na] > at > org.apache.cassandra.schema.SchemaKeyspace.lambda$flush$496(SchemaKeyspace.java:284) > ~[main/:na] > at > org.apache.cassandra.schema.SchemaKeyspace$$Lambda$222/1949434065.accept(Unknown > Source) ~[na:na] > at java.lang.Iterable.forEach(Iterable.java:75) ~[na:1.8.0_45] > at > org.apache.cassandra.schema.SchemaKeyspace.flush(SchemaKeyspace.java:284) > ~[main/:na] > at > org.apache.cassandra.schema.SchemaKeyspace.applyChanges(SchemaKeyspace.java:1265) > ~[main/:na] > at org.apache.cassandra.schema.Schema.merge(Schema.java:577) ~[main/:na] > at > org.apache.cassandra.schema.Schema.mergeAndAnnounceVersion(Schema.java:564) > ~[main/:na] > at > org.apache.cassandra.schema.MigrationManager$1.runMayThrow(MigrationManager.java:402) > ~[main/:na] 
> at > org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) > ~[main/:na] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_45] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > ~[na:1.8.0_45] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > ~[na:1.8.0_45] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_45] > at > org.apache.cassandra.concurrent.NamedThreadFactory.lambda$threadLocalDeallocator$290(NamedThreadFactory.java:81) > [main/:na] > at > org.apache.cassandra.concurrent.NamedThreadFactory$$Lambda$5/1321203216.run(Unknown > Source) [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > Caused by: java.util.concurrent.ExecutionException: > java.lang.IndexOutOfBoundsException: Index: 3, Size: 3 > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > ~[na
[jira] [Updated] (CASSANDRA-12773) cassandra-stress error for one way SSL
[ https://issues.apache.org/jira/browse/CASSANDRA-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefan Podkowinski updated CASSANDRA-12773: --- Resolution: Fixed Fix Version/s: (was: 2.2.x) 4.0 3.11.0 3.0.13 2.2.10 Status: Resolved (was: Ready to Commit) Committed as 5978f9d5f719455ceb79d5f077cdd1b72b4e1876 Thanks! > cassandra-stress error for one way SSL > --- > > Key: CASSANDRA-12773 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12773 > Project: Cassandra > Issue Type: Bug > Components: Tools >Reporter: Jane Deng >Assignee: Stefan Podkowinski > Fix For: 2.2.10, 3.0.13, 3.11.0, 4.0 > > Attachments: 12773-2.2.patch > > > CASSANDRA-9325 added keystore/truststore configuration into cassandra-stress. > However, for one-way SSL (require_client_auth=false), there is no need to > pass keystore info into ssloptions. Cassandra-stress errored out: > {noformat} > java.lang.RuntimeException: java.io.IOException: Error creating the > initializing the SSL Context > at > org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:200) > > at > org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesNative(SettingsSchema.java:79) > > at > org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:69) > > at > org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:207) > > at org.apache.cassandra.stress.StressAction.run(StressAction.java:55) > at org.apache.cassandra.stress.Stress.main(Stress.java:117) > Caused by: java.io.IOException: Error creating the initializing the SSL > Context > at > org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:151) > > at > org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:128) > > at > org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:191) > > ... 5 more > Caused by: java.io.IOException: Keystore was tampered with, or password was > incorrect > at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:772) > at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:55) > at java.security.KeyStore.load(KeyStore.java:1445) > at > org.apache.cassandra.security.SSLFactory.createSSLContext(SSLFactory.java:129) > > ... 7 more > Caused by: java.security.UnrecoverableKeyException: Password verification > failed > at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:770) > ... 10 more > {noformat} > It's a bug from CASSANDRA-9325. When the keystore is absent, the keystore is > assigned to the path of the truststore, but the password isn't taken care of. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
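For context on why no keystore should be needed here: in one-way SSL the client only has to trust the server, which in plain JSSE terms means initializing the context with trust managers and no key managers, as in the generic sketch below (not cassandra-stress code; the path and password are placeholders). The committed fix, shown in the patches further down, instead copies the truststore password along with the truststore path whenever no keystore is configured.

{code}
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyStore;
import java.security.SecureRandom;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class OneWaySslContextSketch
{
    public static SSLContext build(String truststorePath, String truststorePassword) throws Exception
    {
        KeyStore truststore = KeyStore.getInstance("JKS");
        try (InputStream in = Files.newInputStream(Paths.get(truststorePath)))
        {
            truststore.load(in, truststorePassword.toCharArray());
        }

        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(truststore);

        SSLContext ctx = SSLContext.getInstance("TLS");
        // Null key managers: the client does not authenticate itself in one-way SSL.
        ctx.init(null, tmf.getTrustManagers(), new SecureRandom());
        return ctx;
    }
}
{code}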
[01/11] cassandra git commit: Bugs handling range tombstones in the sstable iterators
Repository: cassandra Updated Branches: refs/heads/cassandra-2.2 bf0906b92 -> 5978f9d5f refs/heads/cassandra-3.0 f53e502c3 -> 631162271 refs/heads/cassandra-3.11 82d3cdcd6 -> a10b8079e refs/heads/trunk 18c6ed25e -> 3048608c6 Bugs handling range tombstones in the sstable iterators patch by Sylvain Lebresne; reviewed by Branimir Lambov for CASSANDRA-13340 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f53e502c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f53e502c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f53e502c Branch: refs/heads/cassandra-3.11 Commit: f53e502c3c484481a296d9fdbff5fde4b709a9fc Parents: 2836a64 Author: Sylvain Lebresne Authored: Thu Mar 16 17:05:15 2017 +0100 Committer: Sylvain Lebresne Committed: Thu Mar 23 17:04:07 2017 +0100 -- CHANGES.txt | 1 + .../apache/cassandra/db/ClusteringPrefix.java | 2 +- .../cassandra/db/UnfilteredDeserializer.java| 1 - .../db/columniterator/SSTableIterator.java | 11 +- .../columniterator/SSTableReversedIterator.java | 124 +++ .../cql3/validation/operations/DeleteTest.java | 70 +++ 6 files changed, 180 insertions(+), 29 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f53e502c/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 9140c73..4ee5814 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.0.13 + * Bugs handling range tombstones in the sstable iterators (CASSANDRA-13340) * Fix CONTAINS filtering for null collections (CASSANDRA-13246) * Applying: Use a unique metric reservoir per test run when using Cassandra-wide metrics residing in MBeans (CASSANDRA-13216) * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320) http://git-wip-us.apache.org/repos/asf/cassandra/blob/f53e502c/src/java/org/apache/cassandra/db/ClusteringPrefix.java -- diff --git a/src/java/org/apache/cassandra/db/ClusteringPrefix.java b/src/java/org/apache/cassandra/db/ClusteringPrefix.java index 7f7f964..3b826c9 100644 --- a/src/java/org/apache/cassandra/db/ClusteringPrefix.java +++ b/src/java/org/apache/cassandra/db/ClusteringPrefix.java @@ -451,7 +451,7 @@ public interface ClusteringPrefix extends IMeasurableMemory, Clusterable } if (bound.size() == nextSize) -return nextKind.compareTo(bound.kind()); +return Kind.compare(nextKind, bound.kind()); // We know that we'll have exited already if nextSize < bound.size return -bound.kind().comparedToClustering; http://git-wip-us.apache.org/repos/asf/cassandra/blob/f53e502c/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java -- diff --git a/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java b/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java index 42a806a..7bbbfdb 100644 --- a/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java +++ b/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java @@ -694,6 +694,5 @@ public abstract class UnfilteredDeserializer } } } - } } http://git-wip-us.apache.org/repos/asf/cassandra/blob/f53e502c/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java -- diff --git a/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java b/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java index 0409310..9bcca48 100644 --- a/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java +++ b/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java @@ -123,7 +123,14 @@ public class SSTableIterator extends AbstractSSTableIterator { assert deserializer != 
null; -if (!deserializer.hasNext() || deserializer.compareNextTo(end) > 0) +// We use a same reasoning as in handlePreSliceData regarding the strictness of the inequality below. +// We want to exclude deserialized unfiltered equal to end, because 1) we won't miss any rows since those +// woudn't be equal to a slice bound and 2) a end bound can be equal to a start bound +// (EXCL_END(x) == INCL_START(x) for instance) and in that case we don't want to return start bound because +// it's fundamentally excluded. And if the bound is a end (for a range tombstone), it means it's exactly +// our slice end, but in that case we will properly close the range tombstone
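One of the changes above replaces {{nextKind.compareTo(bound.kind())}} with {{Kind.compare(nextKind, bound.kind())}}. As a general illustration of why that matters (the toy enum below is not Cassandra's {{Kind}}): {{Enum.compareTo}} orders by declaration order, while bound kinds need a domain-specific comparison in which, as the code comment above notes, EXCL_END(x) and INCL_START(x) sort together.

{code}
public class BoundKindCompareSketch
{
    /** Toy enum: declaration order deliberately differs from the intended sort order. */
    enum Kind
    {
        EXCL_END(-1), INCL_START(-1), ROW(0), INCL_END(1), EXCL_START(1);

        final int comparedToClustering; // where this bound sorts relative to its clustering values

        Kind(int comparedToClustering)
        {
            this.comparedToClustering = comparedToClustering;
        }

        static int compare(Kind a, Kind b)
        {
            return Integer.compare(a.comparedToClustering, b.comparedToClustering);
        }
    }

    public static void main(String[] args)
    {
        System.out.println(Kind.EXCL_END.compareTo(Kind.INCL_START)); // negative: ordinal order only
        System.out.println(Kind.compare(Kind.EXCL_END, Kind.INCL_START)); // 0: they sort together, as intended
    }
}
{code}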
[11/11] cassandra git commit: Merge branch 'cassandra-3.11' into trunk
Merge branch 'cassandra-3.11' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3048608c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3048608c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3048608c Branch: refs/heads/trunk Commit: 3048608c6099fd5c3bcc9bd72d3265307283bc41 Parents: 18c6ed2 a10b807 Author: Stefan Podkowinski Authored: Thu Mar 23 20:56:22 2017 +0100 Committer: Stefan Podkowinski Committed: Thu Mar 23 20:57:10 2017 +0100 -- CHANGES.txt | 1 + .../src/org/apache/cassandra/stress/settings/SettingsTransport.java | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3048608c/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3048608c/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java --
[04/11] cassandra git commit: Honor truststore-password parameter in stress
Honor truststore-password parameter in stress patch by Jane Deng; reviewed by Robert Stupp for CASSANDRA-12773 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5978f9d5 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5978f9d5 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5978f9d5 Branch: refs/heads/cassandra-3.11 Commit: 5978f9d5f719455ceb79d5f077cdd1b72b4e1876 Parents: bf0906b Author: Stefan Podkowinski Authored: Thu Mar 23 20:48:03 2017 +0100 Committer: Stefan Podkowinski Committed: Thu Mar 23 20:48:03 2017 +0100 -- CHANGES.txt | 1 + .../src/org/apache/cassandra/stress/settings/SettingsTransport.java | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5978f9d5/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index df2421d..a415395 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.2.10 + * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773) * Discard in-flight shadow round responses (CASSANDRA-12653) * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153) * Wrong logger name in AnticompactionTask (CASSANDRA-13343) http://git-wip-us.apache.org/repos/asf/cassandra/blob/5978f9d5/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java -- diff --git a/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java b/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java index b6d1d90..a253c07 100644 --- a/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java +++ b/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java @@ -115,6 +115,7 @@ public class SettingsTransport implements Serializable { // mandatory for SSLFactory.createSSLContext(), see CASSANDRA-9325 encOptions.keystore = encOptions.truststore; +encOptions.keystore_password = encOptions.truststore_password; } encOptions.algorithm = options.alg.value(); encOptions.protocol = options.protocol.value();
[09/11] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11
Merge branch 'cassandra-3.0' into cassandra-3.11 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a10b8079 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a10b8079 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a10b8079 Branch: refs/heads/trunk Commit: a10b8079ef713d2ee59fb4af27f65c148d68d900 Parents: 82d3cdc 6311622 Author: Stefan Podkowinski Authored: Thu Mar 23 20:51:17 2017 +0100 Committer: Stefan Podkowinski Committed: Thu Mar 23 20:52:05 2017 +0100 -- CHANGES.txt | 1 + .../src/org/apache/cassandra/stress/settings/SettingsTransport.java | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a10b8079/CHANGES.txt -- diff --cc CHANGES.txt index 6644796,2c5573a..8b13109 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -41,144 -51,6 +41,145 @@@ Merged from 3.0 live rows in sstabledump (CASSANDRA-13177) * Provide user workaround when system_schema.columns does not contain entries for a table that's in system_schema.tables (CASSANDRA-13180) +Merged from 2.2: ++ * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773) + * Discard in-flight shadow round responses (CASSANDRA-12653) + * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153) + * Wrong logger name in AnticompactionTask (CASSANDRA-13343) + * Commitlog replay may fail if last mutation is within 4 bytes of end of segment (CASSANDRA-13282) + * Fix queries updating multiple time the same list (CASSANDRA-13130) + * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053) + * Fix flaky LongLeveledCompactionStrategyTest (CASSANDRA-12202) + * Fix failing COPY TO STDOUT (CASSANDRA-12497) + * Fix ColumnCounter::countAll behaviour for reverse queries (CASSANDRA-13222) + * Exceptions encountered calling getSeeds() breaks OTC thread (CASSANDRA-13018) + * Fix negative mean latency metric (CASSANDRA-12876) + * Use only one file pointer when creating commitlog segments (CASSANDRA-12539) +Merged from 2.1: + * Remove unused repositories (CASSANDRA-13278) + * Log stacktrace of uncaught exceptions (CASSANDRA-13108) + * Use portable stderr for java error in startup (CASSANDRA-13211) + * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204) + * Coalescing strategy can enter infinite loop (CASSANDRA-13159) + + +3.10 + * Fix secondary index queries regression (CASSANDRA-13013) + * Add duration type to the protocol V5 (CASSANDRA-12850) + * Fix duration type validation (CASSANDRA-13143) + * Fix flaky GcCompactionTest (CASSANDRA-12664) + * Fix TestHintedHandoff.hintedhandoff_decom_test (CASSANDRA-13058) + * Fixed query monitoring for range queries (CASSANDRA-13050) + * Remove outboundBindAny configuration property (CASSANDRA-12673) + * Use correct bounds for all-data range when filtering (CASSANDRA-12666) + * Remove timing window in test case (CASSANDRA-12875) + * Resolve unit testing without JCE security libraries installed (CASSANDRA-12945) + * Fix inconsistencies in cassandra-stress load balancing policy (CASSANDRA-12919) + * Fix validation of non-frozen UDT cells (CASSANDRA-12916) + * Don't shut down socket input/output on StreamSession (CASSANDRA-12903) + * Fix Murmur3PartitionerTest (CASSANDRA-12858) + * Move cqlsh syntax rules into separate module and allow easier customization (CASSANDRA-12897) + * Fix CommitLogSegmentManagerTest (CASSANDRA-12283) + * Fix cassandra-stress truncate option (CASSANDRA-12695) + * Fix crossNode value when receiving messages 
(CASSANDRA-12791) + * Don't load MX4J beans twice (CASSANDRA-12869) + * Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum (CASSANDRA-12838) + * Set JOINING mode when running pre-join tasks (CASSANDRA-12836) + * remove net.mintern.primitive library due to license issue (CASSANDRA-12845) + * Properly format IPv6 addresses when logging JMX service URL (CASSANDRA-12454) + * Optimize the vnode allocation for single replica per DC (CASSANDRA-12777) + * Use non-token restrictions for bounds when token restrictions are overridden (CASSANDRA-12419) + * Fix CQLSH auto completion for PER PARTITION LIMIT (CASSANDRA-12803) + * Use different build directories for Eclipse and Ant (CASSANDRA-12466) + * Avoid potential AttributeError in cqlsh due to no table metadata (CASSANDRA-12815) + * Fix RandomReplicationAwareTokenAllocatorTest.testExistingCluster (CASSANDRA-12812) + * Upgrade commons-codec to 1.9 (CASSANDRA-12790) + * Make the fanout size for LeveledCompactionStrategy to be configurable (CASSANDRA-11550) + * Add duration data type (CASSANDRA-11873) + *
[05/11] cassandra git commit: Honor truststore-password parameter in stress
Honor truststore-password parameter in stress patch by Jane Deng; reviewed by Robert Stupp for CASSANDRA-12773 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5978f9d5 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5978f9d5 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5978f9d5 Branch: refs/heads/trunk Commit: 5978f9d5f719455ceb79d5f077cdd1b72b4e1876 Parents: bf0906b Author: Stefan Podkowinski Authored: Thu Mar 23 20:48:03 2017 +0100 Committer: Stefan Podkowinski Committed: Thu Mar 23 20:48:03 2017 +0100 -- CHANGES.txt | 1 + .../src/org/apache/cassandra/stress/settings/SettingsTransport.java | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5978f9d5/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index df2421d..a415395 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.2.10 + * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773) * Discard in-flight shadow round responses (CASSANDRA-12653) * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153) * Wrong logger name in AnticompactionTask (CASSANDRA-13343) http://git-wip-us.apache.org/repos/asf/cassandra/blob/5978f9d5/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java -- diff --git a/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java b/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java index b6d1d90..a253c07 100644 --- a/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java +++ b/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java @@ -115,6 +115,7 @@ public class SettingsTransport implements Serializable { // mandatory for SSLFactory.createSSLContext(), see CASSANDRA-9325 encOptions.keystore = encOptions.truststore; +encOptions.keystore_password = encOptions.truststore_password; } encOptions.algorithm = options.alg.value(); encOptions.protocol = options.protocol.value();
[02/11] cassandra git commit: Honor truststore-password parameter in stress
Honor truststore-password parameter in stress patch by Jane Deng; reviewed by Robert Stupp for CASSANDRA-12773 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5978f9d5 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5978f9d5 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5978f9d5 Branch: refs/heads/cassandra-2.2 Commit: 5978f9d5f719455ceb79d5f077cdd1b72b4e1876 Parents: bf0906b Author: Stefan Podkowinski Authored: Thu Mar 23 20:48:03 2017 +0100 Committer: Stefan Podkowinski Committed: Thu Mar 23 20:48:03 2017 +0100 -- CHANGES.txt | 1 + .../src/org/apache/cassandra/stress/settings/SettingsTransport.java | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5978f9d5/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index df2421d..a415395 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.2.10 + * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773) * Discard in-flight shadow round responses (CASSANDRA-12653) * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153) * Wrong logger name in AnticompactionTask (CASSANDRA-13343) http://git-wip-us.apache.org/repos/asf/cassandra/blob/5978f9d5/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java -- diff --git a/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java b/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java index b6d1d90..a253c07 100644 --- a/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java +++ b/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java @@ -115,6 +115,7 @@ public class SettingsTransport implements Serializable { // mandatory for SSLFactory.createSSLContext(), see CASSANDRA-9325 encOptions.keystore = encOptions.truststore; +encOptions.keystore_password = encOptions.truststore_password; } encOptions.algorithm = options.alg.value(); encOptions.protocol = options.protocol.value();
[07/11] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/63116227 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/63116227 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/63116227 Branch: refs/heads/trunk Commit: 631162271c9bbaca6b48dc4e2223dbba97bf51d4 Parents: f53e502 5978f9d Author: Stefan Podkowinski Authored: Thu Mar 23 20:49:05 2017 +0100 Committer: Stefan Podkowinski Committed: Thu Mar 23 20:50:13 2017 +0100 -- CHANGES.txt | 1 + .../src/org/apache/cassandra/stress/settings/SettingsTransport.java | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/63116227/CHANGES.txt -- diff --cc CHANGES.txt index 4ee5814,a415395..2c5573a --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,11 -1,5 +1,12 @@@ -2.2.10 +3.0.13 + * Bugs handling range tombstones in the sstable iterators (CASSANDRA-13340) + * Fix CONTAINS filtering for null collections (CASSANDRA-13246) + * Applying: Use a unique metric reservoir per test run when using Cassandra-wide metrics residing in MBeans (CASSANDRA-13216) + * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320) + * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305) + * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238) +Merged from 2.2: + * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773) * Discard in-flight shadow round responses (CASSANDRA-12653) * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153) * Wrong logger name in AnticompactionTask (CASSANDRA-13343)
[06/11] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/63116227 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/63116227 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/63116227 Branch: refs/heads/cassandra-3.11 Commit: 631162271c9bbaca6b48dc4e2223dbba97bf51d4 Parents: f53e502 5978f9d Author: Stefan Podkowinski Authored: Thu Mar 23 20:49:05 2017 +0100 Committer: Stefan Podkowinski Committed: Thu Mar 23 20:50:13 2017 +0100 -- CHANGES.txt | 1 + .../src/org/apache/cassandra/stress/settings/SettingsTransport.java | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/63116227/CHANGES.txt -- diff --cc CHANGES.txt index 4ee5814,a415395..2c5573a --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,11 -1,5 +1,12 @@@ -2.2.10 +3.0.13 + * Bugs handling range tombstones in the sstable iterators (CASSANDRA-13340) + * Fix CONTAINS filtering for null collections (CASSANDRA-13246) + * Applying: Use a unique metric reservoir per test run when using Cassandra-wide metrics residing in MBeans (CASSANDRA-13216) + * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320) + * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305) + * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238) +Merged from 2.2: + * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773) * Discard in-flight shadow round responses (CASSANDRA-12653) * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153) * Wrong logger name in AnticompactionTask (CASSANDRA-13343)
[03/11] cassandra git commit: Honor truststore-password parameter in stress
Honor truststore-password parameter in stress patch by Jane Deng; reviewed by Robert Stupp for CASSANDRA-12773 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5978f9d5 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5978f9d5 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5978f9d5 Branch: refs/heads/cassandra-3.0 Commit: 5978f9d5f719455ceb79d5f077cdd1b72b4e1876 Parents: bf0906b Author: Stefan Podkowinski Authored: Thu Mar 23 20:48:03 2017 +0100 Committer: Stefan Podkowinski Committed: Thu Mar 23 20:48:03 2017 +0100 -- CHANGES.txt | 1 + .../src/org/apache/cassandra/stress/settings/SettingsTransport.java | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5978f9d5/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index df2421d..a415395 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.2.10 + * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773) * Discard in-flight shadow round responses (CASSANDRA-12653) * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153) * Wrong logger name in AnticompactionTask (CASSANDRA-13343) http://git-wip-us.apache.org/repos/asf/cassandra/blob/5978f9d5/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java -- diff --git a/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java b/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java index b6d1d90..a253c07 100644 --- a/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java +++ b/tools/stress/src/org/apache/cassandra/stress/settings/SettingsTransport.java @@ -115,6 +115,7 @@ public class SettingsTransport implements Serializable { // mandatory for SSLFactory.createSSLContext(), see CASSANDRA-9325 encOptions.keystore = encOptions.truststore; +encOptions.keystore_password = encOptions.truststore_password; } encOptions.algorithm = options.alg.value(); encOptions.protocol = options.protocol.value();
[10/11] cassandra git commit: Merge branch 'cassandra-3.0' into cassandra-3.11
Merge branch 'cassandra-3.0' into cassandra-3.11 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a10b8079 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a10b8079 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a10b8079 Branch: refs/heads/cassandra-3.11 Commit: a10b8079ef713d2ee59fb4af27f65c148d68d900 Parents: 82d3cdc 6311622 Author: Stefan Podkowinski Authored: Thu Mar 23 20:51:17 2017 +0100 Committer: Stefan Podkowinski Committed: Thu Mar 23 20:52:05 2017 +0100 -- CHANGES.txt | 1 + .../src/org/apache/cassandra/stress/settings/SettingsTransport.java | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a10b8079/CHANGES.txt -- diff --cc CHANGES.txt index 6644796,2c5573a..8b13109 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -41,144 -51,6 +41,145 @@@ Merged from 3.0 live rows in sstabledump (CASSANDRA-13177) * Provide user workaround when system_schema.columns does not contain entries for a table that's in system_schema.tables (CASSANDRA-13180) +Merged from 2.2: ++ * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773) + * Discard in-flight shadow round responses (CASSANDRA-12653) + * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153) + * Wrong logger name in AnticompactionTask (CASSANDRA-13343) + * Commitlog replay may fail if last mutation is within 4 bytes of end of segment (CASSANDRA-13282) + * Fix queries updating multiple time the same list (CASSANDRA-13130) + * Fix GRANT/REVOKE when keyspace isn't specified (CASSANDRA-13053) + * Fix flaky LongLeveledCompactionStrategyTest (CASSANDRA-12202) + * Fix failing COPY TO STDOUT (CASSANDRA-12497) + * Fix ColumnCounter::countAll behaviour for reverse queries (CASSANDRA-13222) + * Exceptions encountered calling getSeeds() breaks OTC thread (CASSANDRA-13018) + * Fix negative mean latency metric (CASSANDRA-12876) + * Use only one file pointer when creating commitlog segments (CASSANDRA-12539) +Merged from 2.1: + * Remove unused repositories (CASSANDRA-13278) + * Log stacktrace of uncaught exceptions (CASSANDRA-13108) + * Use portable stderr for java error in startup (CASSANDRA-13211) + * Fix Thread Leak in OutboundTcpConnection (CASSANDRA-13204) + * Coalescing strategy can enter infinite loop (CASSANDRA-13159) + + +3.10 + * Fix secondary index queries regression (CASSANDRA-13013) + * Add duration type to the protocol V5 (CASSANDRA-12850) + * Fix duration type validation (CASSANDRA-13143) + * Fix flaky GcCompactionTest (CASSANDRA-12664) + * Fix TestHintedHandoff.hintedhandoff_decom_test (CASSANDRA-13058) + * Fixed query monitoring for range queries (CASSANDRA-13050) + * Remove outboundBindAny configuration property (CASSANDRA-12673) + * Use correct bounds for all-data range when filtering (CASSANDRA-12666) + * Remove timing window in test case (CASSANDRA-12875) + * Resolve unit testing without JCE security libraries installed (CASSANDRA-12945) + * Fix inconsistencies in cassandra-stress load balancing policy (CASSANDRA-12919) + * Fix validation of non-frozen UDT cells (CASSANDRA-12916) + * Don't shut down socket input/output on StreamSession (CASSANDRA-12903) + * Fix Murmur3PartitionerTest (CASSANDRA-12858) + * Move cqlsh syntax rules into separate module and allow easier customization (CASSANDRA-12897) + * Fix CommitLogSegmentManagerTest (CASSANDRA-12283) + * Fix cassandra-stress truncate option (CASSANDRA-12695) + * Fix crossNode value when receiving messages 
(CASSANDRA-12791) + * Don't load MX4J beans twice (CASSANDRA-12869) + * Extend native protocol request flags, add versions to SUPPORTED, and introduce ProtocolVersion enum (CASSANDRA-12838) + * Set JOINING mode when running pre-join tasks (CASSANDRA-12836) + * remove net.mintern.primitive library due to license issue (CASSANDRA-12845) + * Properly format IPv6 addresses when logging JMX service URL (CASSANDRA-12454) + * Optimize the vnode allocation for single replica per DC (CASSANDRA-12777) + * Use non-token restrictions for bounds when token restrictions are overridden (CASSANDRA-12419) + * Fix CQLSH auto completion for PER PARTITION LIMIT (CASSANDRA-12803) + * Use different build directories for Eclipse and Ant (CASSANDRA-12466) + * Avoid potential AttributeError in cqlsh due to no table metadata (CASSANDRA-12815) + * Fix RandomReplicationAwareTokenAllocatorTest.testExistingCluster (CASSANDRA-12812) + * Upgrade commons-codec to 1.9 (CASSANDRA-12790) + * Make the fanout size for LeveledCompactionStrategy to be configurable (CASSANDRA-11550) + * Add duration data type (CASSANDRA-11
[08/11] cassandra git commit: Merge branch 'cassandra-2.2' into cassandra-3.0
Merge branch 'cassandra-2.2' into cassandra-3.0 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/63116227 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/63116227 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/63116227 Branch: refs/heads/cassandra-3.0 Commit: 631162271c9bbaca6b48dc4e2223dbba97bf51d4 Parents: f53e502 5978f9d Author: Stefan Podkowinski Authored: Thu Mar 23 20:49:05 2017 +0100 Committer: Stefan Podkowinski Committed: Thu Mar 23 20:50:13 2017 +0100 -- CHANGES.txt | 1 + .../src/org/apache/cassandra/stress/settings/SettingsTransport.java | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/63116227/CHANGES.txt -- diff --cc CHANGES.txt index 4ee5814,a415395..2c5573a --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,11 -1,5 +1,12 @@@ -2.2.10 +3.0.13 + * Bugs handling range tombstones in the sstable iterators (CASSANDRA-13340) + * Fix CONTAINS filtering for null collections (CASSANDRA-13246) + * Applying: Use a unique metric reservoir per test run when using Cassandra-wide metrics residing in MBeans (CASSANDRA-13216) + * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320) + * Slice.isEmpty() returns false for some empty slices (CASSANDRA-13305) + * Add formatted row output to assertEmpty in CQL Tester (CASSANDRA-13238) +Merged from 2.2: + * Honor truststore-password parameter in cassandra-stress (CASSANDRA-12773) * Discard in-flight shadow round responses (CASSANDRA-12653) * Don't anti-compact repaired data to avoid inconsistencies (CASSANDRA-13153) * Wrong logger name in AnticompactionTask (CASSANDRA-13343)
[jira] [Commented] (CASSANDRA-13354) LCS estimated compaction tasks does not take number of files into account
[ https://issues.apache.org/jira/browse/CASSANDRA-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939107#comment-15939107 ] Jan Karlsson commented on CASSANDRA-13354: -- I did some tests simulating traffic on a 4-node cluster. Two of the nodes were running with my patch while the other two ran without it. Steps to reproduce: traffic on; turn one of the nodes off; wait 7 minutes; truncate hints on all other nodes; turn the node back on; run repair on the node. As you can see in the attached images, the unpatched version kept increasing as non-repaired data from ongoing traffic was prioritized. If I had more discrepancies in my data set, this would just increase to the configured FD limit or until the node dies from heap pressure. Repair completed at 8:11pm, but those small repaired files were not compacted, as the strategy picks new unrepaired sstables over the small repaired ones. However, it did show a downward trend, as compaction was slightly faster than insertion and would probably have ended with the repaired files compacted eventually. During the unpatched test, it showed only 2 pending compactions, with ~22k file descriptors open and ~10k sstables. At 8:33pm I disabled the traffic completely to hurry this along. SSTables in each level: [10347/4, 5, 0, 0, 0, 0, 0, 0, 0] > LCS estimated compaction tasks does not take number of files into account > - > > Key: CASSANDRA-13354 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13354 > Project: Cassandra > Issue Type: Bug > Components: Compaction > Environment: Cassandra 2.2.9 >Reporter: Jan Karlsson >Assignee: Jan Karlsson > Attachments: 13354-trunk.txt, patchedTest.png, unpatchedTest.png > > > In LCS, the way we estimate the number of compaction tasks remaining for L0 is by > taking the size of an SSTable and multiplying it by four. This would give 4*160mb > with default settings. This calculation is used to determine whether repaired > or unrepaired data is being compacted. > Now this works well until you take repair into account. Repair streams over > many, many sstables, which could be smaller than the configured SSTable size > depending on your use case. In our case we are talking about many thousands > of tiny SSTables. As the number of files increases, one can run into any number of > problems, including GC issues, too many open files, or a plain increase in read > latency. > With the current algorithm we will choose repaired or unrepaired depending on > whichever side has more data in it, even if the repaired files outnumber the > unrepaired files by a large margin. > Similarly, our algorithm that selects compaction candidates takes up to 32 > SSTables at a time in L0; however, our estimated task calculation does not > take this number into account. These two mechanisms should be aligned with > each other. > I propose that we take the number of files in L0 into account when estimating > remaining tasks. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
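A minimal, self-contained sketch of the kind of change being proposed here; it is not the attached 13354-trunk.txt patch, the class and method names are hypothetical, and the production calculation lives in Cassandra's LeveledManifest. The idea is simply to take the larger of the existing size-based estimate and a file-count-based estimate, so that thousands of tiny repair-streamed sstables still register as pending L0 work:
{code}
// Hypothetical helper illustrating the proposal: estimate pending L0 compaction tasks
// from both the total bytes in L0 and the number of L0 files. MAX_COMPACTING_L0 mirrors
// the "up to 32 SSTables at a time" limit mentioned in the description.
public final class L0TaskEstimator
{
    private static final int MAX_COMPACTING_L0 = 32;

    static int estimateTasks(long totalL0Bytes, int l0FileCount, long maxSSTableSizeBytes)
    {
        // Existing size-based estimate: total L0 bytes over four times the target sstable size.
        long bySize = (long) Math.ceil((double) totalL0Bytes / (4L * maxSSTableSizeBytes));
        // Proposed file-count-based estimate: each candidate compaction consumes at most 32 sstables.
        long byCount = (long) Math.ceil((double) l0FileCount / MAX_COMPACTING_L0);
        return (int) Math.max(bySize, byCount);
    }

    public static void main(String[] args)
    {
        // 10,347 tiny sstables totalling ~1 GiB with a 160 MiB target size: the size-based
        // estimate reports 2 pending tasks, the count-based estimate reports 324.
        System.out.println(estimateTasks(1L << 30, 10347, 160L << 20)); // prints 324
    }
}
{code}
With numbers like the ones reported above (a 10k-file L0 but only 2 pending compactions), the file-count term is what would keep the repaired-vs-unrepaired choice from starving the many small repaired sstables.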
[jira] [Updated] (CASSANDRA-13354) LCS estimated compaction tasks does not take number of files into account
[ https://issues.apache.org/jira/browse/CASSANDRA-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Karlsson updated CASSANDRA-13354: - Attachment: patchedTest.png > LCS estimated compaction tasks does not take number of files into account > - > > Key: CASSANDRA-13354 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13354 > Project: Cassandra > Issue Type: Bug > Components: Compaction > Environment: Cassandra 2.2.9 >Reporter: Jan Karlsson >Assignee: Jan Karlsson > Attachments: 13354-trunk.txt, patchedTest.png, unpatchedTest.png > > > In LCS, the way we estimate the number of compaction tasks remaining for L0 is by > taking the size of an SSTable and multiplying it by four. This would give 4*160mb > with default settings. This calculation is used to determine whether repaired > or unrepaired data is being compacted. > Now this works well until you take repair into account. Repair streams over > many, many sstables, which could be smaller than the configured SSTable size > depending on your use case. In our case we are talking about many thousands > of tiny SSTables. As the number of files increases, one can run into any number of > problems, including GC issues, too many open files, or a plain increase in read > latency. > With the current algorithm we will choose repaired or unrepaired depending on > whichever side has more data in it, even if the repaired files outnumber the > unrepaired files by a large margin. > Similarly, our algorithm that selects compaction candidates takes up to 32 > SSTables at a time in L0; however, our estimated task calculation does not > take this number into account. These two mechanisms should be aligned with > each other. > I propose that we take the number of files in L0 into account when estimating > remaining tasks. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13354) LCS estimated compaction tasks does not take number of files into account
[ https://issues.apache.org/jira/browse/CASSANDRA-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Karlsson updated CASSANDRA-13354: - Attachment: unpatchedTest.png > LCS estimated compaction tasks does not take number of files into account > - > > Key: CASSANDRA-13354 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13354 > Project: Cassandra > Issue Type: Bug > Components: Compaction > Environment: Cassandra 2.2.9 >Reporter: Jan Karlsson >Assignee: Jan Karlsson > Attachments: 13354-trunk.txt, patchedTest.png, unpatchedTest.png > > > In LCS, the way we estimate the number of compaction tasks remaining for L0 is by > taking the size of an SSTable and multiplying it by four. This would give 4*160mb > with default settings. This calculation is used to determine whether repaired > or unrepaired data is being compacted. > Now this works well until you take repair into account. Repair streams over > many, many sstables, which could be smaller than the configured SSTable size > depending on your use case. In our case we are talking about many thousands > of tiny SSTables. As the number of files increases, one can run into any number of > problems, including GC issues, too many open files, or a plain increase in read > latency. > With the current algorithm we will choose repaired or unrepaired depending on > whichever side has more data in it, even if the repaired files outnumber the > unrepaired files by a large margin. > Similarly, our algorithm that selects compaction candidates takes up to 32 > SSTables at a time in L0; however, our estimated task calculation does not > take this number into account. These two mechanisms should be aligned with > each other. > I propose that we take the number of files in L0 into account when estimating > remaining tasks. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13368) Exception Stack not Printed as Intended in Error Logs
[ https://issues.apache.org/jira/browse/CASSANDRA-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939070#comment-15939070 ] William R. Speirs commented on CASSANDRA-13368: --- Digging deeper into this, the code for SLF4J seems to back-up the claim that it looks for a "throwable candidate": {noformat} static final Throwable getThrowableCandidate(Object[] argArray) { if (argArray == null || argArray.length == 0) { return null; } final Object lastEntry = argArray[argArray.length - 1]; if (lastEntry instanceof Throwable) { return (Throwable) lastEntry; } return null; } {noformat} I'm left to conclude that Java does not believe {{e instanceof Throwable}} is true, yet {{public void onError(Throwable e)}} is called in Cassandra. Any thoughts? > Exception Stack not Printed as Intended in Error Logs > - > > Key: CASSANDRA-13368 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13368 > Project: Cassandra > Issue Type: Bug >Reporter: William R. Speirs >Priority: Trivial > Labels: lhf > Fix For: 2.1.x > > Attachments: cassandra-13368-2.1.patch > > > There are a number of instances where it appears the programmer intended to > print a stack trace in an error message, but it is not actually being > printed. For example, in {{BlacklistedDirectories.java:54}}: > {noformat} > catch (Exception e) > { > JVMStabilityInspector.inspectThrowable(e); > logger.error("error registering MBean {}", MBEAN_NAME, e); > //Allow the server to start even if the bean can't be registered > } > {noformat} > The logger will use the second argument for the braces, but will ignore the > exception {{e}}. It would be helpful to have the stack traces of these > exceptions printed. I propose adding a second line that prints the full stack > trace: {{logger.error(e.getMessage(), e);}} > On the 2.1 branch, I found 8 instances of these types of messages: > {noformat} > db/BlacklistedDirectories.java:54:logger.error("error registering > MBean {}", MBEAN_NAME, e); > io/sstable/SSTableReader.java:512:logger.error("Corrupt sstable > {}; skipped", descriptor, e); > net/OutboundTcpConnection.java:228:logger.error("error > processing a message intended for {}", poolReference.endPoint(), e); > net/OutboundTcpConnection.java:314:logger.error("error > writing to {}", poolReference.endPoint(), e); > service/CassandraDaemon.java:231:logger.error("Exception in > thread {}", t, e); > service/CassandraDaemon.java:562:logger.error("error > registering MBean {}", MBEAN_NAME, e); > streaming/StreamSession.java:512:logger.error("[Stream #{}] > Streaming error occurred", planId(), e); > transport/Server.java:442:logger.error("Problem retrieving > RPC address for {}", endpoint, e); > {noformat} > And one where it'll print the {{toString()}} version of the exception: > {noformat} > db/Directories.java:689:logger.error("Could not calculate the > size of {}. {}", input, e); > {noformat} > I'm happy to create a patch for each branch, just need a little guidance on > how to do so. We're currently running 2.1 so I started there. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
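To make the two call shapes under discussion concrete, here is a small self-contained SLF4J example; it is not Cassandra code, and the logger and MBean name are arbitrary. The first call is the pattern flagged in the description (one placeholder, two trailing arguments), where the getThrowableCandidate() logic quoted above decides whether the last argument is treated as the exception; the second is the explicit error(String, Throwable) form the description proposes adding as a fallback:
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ThrowableCandidateDemo
{
    private static final Logger logger = LoggerFactory.getLogger(ThrowableCandidateDemo.class);

    public static void main(String[] args)
    {
        Exception e = new IllegalStateException("boom");

        // Shape used in the listed call sites: one '{}' placeholder, two arguments.
        // getThrowableCandidate() inspects the *last* argument, so whether the stack
        // trace is printed depends on the SLF4J version and binding on the classpath.
        logger.error("error registering MBean {}", "org.apache.cassandra.db:type=Example", e);

        // Shape proposed in the description as an unambiguous addition: the throwable is
        // passed as the dedicated last parameter of error(String message, Throwable t).
        logger.error(e.getMessage(), e);
    }
}
{code}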
[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939043#comment-15939043 ] Stefan Podkowinski commented on CASSANDRA-13370: The initial stacktrace from the description shows that NativePRNG tries to write the seed into /dev/urandom. Any values written there will not reset the seed system-wide (which would be funny), but simply provide a bit of additional entropy. SHA1PRNG seems to work differently though, but the [javadoc|https://docs.oracle.com/javase/8/docs/api/java/security/SecureRandom.html#setSeed-byte:A-] isn't very clear either about what exactly to expect. > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Fix For: 3.11.x, 4.x > > Attachments: 13370-trunk.txt, 13370-trunk-update.txt > > > Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest failed: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest to revert the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
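The seeding behaviour described above can be checked with a small standalone program, assuming the stock SUN provider: two SHA1PRNG instances given the same seed before first use produce identical output, whereas seeding the platform default (NativePRNG on Linux and macOS) only mixes additional entropy in, and on macOS that setSeed() is exactly the call that fails in the stack trace above:
{code}
import java.nio.charset.StandardCharsets;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Arrays;

public class SeedBehaviourCheck
{
    public static void main(String[] args) throws NoSuchAlgorithmException
    {
        byte[] seed = "fixed-seed".getBytes(StandardCharsets.UTF_8);
        byte[] bytesA = new byte[8];
        byte[] bytesB = new byte[8];

        // Seeding SHA1PRNG *before* the first nextBytes() fully determines its state.
        SecureRandom a = SecureRandom.getInstance("SHA1PRNG");
        SecureRandom b = SecureRandom.getInstance("SHA1PRNG");
        a.setSeed(seed);
        b.setSeed(seed);
        a.nextBytes(bytesA);
        b.nextBytes(bytesB);
        System.out.println("SHA1PRNG deterministic: " + Arrays.equals(bytesA, bytesB)); // true

        // The platform default only mixes the seed into its existing entropy; on macOS this
        // setSeed() throws the ProviderException/IOException shown in the description.
        SecureRandom c = new SecureRandom();
        SecureRandom d = new SecureRandom();
        c.setSeed(seed);
        d.setSeed(seed);
        c.nextBytes(bytesA);
        d.nextBytes(bytesB);
        System.out.println("default PRNG deterministic: " + Arrays.equals(bytesA, bytesB)); // effectively always false
    }
}
{code}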
[jira] [Comment Edited] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938769#comment-15938769 ] Ariel Weisberg edited comment on CASSANDRA-13370 at 3/23/17 6:47 PM: - Sorry to keep changing my mind. Still digesting the fact that we can fix just this one test and keep using /dev/urandom. I checked and we don't use seeding of SecureRandom outside of this test. So I propose going with your original solution of using SHA1PRNG, seeding it the way the test does so that the test is deterministic as originally intended, and not changing anything in build.xml. ||Code|utests|| |[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...aweisberg:cassandra-13370-3.11?expand=1]|[utests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13370-3.11-testall/2/]| was (Author: aweisberg): Sorry to keep changing my mind. Still digesting the fact that we can fix just this one test and keep using /dev/urandom. I checked and we don't use seeding of SecureRandom outside of this test. So I propose going with your original solution of using SHA1PRNG, seeding it the way the test does so that the test is deterministic as originally intended, and not changing anything in build.xml. ||Code|utests|| |[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...aweisberg:cassandra-13370-3.11?expand=1]|[utests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13370-3.11-testall/1/]| > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Fix For: 3.11.x, 4.x > > Attachments: 13370-trunk.txt, 13370-trunk-update.txt > > > Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest failed: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. 
> {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest to revert the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
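In rough outline the proposal reads like the following JUnit 4 setup sketch; it is not the linked cassandra-13370-3.11 branch, and the class name, field name and seed value are made up for illustration. Requesting SHA1PRNG explicitly and seeding it before first use keeps the test deterministic and keeps it away from the platform PRNG (and therefore /dev/urandom), with no build.xml change:
{code}
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import org.junit.Before;

public class DeterministicSecureRandomTest
{
    // Hypothetical field: the test would use this instead of 'new SecureRandom()'.
    protected SecureRandom secureRandom;

    @Before
    public void setupSecureRandom() throws NoSuchAlgorithmException
    {
        secureRandom = SecureRandom.getInstance("SHA1PRNG");
        // Seeding before first use makes every run byte-for-byte reproducible.
        secureRandom.setSeed(0x0123456789abcdefL);
    }
}
{code}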
[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15939000#comment-15939000 ] Ariel Weisberg commented on CASSANDRA-13370: Well it's still news to me that NativePRNG does mixing. Thanks for bringing it up. I guess the test author didn't rely on it being deterministic. The right thing to do anyways is to generate a random seed and log it. So you get fuzzing but you can still reproduce a failure. I amended my original commit. > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Fix For: 3.11.x, 4.x > > Attachments: 13370-trunk.txt, 13370-trunk-update.txt > > > Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest failed: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest to revert the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
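The "generate a random seed and log it" approach reads roughly like the sketch below; it is not the amended commit itself, and the class name and system property are hypothetical. Each run still fuzzes with a fresh seed, but a failure can be replayed by feeding the logged value back in:
{code}
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class LoggedSeedRandom
{
    private static final Logger logger = LoggerFactory.getLogger(LoggedSeedRandom.class);

    public static SecureRandom create() throws NoSuchAlgorithmException
    {
        // Allow a replay via -Dtest.seed=<value>; otherwise pick a fresh seed per run.
        long seed = Long.getLong("test.seed", System.nanoTime());
        logger.info("SecureRandom seed for this run: {}", seed);

        SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
        random.setSeed(seed); // deterministic for a given seed, per the discussion above
        return random;
    }
}
{code}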
[jira] [Comment Edited] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938952#comment-15938952 ] Stefan Podkowinski edited comment on CASSANDRA-13370 at 3/23/17 6:30 PM: - -I don't think seeding SecureRandom does what you think it does, Ariel. The provided seed will just get "mixed" with the current RNG seed. This is different from e.g. seeding java.util.Random and will not make the test deterministic.- (does only seem to apply to the native PRNG, just tested with SHA1PRNG and it worked as described by you, my mistake here) was (Author: spo...@gmail.com): I don't think seeding SecureRandom does what you think it does, Ariel. The provided seed will just get "mixed" with the current RNG seed. This is different from e.g. seeding java.util.Random and will not make the test deterministic. > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Fix For: 3.11.x, 4.x > > Attachments: 13370-trunk.txt, 13370-trunk-update.txt > > > Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest failed: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest to revert the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938952#comment-15938952 ] Stefan Podkowinski commented on CASSANDRA-13370: I don't think seeding SecureRandom does what you think it does, Ariel. The provided seed will just get "mixed" with the current RNG seed. This is different from e.g. seeding java.util.Random and will not make the test deterministic. > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Fix For: 3.11.x, 4.x > > Attachments: 13370-trunk.txt, 13370-trunk-update.txt > > > Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest failed: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest to revert the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
cassandra-builds git commit: Skip cassandra6, 7 slaves until name resolution is fixed
Repository: cassandra-builds Updated Branches: refs/heads/master 160eecc93 -> 680bf928f Skip cassandra6,7 slaves until name resolution is fixed Project: http://git-wip-us.apache.org/repos/asf/cassandra-builds/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra-builds/commit/680bf928 Tree: http://git-wip-us.apache.org/repos/asf/cassandra-builds/tree/680bf928 Diff: http://git-wip-us.apache.org/repos/asf/cassandra-builds/diff/680bf928 Branch: refs/heads/master Commit: 680bf928f46f0fd5d4943d3c1afc524bf56b4734 Parents: 160eecc Author: Michael Shuler Authored: Thu Mar 23 13:10:13 2017 -0500 Committer: Michael Shuler Committed: Thu Mar 23 13:10:13 2017 -0500 -- jenkins-dsl/cassandra_job_dsl_seed.groovy | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra-builds/blob/680bf928/jenkins-dsl/cassandra_job_dsl_seed.groovy -- diff --git a/jenkins-dsl/cassandra_job_dsl_seed.groovy b/jenkins-dsl/cassandra_job_dsl_seed.groovy index 1ca7108..6e9765e 100644 --- a/jenkins-dsl/cassandra_job_dsl_seed.groovy +++ b/jenkins-dsl/cassandra_job_dsl_seed.groovy @@ -6,7 +6,7 @@ def jobDescription = 'Apache Cassandra DSL-generated job - DSL git repo: https://git-wip-us.apache.org/repos/asf?p=cassandra-builds.git";>cassandra-builds' def jdkLabel = 'JDK 1.8 (latest)' -def slaveLabel = 'cassandra' +def slaveLabel = 'cassandra&&!(cassandra6||cassandra7)' // TEMP - skip cassandra6,7 slaves until name resolution is fixed: INFRA-13567 // The dtest-large target needs to run on >=32G slaves, so we provide an "OR" list of those servers def largeSlaveLabel = 'cassandra6||cassandra7' def mainRepo = 'https://git-wip-us.apache.org/repos/asf/cassandra.git'
[jira] [Commented] (CASSANDRA-12728) Handling partially written hint files
[ https://issues.apache.org/jira/browse/CASSANDRA-12728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938918#comment-15938918 ] Garvit Juniwal commented on CASSANDRA-12728: [~jjirsa] In my patch, the only exception that is ignored is EOF error. So there is no possibility of missing more hints by ignoring this error. From my cursory reading of https://github.com/apache/cassandra/blob/cassandra-3.9/src/java/org/apache/cassandra/db/commitlog/CommitLogReplayer.java#L390-L413, seems like we are ignoring errors due to incomplete flushes (quoting: "Ignoring commit log replay error likely due to incomplete flush to disk") without caring about any operator policy, which is the right thing to do IMO and that is what I am trying to achieve in the patch as well. Lmk if I have misunderstood something. > Handling partially written hint files > - > > Key: CASSANDRA-12728 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12728 > Project: Cassandra > Issue Type: Bug >Reporter: Sharvanath Pathak > Labels: lhf > Attachments: CASSANDRA-12728.patch > > > {noformat} > ERROR [HintsDispatcher:1] 2016-09-28 17:44:43,397 > HintsDispatchExecutor.java:225 - Failed to dispatch hints file > d5d7257c-9f81-49b2-8633-6f9bda6e3dea-1474892654160-1.hints: file is corrupted > ({}) > org.apache.cassandra.io.FSReadError: java.io.EOFException > at > org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:282) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:252) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.utils.AbstractIterator.hasNext(AbstractIterator.java:47) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.HintsDispatcher.sendHints(HintsDispatcher.java:156) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.HintsDispatcher.sendHintsAndAwait(HintsDispatcher.java:137) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:119) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.HintsDispatcher.dispatch(HintsDispatcher.java:91) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.deliver(HintsDispatchExecutor.java:259) > [apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:242) > [apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.dispatch(HintsDispatchExecutor.java:220) > [apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.HintsDispatchExecutor$DispatchHintsTask.run(HintsDispatchExecutor.java:199) > [apache-cassandra-3.0.6.jar:3.0.6] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_77] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [na:1.8.0_77] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > [na:1.8.0_77] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > [na:1.8.0_77] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77] > Caused by: java.io.EOFException: null > at > org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:68) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > 
org.apache.cassandra.io.util.RebufferingInputStream.readFully(RebufferingInputStream.java:60) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.ChecksummedDataInput.readFully(ChecksummedDataInput.java:126) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:402) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.HintsReader$BuffersIterator.readBuffer(HintsReader.java:310) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNextInternal(HintsReader.java:301) > ~[apache-cassandra-3.0.6.jar:3.0.6] > at > org.apache.cassandra.hints.HintsReader$BuffersIterator.computeNext(HintsReader.java:278) > ~[apache-cassandra-3.0.6.jar:3.0.6] > ... 15 common frames omitted > {noformat} > We've found out that the hint file was truncated because there was a hard > reboot around the time of last write to the file. I think we basically need > to handle partially written hint files. Also, the CRC file does not exist in > this case (p
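The idea under discussion can be illustrated with a small generic wrapper; this is not the attached CASSANDRA-12728.patch and it deliberately uses no Cassandra types. An EOFException found in the cause chain while advancing the iterator is treated as a cleanly truncated tail (no more hints) rather than surfaced as corruption, mirroring how the commit log replay code linked above tolerates an incomplete flush at the end of a segment:
{code}
import java.io.EOFException;
import java.util.Iterator;
import java.util.NoSuchElementException;

public final class TruncationTolerantIterator<T> implements Iterator<T>
{
    private final Iterator<T> delegate;
    private boolean truncated;

    public TruncationTolerantIterator(Iterator<T> delegate)
    {
        this.delegate = delegate;
    }

    @Override
    public boolean hasNext()
    {
        if (truncated)
            return false;
        try
        {
            return delegate.hasNext();
        }
        catch (RuntimeException e)
        {
            // In Cassandra the EOF surfaces wrapped (e.g. in FSReadError); here we simply
            // look for an EOFException anywhere in the cause chain.
            for (Throwable t = e; t != null; t = t.getCause())
            {
                if (t instanceof EOFException)
                {
                    truncated = true; // partially written tail: stop instead of failing the dispatch
                    return false;
                }
            }
            throw e; // anything else is still treated as a real error
        }
    }

    @Override
    public T next()
    {
        if (!hasNext())
            throw new NoSuchElementException();
        return delegate.next();
    }
}
{code}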
[jira] [Resolved] (CASSANDRA-13247) index on udt built failed and no data could be inserted
[ https://issues.apache.org/jira/browse/CASSANDRA-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer resolved CASSANDRA-13247. Resolution: Fixed Fix Version/s: 4.0 3.11.0 > index on udt built failed and no data could be inserted > --- > > Key: CASSANDRA-13247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13247 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: mashudong >Assignee: Andrés de la Peña >Priority: Critical > Fix For: 3.11.0, 4.0 > > Attachments: udt_index.txt > > > index on udt built failed and no data could be inserted > steps to reproduce: > CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': '2'} AND durable_writes = true; > CREATE TYPE ks1.address ( > street text, > city text, > zip_code int, > phones set > ); > CREATE TYPE ks1.fullname ( > firstname text, > lastname text > ); > CREATE TABLE ks1.users ( > id uuid PRIMARY KEY, > addresses map>, > age int, > direct_reports set>, > name fullname > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > SELECT * FROM users where name = { firstname : 'first' , lastname : 'last'} > allow filtering; > ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] > message="Operation failed - received 0 responses and 1 failures" > info={'failures': 1, 'received_responses': 0, 'required_responses': 1, > 'consistency': 'ONE'} > WARN [ReadStage-2] 2017-02-22 16:59:33,392 > AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread > Thread[ReadStage-2,5,main]: {} > java.lang.AssertionError: Only CONTAINS and CONTAINS_KEY are supported for > 'complex' types > at > org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:683) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:303) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:120) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:110) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.add(Transformation.java:162) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:128) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:292) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:281) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96) > 
~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:289) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:138) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:134) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:323) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1803) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.servic
[jira] [Updated] (CASSANDRA-13247) index on udt built failed and no data could be inserted
[ https://issues.apache.org/jira/browse/CASSANDRA-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benjamin Lerer updated CASSANDRA-13247: --- Component/s: CQL > index on udt built failed and no data could be inserted > --- > > Key: CASSANDRA-13247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13247 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: mashudong >Assignee: Andrés de la Peña >Priority: Critical > Fix For: 3.11.0, 4.0 > > Attachments: udt_index.txt > > > index on udt built failed and no data could be inserted > steps to reproduce: > CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': '2'} AND durable_writes = true; > CREATE TYPE ks1.address ( > street text, > city text, > zip_code int, > phones set > ); > CREATE TYPE ks1.fullname ( > firstname text, > lastname text > ); > CREATE TABLE ks1.users ( > id uuid PRIMARY KEY, > addresses map>, > age int, > direct_reports set>, > name fullname > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > SELECT * FROM users where name = { firstname : 'first' , lastname : 'last'} > allow filtering; > ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] > message="Operation failed - received 0 responses and 1 failures" > info={'failures': 1, 'received_responses': 0, 'required_responses': 1, > 'consistency': 'ONE'} > WARN [ReadStage-2] 2017-02-22 16:59:33,392 > AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread > Thread[ReadStage-2,5,main]: {} > java.lang.AssertionError: Only CONTAINS and CONTAINS_KEY are supported for > 'complex' types > at > org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:683) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:303) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:120) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:110) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.add(Transformation.java:162) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:128) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:292) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:281) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96) > ~[apache-cassandra-3.9.jar:3.9] > at > 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:289) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:138) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:134) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:323) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1803) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:
[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938769#comment-15938769 ] Ariel Weisberg commented on CASSANDRA-13370: Sorry to keep changing my mind. Still digesting the fact that we can fix just this one test and keep using /dev/urandom. I checked and we don't use seeding of SecureRandom outside of this test. So I propose going with your original solution of using SHA1PRNG, seeding it the way the test does so that the test is deterministic as originally intended, and not changing anything in build.xml. ||Code|utests|| |[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...aweisberg:cassandra-13370-3.11?expand=1]|[utests|https://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-cassandra-13370-3.11-testall/1/]| > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Fix For: 3.11.x, 4.x > > Attachments: 13370-trunk.txt, 13370-trunk-update.txt > > > Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest failed: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest to revert the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13370: --- Fix Version/s: 4.x 3.11.x > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Fix For: 3.11.x, 4.x > > Attachments: 13370-trunk.txt, 13370-trunk-update.txt > > > Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest failed: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest to revert the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13247) index on udt built failed and no data could be inserted
[ https://issues.apache.org/jira/browse/CASSANDRA-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938748#comment-15938748 ] Benjamin Lerer commented on CASSANDRA-13247: Committed into 3.11 at 82d3cdcd6cfeff043c92ea7a060498942130feb5 and merged into trunk. > index on udt built failed and no data could be inserted > --- > > Key: CASSANDRA-13247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13247 > Project: Cassandra > Issue Type: Bug >Reporter: mashudong >Assignee: Andrés de la Peña >Priority: Critical > Attachments: udt_index.txt > > > index on udt built failed and no data could be inserted > steps to reproduce: > CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': '2'} AND durable_writes = true; > CREATE TYPE ks1.address ( > street text, > city text, > zip_code int, > phones set > ); > CREATE TYPE ks1.fullname ( > firstname text, > lastname text > ); > CREATE TABLE ks1.users ( > id uuid PRIMARY KEY, > addresses map>, > age int, > direct_reports set>, > name fullname > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > SELECT * FROM users where name = { firstname : 'first' , lastname : 'last'} > allow filtering; > ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] > message="Operation failed - received 0 responses and 1 failures" > info={'failures': 1, 'received_responses': 0, 'required_responses': 1, > 'consistency': 'ONE'} > WARN [ReadStage-2] 2017-02-22 16:59:33,392 > AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread > Thread[ReadStage-2,5,main]: {} > java.lang.AssertionError: Only CONTAINS and CONTAINS_KEY are supported for > 'complex' types > at > org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:683) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:303) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:120) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:110) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.add(Transformation.java:162) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:128) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:292) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:281) > ~[apache-cassandra-3.9.jar:3.9] > at > 
org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:289) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:138) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:134) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:323) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1803) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra
[jira] [Commented] (CASSANDRA-13247) index on udt built failed and no data could be inserted
[ https://issues.apache.org/jira/browse/CASSANDRA-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938747#comment-15938747 ] Benjamin Lerer commented on CASSANDRA-13247: +1 > index on udt built failed and no data could be inserted > --- > > Key: CASSANDRA-13247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13247 > Project: Cassandra > Issue Type: Bug >Reporter: mashudong >Assignee: Andrés de la Peña >Priority: Critical > Attachments: udt_index.txt > > > index on udt built failed and no data could be inserted > steps to reproduce: > CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': '2'} AND durable_writes = true; > CREATE TYPE ks1.address ( > street text, > city text, > zip_code int, > phones set > ); > CREATE TYPE ks1.fullname ( > firstname text, > lastname text > ); > CREATE TABLE ks1.users ( > id uuid PRIMARY KEY, > addresses map>, > age int, > direct_reports set>, > name fullname > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > SELECT * FROM users where name = { firstname : 'first' , lastname : 'last'} > allow filtering; > ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] > message="Operation failed - received 0 responses and 1 failures" > info={'failures': 1, 'received_responses': 0, 'required_responses': 1, > 'consistency': 'ONE'} > WARN [ReadStage-2] 2017-02-22 16:59:33,392 > AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread > Thread[ReadStage-2,5,main]: {} > java.lang.AssertionError: Only CONTAINS and CONTAINS_KEY are supported for > 'complex' types > at > org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:683) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:303) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:120) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:110) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.add(Transformation.java:162) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:128) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:292) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:281) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96) > ~[apache-cassandra-3.9.jar:3.9] > at > 
org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:289) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:138) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:134) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:323) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1803) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2486) > ~[apache-cassan
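To make the user-visible effect of the fix concrete, here is a minimal client-side sketch. It assumes a local node with the ks1 schema from the report above and the DataStax Java driver 3.x (both assumptions, not part of the ticket): before the patch the filtered SELECT reached the replicas and failed with the AssertionError/ReadFailure shown in the trace, while with the patch both the SELECT restriction and the index creation are refused up front with an InvalidRequest error.
{code}
// Sketch only: assumes a local node with the ks1 schema above and the DataStax Java driver 3.x.
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.InvalidQueryException;

public class NonFrozenUdtRestrictionSketch
{
    public static void main(String[] args)
    {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("ks1"))
        {
            try
            {
                // Pre-patch (e.g. 3.9) this reached the replicas and failed with the server-side
                // AssertionError / ReadFailure quoted above; post-patch it is rejected at validation.
                session.execute("SELECT * FROM users WHERE name = {firstname: 'first', lastname: 'last'} ALLOW FILTERING");
            }
            catch (InvalidQueryException e)
            {
                // Expected after the patch: "Non-frozen UDT column 'name' (fullname) cannot be restricted by any relation"
                System.out.println(e.getMessage());
            }

            try
            {
                session.execute("CREATE INDEX ON users (name)");
            }
            catch (InvalidQueryException e)
            {
                // Expected after the patch: "Secondary indexes are not supported on non-frozen UDTs"
                System.out.println(e.getMessage());
            }
        }
    }
}
{code}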
[1/3] cassandra git commit: Forbid SELECT restrictions and CREATE INDEX over non-frozen UDT columns
Repository: cassandra Updated Branches: refs/heads/cassandra-3.11 a85eeefe8 -> 82d3cdcd6 refs/heads/trunk 9330409ac -> 18c6ed25e Forbid SELECT restrictions and CREATE INDEX over non-frozen UDT columns patch by Andrés de la Peña; reviewed by Benjamin Lerer for CASSANDRA-13247 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/82d3cdcd Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/82d3cdcd Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/82d3cdcd Branch: refs/heads/cassandra-3.11 Commit: 82d3cdcd6cfeff043c92ea7a060498942130feb5 Parents: a85eeef Author: Andrés de la Peña Authored: Thu Mar 23 17:40:04 2017 +0100 Committer: Benjamin Lerer Committed: Thu Mar 23 17:40:04 2017 +0100 -- CHANGES.txt | 1 + .../cassandra/cql3/SingleColumnRelation.java| 6 ++ .../cql3/statements/CreateIndexStatement.java | 2 + .../validation/entities/SecondaryIndexTest.java | 103 +++ .../SelectSingleColumnRelationTest.java | 24 + 5 files changed, 136 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/82d3cdcd/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 728e3e7..6644796 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.11.0 + * Forbid SELECT restrictions and CREATE INDEX over non-frozen UDT columns (CASSANDRA-13247) * Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern (CASSANDRA-13317) * Possible AssertionError in UnfilteredRowIteratorWithLowerBound (CASSANDRA-13366) * Support unaligned memory access for AArch64 (CASSANDRA-13326) http://git-wip-us.apache.org/repos/asf/cassandra/blob/82d3cdcd/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java -- diff --git a/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java b/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java index ae07f56..e0ee519 100644 --- a/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java +++ b/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java @@ -273,6 +273,12 @@ public final class SingleColumnRelation extends Relation checkTrue(isEQ(), "Only EQ relations are supported on map entries"); } +// Non-frozen UDTs don't support any operator +checkFalse(receiver.type.isUDT() && receiver.type.isMultiCell(), + "Non-frozen UDT column '%s' (%s) cannot be restricted by any relation", + receiver.name, + receiver.type.asCQL3Type()); + if (receiver.type.isCollection()) { // We don't support relations against entire collections (unless they're frozen), like "numbers = {1, 2, 3}" http://git-wip-us.apache.org/repos/asf/cassandra/blob/82d3cdcd/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java index ed4658f..204edf4 100644 --- a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java @@ -134,6 +134,8 @@ public class CreateIndexStatement extends SchemaAlteringStatement validateIsSimpleIndexIfTargetColumnNotCollection(cd, target); validateTargetColumnIsMapIfIndexInvolvesKeys(isMap, target); } + +checkFalse(cd.type.isUDT() && cd.type.isMultiCell(), "Secondary indexes are not supported on non-frozen UDTs"); } if (!Strings.isNullOrEmpty(indexName)) http://git-wip-us.apache.org/repos/asf/cassandra/blob/82d3cdcd/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java -- diff 
--git a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java index 88c6f17..013e41d 100644 --- a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java +++ b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java @@ -1409,6 +1409,109 @@ public class SecondaryIndexTest extends CQLTester "CREATE INDEX ON %s (t)"); } +@Test +public void testIndexOnFrozenUDT() throws Throwable +{ +String type = createType("CREATE TYPE %s (a int)"); +String tableName = createTable("CREATE TABLE %s (k
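The new SecondaryIndexTest case is cut off in the hunk above, so purely as an illustration of the shape of the negative checks the patch introduces (a sketch, not the committed test), something along these lines can be written against Cassandra's CQLTester harness:
{code}
// Sketch of the negative checks added by the patch; assumes Cassandra's CQLTester test harness.
import org.junit.Test;

import org.apache.cassandra.cql3.CQLTester;

public class NonFrozenUdtValidationSketch extends CQLTester
{
    @Test
    public void selectAndIndexAreRejected() throws Throwable
    {
        String type = createType("CREATE TYPE %s (a int)");
        createTable("CREATE TABLE %s (k int PRIMARY KEY, v " + type + ")"); // non-frozen UDT column

        // New CreateIndexStatement check
        assertInvalidMessage("Secondary indexes are not supported on non-frozen UDTs",
                             "CREATE INDEX ON %s (v)");

        // New SingleColumnRelation check
        assertInvalidMessage("cannot be restricted by any relation",
                             "SELECT * FROM %s WHERE v = {a: 1} ALLOW FILTERING");
    }
}
{code}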
[3/3] cassandra git commit: Merge branch cassandra-3.11 into trunk
Merge branch cassandra-3.11 into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/18c6ed25 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/18c6ed25 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/18c6ed25 Branch: refs/heads/trunk Commit: 18c6ed25e30c0cc444d7cfda929ef7677309c57b Parents: 9330409 82d3cdc Author: Benjamin Lerer Authored: Thu Mar 23 17:45:54 2017 +0100 Committer: Benjamin Lerer Committed: Thu Mar 23 17:45:54 2017 +0100 -- CHANGES.txt | 1 + .../cassandra/cql3/SingleColumnRelation.java| 6 ++ .../cql3/statements/CreateIndexStatement.java | 2 + .../validation/entities/SecondaryIndexTest.java | 103 +++ .../SelectSingleColumnRelationTest.java | 24 + 5 files changed, 136 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/18c6ed25/CHANGES.txt -- diff --cc CHANGES.txt index a5145a6,6644796..09e206e --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,51 -1,5 +1,52 @@@ +4.0 + * Upgrade junit from 4.6 to 4.12 (CASSANDRA-13360) + * Cleanup ParentRepairSession after repairs (CASSANDRA-13359) + * Incremental repair not streaming correct sstables (CASSANDRA-13328) + * Upgrade the jna version to 4.3.0 (CASSANDRA-13300) + * Add the currentTimestamp, currentDate, currentTime and currentTimeUUID functions (CASSANDRA-13132) + * Remove config option index_interval (CASSANDRA-10671) + * Reduce lock contention for collection types and serializers (CASSANDRA-13271) + * Make it possible to override MessagingService.Verb ids (CASSANDRA-13283) + * Avoid synchronized on prepareForRepair in ActiveRepairService (CASSANDRA-9292) + * Adds the ability to use uncompressed chunks in compressed files (CASSANDRA-10520) + * Don't flush sstables when streaming for incremental repair (CASSANDRA-13226) + * Remove unused method (CASSANDRA-13227) + * Fix minor bugs related to #9143 (CASSANDRA-13217) + * Output warning if user increases RF (CASSANDRA-13079) + * Remove pre-3.0 streaming compatibility code for 4.0 (CASSANDRA-13081) + * Add support for + and - operations on dates (CASSANDRA-11936) + * Fix consistency of incrementally repaired data (CASSANDRA-9143) + * Increase commitlog version (CASSANDRA-13161) + * Make TableMetadata immutable, optimize Schema (CASSANDRA-9425) + * Refactor ColumnCondition (CASSANDRA-12981) + * Parallelize streaming of different keyspaces (CASSANDRA-4663) + * Improved compactions metrics (CASSANDRA-13015) + * Speed-up start-up sequence by avoiding un-needed flushes (CASSANDRA-13031) + * Use Caffeine (W-TinyLFU) for on-heap caches (CASSANDRA-10855) + * Thrift removal (CASSANDRA-5) + * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716) + * Add column definition kind to dropped columns in schema (CASSANDRA-12705) + * Add (automate) Nodetool Documentation (CASSANDRA-12672) + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736) + * Reject invalid replication settings when creating or altering a keyspace (CASSANDRA-12681) + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter (CASSANDRA-12422) + * Use new token allocation for non bootstrap case as well (CASSANDRA-13080) + * Avoid byte-array copy when key cache is disabled (CASSANDRA-13084) + * Require forceful decommission if number of nodes is less than replication factor (CASSANDRA-12510) + * Allow IN restrictions on column families with collections (CASSANDRA-12654) + * Log message size in trace message in OutboundTcpConnection (CASSANDRA-13028) + * Add timeUnit Days for cassandra-stress 
(CASSANDRA-13029) + * Add mutation size and batch metrics (CASSANDRA-12649) + * Add method to get size of endpoints to TokenMetadata (CASSANDRA-12999) + * Expose time spent waiting in thread pool queue (CASSANDRA-8398) + * Conditionally update index built status to avoid unnecessary flushes (CASSANDRA-12969) + * cqlsh auto completion: refactor definition of compaction strategy options (CASSANDRA-12946) + * Add support for arithmetic operators (CASSANDRA-11935) + * Add histogram for delay to deliver hints (CASSANDRA-13234) + + 3.11.0 + * Forbid SELECT restrictions and CREATE INDEX over non-frozen UDT columns (CASSANDRA-13247) * Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern (CASSANDRA-13317) * Possible AssertionError in UnfilteredRowIteratorWithLowerBound (CASSANDRA-13366) * Support unaligned memory access for AArch64 (CASSANDRA-13326) http://git-wip-us.apache.org/repos/asf/cassandra/blob/18c6ed25/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java -- http
[2/3] cassandra git commit: Forbid SELECT restrictions and CREATE INDEX over non-frozen UDT columns
Forbid SELECT restrictions and CREATE INDEX over non-frozen UDT columns patch by Andrés de la Peña; reviewed by Benjamin Lerer for CASSANDRA-13247 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/82d3cdcd Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/82d3cdcd Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/82d3cdcd Branch: refs/heads/trunk Commit: 82d3cdcd6cfeff043c92ea7a060498942130feb5 Parents: a85eeef Author: Andrés de la Peña Authored: Thu Mar 23 17:40:04 2017 +0100 Committer: Benjamin Lerer Committed: Thu Mar 23 17:40:04 2017 +0100 -- CHANGES.txt | 1 + .../cassandra/cql3/SingleColumnRelation.java| 6 ++ .../cql3/statements/CreateIndexStatement.java | 2 + .../validation/entities/SecondaryIndexTest.java | 103 +++ .../SelectSingleColumnRelationTest.java | 24 + 5 files changed, 136 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/82d3cdcd/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 728e3e7..6644796 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.11.0 + * Forbid SELECT restrictions and CREATE INDEX over non-frozen UDT columns (CASSANDRA-13247) * Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern (CASSANDRA-13317) * Possible AssertionError in UnfilteredRowIteratorWithLowerBound (CASSANDRA-13366) * Support unaligned memory access for AArch64 (CASSANDRA-13326) http://git-wip-us.apache.org/repos/asf/cassandra/blob/82d3cdcd/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java -- diff --git a/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java b/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java index ae07f56..e0ee519 100644 --- a/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java +++ b/src/java/org/apache/cassandra/cql3/SingleColumnRelation.java @@ -273,6 +273,12 @@ public final class SingleColumnRelation extends Relation checkTrue(isEQ(), "Only EQ relations are supported on map entries"); } +// Non-frozen UDTs don't support any operator +checkFalse(receiver.type.isUDT() && receiver.type.isMultiCell(), + "Non-frozen UDT column '%s' (%s) cannot be restricted by any relation", + receiver.name, + receiver.type.asCQL3Type()); + if (receiver.type.isCollection()) { // We don't support relations against entire collections (unless they're frozen), like "numbers = {1, 2, 3}" http://git-wip-us.apache.org/repos/asf/cassandra/blob/82d3cdcd/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java index ed4658f..204edf4 100644 --- a/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/CreateIndexStatement.java @@ -134,6 +134,8 @@ public class CreateIndexStatement extends SchemaAlteringStatement validateIsSimpleIndexIfTargetColumnNotCollection(cd, target); validateTargetColumnIsMapIfIndexInvolvesKeys(isMap, target); } + +checkFalse(cd.type.isUDT() && cd.type.isMultiCell(), "Secondary indexes are not supported on non-frozen UDTs"); } if (!Strings.isNullOrEmpty(indexName)) http://git-wip-us.apache.org/repos/asf/cassandra/blob/82d3cdcd/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java 
b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java index 88c6f17..013e41d 100644 --- a/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java +++ b/test/unit/org/apache/cassandra/cql3/validation/entities/SecondaryIndexTest.java @@ -1409,6 +1409,109 @@ public class SecondaryIndexTest extends CQLTester "CREATE INDEX ON %s (t)"); } +@Test +public void testIndexOnFrozenUDT() throws Throwable +{ +String type = createType("CREATE TYPE %s (a int)"); +String tableName = createTable("CREATE TABLE %s (k int PRIMARY KEY, v frozen<" + type + ">)"); + +Object udt1 = userType("a", 1); +Object udt2 = userType("a", 2); + +exe
[jira] [Updated] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13370: --- Reviewer: Ariel Weisberg > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Attachments: 13370-trunk.txt, 13370-trunk-update.txt > > > Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest failed: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest to revert the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
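A small standalone sketch of the failure mode, assuming a JVM started with the -Djava.security.egd=file:/dev/urandom flag from CASSANDRA-9581 on macOS: reading from the NativePRNG is fine, but seeding it makes the JDK write the seed back to the configured device, which is the write that macOS rejects.
{code}
// Run with: java -Djava.security.egd=file:/dev/urandom SeedWriteSketch   (sketch, macOS)
import java.security.SecureRandom;

public class SeedWriteSketch
{
    public static void main(String[] args)
    {
        // Reading randomness is fine: NativePRNG only reads from the device here.
        SecureRandom plain = new SecureRandom();
        System.out.println(plain.nextInt());

        // Explicitly seeding (or constructing with a seed, as the test setup effectively does)
        // makes NativePRNG write the seed back to the egd file -- on macOS that write is
        // rejected and surfaces as the ProviderException("setSeed() failed") in the report.
        SecureRandom seeded = new SecureRandom(new byte[]{ 1, 2, 3 });
        System.out.println(seeded.nextInt());
    }
}
{code}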
[jira] [Updated] (CASSANDRA-13317) Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due to includeCallerData being false by default no appender
[ https://issues.apache.org/jira/browse/CASSANDRA-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13317: --- Fix Version/s: (was: 3.11.x) (was: 4.x) 4.0 3.11.0 > Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due > to includeCallerData being false by default no appender > > > Key: CASSANDRA-13317 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13317 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Michael Kjellman >Assignee: Michael Kjellman > Fix For: 3.11.0, 4.0 > > Attachments: 13317_v1.diff > > > We specify the logging pattern as "%-5level [%thread] %date{ISO8601} %F:%L - > %msg%n". > %F:%L is intended to print the Filename:Line Number. For performance reasons > logback (like log4j2) disables tracking line numbers as it requires the > entire stack to be materialized every time. > This causes logs to look like: > WARN [main] 2017-03-09 13:27:11,272 ?:? - Protocol Version 5/v5-beta not > supported by java driver > INFO [main] 2017-03-09 13:27:11,813 ?:? - No commitlog files found; skipping > replay > INFO [main] 2017-03-09 13:27:12,477 ?:? - Initialized prepared statement > caches with 14 MB > INFO [main] 2017-03-09 13:27:12,727 ?:? - Initializing system.IndexInfo > When instead you'd expect something like: > INFO [main] 2017-03-09 13:23:44,204 ColumnFamilyStore.java:419 - > Initializing system.available_ranges > INFO [main] 2017-03-09 13:23:44,210 ColumnFamilyStore.java:419 - > Initializing system.transferred_ranges > INFO [main] 2017-03-09 13:23:44,215 ColumnFamilyStore.java:419 - > Initializing system.views_builds_in_progress > The fix is to add "true" to the > appender config to enable the line number and stack tracing. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13317) Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due to includeCallerData being false by default no appender
[ https://issues.apache.org/jira/browse/CASSANDRA-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ariel Weisberg updated CASSANDRA-13317: --- Resolution: Fixed Status: Resolved (was: Patch Available) Committed as [3e95c5b0c574383e7da9a5e152b7be8aa122af9f|https://github.com/apache/cassandra/commit/3e95c5b0c574383e7da9a5e152b7be8aa122af9f] > Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern due > to includeCallerData being false by default no appender > > > Key: CASSANDRA-13317 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13317 > Project: Cassandra > Issue Type: Bug > Components: Core >Reporter: Michael Kjellman >Assignee: Michael Kjellman > Fix For: 3.11.x, 4.x > > Attachments: 13317_v1.diff > > > We specify the logging pattern as "%-5level [%thread] %date{ISO8601} %F:%L - > %msg%n". > %F:%L is intended to print the Filename:Line Number. For performance reasons > logback (like log4j2) disables tracking line numbers as it requires the > entire stack to be materialized every time. > This causes logs to look like: > WARN [main] 2017-03-09 13:27:11,272 ?:? - Protocol Version 5/v5-beta not > supported by java driver > INFO [main] 2017-03-09 13:27:11,813 ?:? - No commitlog files found; skipping > replay > INFO [main] 2017-03-09 13:27:12,477 ?:? - Initialized prepared statement > caches with 14 MB > INFO [main] 2017-03-09 13:27:12,727 ?:? - Initializing system.IndexInfo > When instead you'd expect something like: > INFO [main] 2017-03-09 13:23:44,204 ColumnFamilyStore.java:419 - > Initializing system.available_ranges > INFO [main] 2017-03-09 13:23:44,210 ColumnFamilyStore.java:419 - > Initializing system.transferred_ranges > INFO [main] 2017-03-09 13:23:44,215 ColumnFamilyStore.java:419 - > Initializing system.views_builds_in_progress > The fix is to add "true" to the > appender config to enable the line number and stack tracing. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938724#comment-15938724 ] Jay Zhuang commented on CASSANDRA-13370: Make sense. Thanks [~spod] Updated the [patch|https://github.com/cooldoger/cassandra/commit/e89ac4407f387dc9607b21d3ef9ece6d4bda4bd8], passed the test locally on MacOS and Linux. > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Attachments: 13370-trunk.txt, 13370-trunk-update.txt > > > Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest failed: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest to revert the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jay Zhuang updated CASSANDRA-13370: --- Attachment: 13370-trunk-update.txt > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Attachments: 13370-trunk.txt, 13370-trunk-update.txt > > > Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest failed: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest to revert the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938698#comment-15938698 ] Ariel Weisberg commented on CASSANDRA-13370: Oh, I misunderstood. So it's removing the seed that will stop Java from writing to /dev/random. Yes I think that would be a better approach. > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Attachments: 13370-trunk.txt > > > Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest failed: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest to revert the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13340) Bugs handling range tombstones in the sstable iterators
[ https://issues.apache.org/jira/browse/CASSANDRA-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-13340: - Resolution: Fixed Fix Version/s: (was: 3.11.x) (was: 3.0.x) 3.11.0 3.0.13 Status: Resolved (was: Ready to Commit) Committed (with nits fixed), thanks. > Bugs handling range tombstones in the sstable iterators > --- > > Key: CASSANDRA-13340 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13340 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Sylvain Lebresne >Priority: Critical > Fix For: 3.0.13, 3.11.0 > > > There is 2 bugs in the way sstable iterators handle range tombstones: > # empty range tombstones can be returned due to a strict comparison that > shouldn't be. > # the sstable reversed iterator can actually return completely bogus results > when range tombstones are spanning multiple index blocks. > The 2 bugs are admittedly separate but as they both impact the same area of > code and are both range tombstones related, I suggest just fixing both here > (unless something really really mind). > Marking the ticket critical mostly for the 2nd bug: it can truly make use > return bad results on reverse queries. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
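One of the changes in the patch below replaces nextKind.compareTo(bound.kind()) with Kind.compare(nextKind, bound.kind()) in ClusteringPrefix. As a deliberately hypothetical illustration (the names and ranking here are invented, not Cassandra's real Kind enum), this sketch shows why an enum's declaration-order compareTo is the wrong tool once several bound kinds are meant to sort to the same logical position:
{code}
// Hypothetical illustration only -- names and ranking are made up, not Cassandra's actual Kind enum.
enum BoundKind
{
    EXCL_END, INCL_START, CLUSTERING, INCL_END, EXCL_START;

    // Intended ordering: an exclusive end and an inclusive start over the same values occupy
    // the same logical position, just before the clustering they share a prefix with.
    static int compare(BoundKind a, BoundKind b)
    {
        return Integer.compare(rank(a), rank(b));
    }

    private static int rank(BoundKind k)
    {
        switch (k)
        {
            case EXCL_END:
            case INCL_START: return -1;
            case CLUSTERING: return 0;
            case INCL_END:
            case EXCL_START: return 1;
            default: throw new AssertionError();
        }
    }

    public static void main(String[] args)
    {
        // Declaration-order comparison says EXCL_END < INCL_START ...
        System.out.println(EXCL_END.compareTo(INCL_START)); // negative
        // ... while the domain comparison treats them as the same position.
        System.out.println(compare(EXCL_END, INCL_START));  // 0
    }
}
{code}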
[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk
Merge branch 'cassandra-3.11' into trunk * cassandra-3.11: Bugs handling range tombstones in the sstable iterators Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9330409a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9330409a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9330409a Branch: refs/heads/trunk Commit: 9330409accf0506526d25e17e70f89e5cb6a341e Parents: ea662ce a85eeef Author: Sylvain Lebresne Authored: Thu Mar 23 17:18:36 2017 +0100 Committer: Sylvain Lebresne Committed: Thu Mar 23 17:18:36 2017 +0100 -- --
[2/3] cassandra git commit: Bugs handling range tombstones in the sstable iterators
Bugs handling range tombstones in the sstable iterators patch by Sylvain Lebresne; reviewed by Branimir Lambov for CASSANDRA-13340 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a85eeefe Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a85eeefe Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a85eeefe Branch: refs/heads/trunk Commit: a85eeefe88eb036a9cd9fa85a1c8c31c2bfad78a Parents: 3e95c5b Author: Sylvain Lebresne Authored: Thu Mar 16 17:05:15 2017 +0100 Committer: Sylvain Lebresne Committed: Thu Mar 23 17:17:16 2017 +0100 -- CHANGES.txt | 1 + .../apache/cassandra/db/ClusteringPrefix.java | 2 +- .../cassandra/db/UnfilteredDeserializer.java| 1 - .../db/columniterator/SSTableIterator.java | 11 +- .../columniterator/SSTableReversedIterator.java | 126 +++ .../cql3/validation/operations/DeleteTest.java | 70 +++ 6 files changed, 181 insertions(+), 30 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a85eeefe/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c58fad8..728e3e7 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -13,6 +13,7 @@ * NoReplicationTokenAllocator should work with zero replication factor (CASSANDRA-12983) * Address message coalescing regression (CASSANDRA-12676) Merged from 3.0: + * Bugs handling range tombstones in the sstable iterators (CASSANDRA-13340) * Fix CONTAINS filtering for null collections (CASSANDRA-13246) * Applying: Use a unique metric reservoir per test run when using Cassandra-wide metrics residing in MBeans (CASSANDRA-13216) * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320) http://git-wip-us.apache.org/repos/asf/cassandra/blob/a85eeefe/src/java/org/apache/cassandra/db/ClusteringPrefix.java -- diff --git a/src/java/org/apache/cassandra/db/ClusteringPrefix.java b/src/java/org/apache/cassandra/db/ClusteringPrefix.java index 340e237..1ecc92d 100644 --- a/src/java/org/apache/cassandra/db/ClusteringPrefix.java +++ b/src/java/org/apache/cassandra/db/ClusteringPrefix.java @@ -482,7 +482,7 @@ public interface ClusteringPrefix extends IMeasurableMemory, Clusterable } if (bound.size() == nextSize) -return nextKind.compareTo(bound.kind()); +return Kind.compare(nextKind, bound.kind()); // We know that we'll have exited already if nextSize < bound.size return -bound.kind().comparedToClustering; http://git-wip-us.apache.org/repos/asf/cassandra/blob/a85eeefe/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java -- diff --git a/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java b/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java index 79b8636..b977907 100644 --- a/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java +++ b/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java @@ -690,6 +690,5 @@ public abstract class UnfilteredDeserializer } } } - } } http://git-wip-us.apache.org/repos/asf/cassandra/blob/a85eeefe/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java -- diff --git a/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java b/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java index b3c2e94..e21bd72 100644 --- a/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java +++ b/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java @@ -138,7 +138,14 @@ public class SSTableIterator extends AbstractSSTableIterator { assert deserializer != null; -if (!deserializer.hasNext() || deserializer.compareNextTo(end) > 0) 
+// We use a same reasoning as in handlePreSliceData regarding the strictness of the inequality below. +// We want to exclude deserialized unfiltered equal to end, because 1) we won't miss any rows since those +// woudn't be equal to a slice bound and 2) a end bound can be equal to a start bound +// (EXCL_END(x) == INCL_START(x) for instance) and in that case we don't want to return start bound because +// it's fundamentally excluded. And if the bound is a end (for a range tombstone), it means it's exactly +// our slice end, but in that case we will properly close the range tombstone anyway as part of our "close +// an open marker" code in hasNextInte
[1/3] cassandra git commit: Bugs handling range tombstones in the sstable iterators
Repository: cassandra Updated Branches: refs/heads/cassandra-3.11 3e95c5b0c -> a85eeefe8 refs/heads/trunk ea662ce21 -> 9330409ac Bugs handling range tombstones in the sstable iterators patch by Sylvain Lebresne; reviewed by Branimir Lambov for CASSANDRA-13340 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a85eeefe Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a85eeefe Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a85eeefe Branch: refs/heads/cassandra-3.11 Commit: a85eeefe88eb036a9cd9fa85a1c8c31c2bfad78a Parents: 3e95c5b Author: Sylvain Lebresne Authored: Thu Mar 16 17:05:15 2017 +0100 Committer: Sylvain Lebresne Committed: Thu Mar 23 17:17:16 2017 +0100 -- CHANGES.txt | 1 + .../apache/cassandra/db/ClusteringPrefix.java | 2 +- .../cassandra/db/UnfilteredDeserializer.java| 1 - .../db/columniterator/SSTableIterator.java | 11 +- .../columniterator/SSTableReversedIterator.java | 126 +++ .../cql3/validation/operations/DeleteTest.java | 70 +++ 6 files changed, 181 insertions(+), 30 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a85eeefe/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c58fad8..728e3e7 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -13,6 +13,7 @@ * NoReplicationTokenAllocator should work with zero replication factor (CASSANDRA-12983) * Address message coalescing regression (CASSANDRA-12676) Merged from 3.0: + * Bugs handling range tombstones in the sstable iterators (CASSANDRA-13340) * Fix CONTAINS filtering for null collections (CASSANDRA-13246) * Applying: Use a unique metric reservoir per test run when using Cassandra-wide metrics residing in MBeans (CASSANDRA-13216) * Propagate row deletions in 2i tables on upgrade (CASSANDRA-13320) http://git-wip-us.apache.org/repos/asf/cassandra/blob/a85eeefe/src/java/org/apache/cassandra/db/ClusteringPrefix.java -- diff --git a/src/java/org/apache/cassandra/db/ClusteringPrefix.java b/src/java/org/apache/cassandra/db/ClusteringPrefix.java index 340e237..1ecc92d 100644 --- a/src/java/org/apache/cassandra/db/ClusteringPrefix.java +++ b/src/java/org/apache/cassandra/db/ClusteringPrefix.java @@ -482,7 +482,7 @@ public interface ClusteringPrefix extends IMeasurableMemory, Clusterable } if (bound.size() == nextSize) -return nextKind.compareTo(bound.kind()); +return Kind.compare(nextKind, bound.kind()); // We know that we'll have exited already if nextSize < bound.size return -bound.kind().comparedToClustering; http://git-wip-us.apache.org/repos/asf/cassandra/blob/a85eeefe/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java -- diff --git a/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java b/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java index 79b8636..b977907 100644 --- a/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java +++ b/src/java/org/apache/cassandra/db/UnfilteredDeserializer.java @@ -690,6 +690,5 @@ public abstract class UnfilteredDeserializer } } } - } } http://git-wip-us.apache.org/repos/asf/cassandra/blob/a85eeefe/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java -- diff --git a/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java b/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java index b3c2e94..e21bd72 100644 --- a/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java +++ b/src/java/org/apache/cassandra/db/columniterator/SSTableIterator.java @@ -138,7 +138,14 @@ public class 
SSTableIterator extends AbstractSSTableIterator { assert deserializer != null; -if (!deserializer.hasNext() || deserializer.compareNextTo(end) > 0) +// We use a same reasoning as in handlePreSliceData regarding the strictness of the inequality below. +// We want to exclude deserialized unfiltered equal to end, because 1) we won't miss any rows since those +// woudn't be equal to a slice bound and 2) a end bound can be equal to a start bound +// (EXCL_END(x) == INCL_START(x) for instance) and in that case we don't want to return start bound because +// it's fundamentally excluded. And if the bound is a end (for a range tombstone), it means it's exactly +// our slice end
[jira] [Comment Edited] (CASSANDRA-13368) Exception Stack not Printed as Intended in Error Logs
[ https://issues.apache.org/jira/browse/CASSANDRA-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938645#comment-15938645 ] William R. Speirs edited comment on CASSANDRA-13368 at 3/23/17 3:59 PM: [~spo...@gmail.com] hu, that's pretty curious then. I double-checked and Cassandra is using SLF4J v1.7.2. From my logs for example: {noformat} ERROR [STREAM-OUT-/X.X.X.X] 2017-03-19 20:29:15,224 StreamSession.java:512 - [Stream #ac1424f0-0cfd-11e7-b5a3-cb3016d1596d] Streaming error occurred {noformat} That code is: {noformat} 501 public void onError(Throwable e) 502 { 503 if (e instanceof SocketTimeoutException) 504 { 505 logger.error("[Stream #{}] Streaming socket timed out. This means the session peer stopped responding or " + 506 "is still processing received data. If there is no sign of failure in the other end or a very " + 507 "dense table is being transferred you may want to increase streaming_socket_timeout_in_ms " + 508 "property. Current value is {}ms.", planId(), DatabaseDescriptor.getStreamingSocketTimeout(), e); 509 } 510 else 511 { 512 logger.error("[Stream #{}] Streaming error occurred", planId(), e); 513 } 514 // send session failure message 515 if (handler.isOutgoingConnected()) 516 handler.sendMessage(new SessionFailedMessage()); 517 // fail session 518 closeSession(State.FAILED); 519 } {noformat} I'm at a loss as to why {{e}} is not being properly interpreted as {{Throwable}} and therefore not printing the stack trace. Thoughts? was (Author: wspeirs): [~spo...@gmail.com] hu, that's pretty curious then. I double-checked and Cassandra is using SLF4J v1.7.2. From my logs for example: {noformat} ERROR [STREAM-OUT-/X.X.X.X] 2017-03-19 20:29:15,224 StreamSession.java:512 - [Stream #ac1424f0-0cfd-11e7-b5a3-cb3016d1596d] Streaming error occurred {noformat} That code is: {noformat} 501 public void onError(Throwable e) 502 { 503 if (e instanceof SocketTimeoutException) 504 { 505 logger.error("[Stream #{}] Streaming socket timed out. This means the session peer stopped responding or " + 506 "is still processing received data. If there is no sign of failure in the other end or a very " + 507 "dense table is being transferred you may want to increase streaming_socket_timeout_in_ms " + 508 "property. Current value is {}ms.", planId(), DatabaseDescriptor.getStreamingSocketTimeout(), e); 509 } 510 else 511 { 512 logger.error("[Stream #{}] Streaming error occurred", planId(), e); 513 } 514 // send session failure message 515 if (handler.isOutgoingConnected()) 516 handler.sendMessage(new SessionFailedMessage()); 517 // fail session 518 closeSession(State.FAILED); 519 } {noformat} So I'm at a loss as to why {{e}} is not being properly interpreted as {{Throwable}} and therefore printing the stack trace. Thoughts? > Exception Stack not Printed as Intended in Error Logs > - > > Key: CASSANDRA-13368 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13368 > Project: Cassandra > Issue Type: Bug >Reporter: William R. Speirs >Priority: Trivial > Labels: lhf > Fix For: 2.1.x > > Attachments: cassandra-13368-2.1.patch > > > There are a number of instances where it appears the programmer intended to > print a stack trace in an error message, but it is not actually being > printed. 
For example, in {{BlacklistedDirectories.java:54}}: > {noformat} > catch (Exception e) > { > JVMStabilityInspector.inspectThrowable(e); > logger.error("error registering MBean {}", MBEAN_NAME, e); > //Allow the server to start even if the bean can't be registered > } > {noformat} > The logger will use the second argument for the braces, but will ignore the > exception {{e}}. It would be helpful to have the stack traces of these > exceptions printed. I propose adding a second line that prints the full stack > trace: {{logger.error(e.getMessage(), e);}} > On the 2.1 branch, I found 8 instances of these types of messages: > {noformat} > db/BlacklistedDirectories.java:54:logger.error("error registering > MBean {}", MBEAN_NAME, e); > io/sstable/SSTableReader.java:512:logger.error("Corrupt sstable > {}; skipped", descriptor, e); > net/OutboundTcpConnection.java:228:logger.error("error > processing a message intended for {}", poolR
[jira] [Commented] (CASSANDRA-13368) Exception Stack not Printed as Intended in Error Logs
[ https://issues.apache.org/jira/browse/CASSANDRA-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938645#comment-15938645 ] William R. Speirs commented on CASSANDRA-13368: --- [~spo...@gmail.com] hu, that's pretty curious then. I double-checked and Cassandra is using SLF4J v1.7.2. From my logs for example: {noformat} ERROR [STREAM-OUT-/X.X.X.X] 2017-03-19 20:29:15,224 StreamSession.java:512 - [Stream #ac1424f0-0cfd-11e7-b5a3-cb3016d1596d] Streaming error occurred {noformat} That code is: {noformat} 501 public void onError(Throwable e) 502 { 503 if (e instanceof SocketTimeoutException) 504 { 505 logger.error("[Stream #{}] Streaming socket timed out. This means the session peer stopped responding or " + 506 "is still processing received data. If there is no sign of failure in the other end or a very " + 507 "dense table is being transferred you may want to increase streaming_socket_timeout_in_ms " + 508 "property. Current value is {}ms.", planId(), DatabaseDescriptor.getStreamingSocketTimeout(), e); 509 } 510 else 511 { 512 logger.error("[Stream #{}] Streaming error occurred", planId(), e); 513 } 514 // send session failure message 515 if (handler.isOutgoingConnected()) 516 handler.sendMessage(new SessionFailedMessage()); 517 // fail session 518 closeSession(State.FAILED); 519 } {noformat} So I'm at a loss as to why {{e}} is not being properly interpreted as {{Throwable}} and therefore printing the stack trace. Thoughts? > Exception Stack not Printed as Intended in Error Logs > - > > Key: CASSANDRA-13368 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13368 > Project: Cassandra > Issue Type: Bug >Reporter: William R. Speirs >Priority: Trivial > Labels: lhf > Fix For: 2.1.x > > Attachments: cassandra-13368-2.1.patch > > > There are a number of instances where it appears the programmer intended to > print a stack trace in an error message, but it is not actually being > printed. For example, in {{BlacklistedDirectories.java:54}}: > {noformat} > catch (Exception e) > { > JVMStabilityInspector.inspectThrowable(e); > logger.error("error registering MBean {}", MBEAN_NAME, e); > //Allow the server to start even if the bean can't be registered > } > {noformat} > The logger will use the second argument for the braces, but will ignore the > exception {{e}}. It would be helpful to have the stack traces of these > exceptions printed. 
I propose adding a second line that prints the full stack > trace: {{logger.error(e.getMessage(), e);}} > On the 2.1 branch, I found 8 instances of these types of messages: > {noformat} > db/BlacklistedDirectories.java:54:logger.error("error registering > MBean {}", MBEAN_NAME, e); > io/sstable/SSTableReader.java:512:logger.error("Corrupt sstable > {}; skipped", descriptor, e); > net/OutboundTcpConnection.java:228:logger.error("error > processing a message intended for {}", poolReference.endPoint(), e); > net/OutboundTcpConnection.java:314:logger.error("error > writing to {}", poolReference.endPoint(), e); > service/CassandraDaemon.java:231:logger.error("Exception in > thread {}", t, e); > service/CassandraDaemon.java:562:logger.error("error > registering MBean {}", MBEAN_NAME, e); > streaming/StreamSession.java:512:logger.error("[Stream #{}] > Streaming error occurred", planId(), e); > transport/Server.java:442:logger.error("Problem retrieving > RPC address for {}", endpoint, e); > {noformat} > And one where it'll print the {{toString()}} version of the exception: > {noformat} > db/Directories.java:689:logger.error("Could not calculate the > size of {}. {}", input, e); > {noformat} > I'm happy to create a patch for each branch, just need a little guidance on > how to do so. We're currently running 2.1 so I started there. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
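For reference on the parameterized-logging behaviour under discussion: per SLF4J's documented contract (1.6.0 and later), a trailing Throwable left over after placeholder substitution is logged together with its stack trace, whereas a Throwable consumed by a {} placeholder is only rendered via toString(). A self-contained sketch, assuming slf4j-api plus a logback binding on the classpath:
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ThrowableLoggingSketch
{
    private static final Logger logger = LoggerFactory.getLogger(ThrowableLoggingSketch.class);

    public static void main(String[] args)
    {
        Exception e = new RuntimeException("boom");

        // One placeholder, two arguments: "plan-id" fills the {}, the leftover Throwable is
        // treated as the exception and its stack trace is printed (same shape as the
        // StreamSession.java:512 call quoted above).
        logger.error("[Stream #{}] Streaming error occurred", "plan-id", e);

        // Two placeholders, two arguments: the Throwable is consumed by the second {} and only
        // its toString() ends up in the message -- the Directories.java:689 case from the report.
        logger.error("Could not calculate the size of {}. {}", "some-file", e);
    }
}
{code}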
[jira] [Commented] (CASSANDRA-13226) StreamPlan for incremental repairs flushing memtables unnecessarily
[ https://issues.apache.org/jira/browse/CASSANDRA-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938617#comment-15938617 ] Paulo Motta commented on CASSANDRA-13226: - I think the idea behind flushing on stream was to send the most up-to-date data during bootstrap/rebuild/decommission/replace, but this doesn't apply to repair since you will end up overstreaming non-validated data as pointed out by [~brstgt]. In any case, this minor improvement belongs in another ticket, since this ticket is already closed. > StreamPlan for incremental repairs flushing memtables unnecessarily > --- > > Key: CASSANDRA-13226 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13226 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 4.0 > > > Since incremental repairs are run against a fixed dataset, there's no need to > flush memtables when streaming for them. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
cassandra-builds git commit: Enable dtest-large target jobs
Repository: cassandra-builds Updated Branches: refs/heads/master a018b48b4 -> 160eecc93 Enable dtest-large target jobs Project: http://git-wip-us.apache.org/repos/asf/cassandra-builds/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra-builds/commit/160eecc9 Tree: http://git-wip-us.apache.org/repos/asf/cassandra-builds/tree/160eecc9 Diff: http://git-wip-us.apache.org/repos/asf/cassandra-builds/diff/160eecc9 Branch: refs/heads/master Commit: 160eecc93bedc9a035fb482499ee6c5db58eae47 Parents: a018b48 Author: Michael Shuler Authored: Thu Mar 23 10:38:58 2017 -0500 Committer: Michael Shuler Committed: Thu Mar 23 10:38:58 2017 -0500 -- jenkins-dsl/cassandra_job_dsl_seed.groovy | 7 ++- 1 file changed, 6 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra-builds/blob/160eecc9/jenkins-dsl/cassandra_job_dsl_seed.groovy -- diff --git a/jenkins-dsl/cassandra_job_dsl_seed.groovy b/jenkins-dsl/cassandra_job_dsl_seed.groovy index 515965d..1ca7108 100644 --- a/jenkins-dsl/cassandra_job_dsl_seed.groovy +++ b/jenkins-dsl/cassandra_job_dsl_seed.groovy @@ -7,6 +7,8 @@ def jobDescription = 'Apache Cassandra DSL-generated job - DSL git repo: https://git-wip-us.apache.org/repos/asf?p=cassandra-builds.git";>cassandra-builds' def jdkLabel = 'JDK 1.8 (latest)' def slaveLabel = 'cassandra' +// The dtest-large target needs to run on >=32G slaves, so we provide an "OR" list of those servers +def largeSlaveLabel = 'cassandra6||cassandra7' def mainRepo = 'https://git-wip-us.apache.org/repos/asf/cassandra.git' def buildsRepo = 'https://git.apache.org/cassandra-builds.git' def dtestRepo = 'https://github.com/riptano/cassandra-dtest.git' @@ -16,7 +18,7 @@ def cassandraBranches = ['cassandra-2.2', 'cassandra-3.0', 'cassandra-3.11', 'tr // Ant test targets def testTargets = ['test', 'test-all', 'test-burn', 'test-cdc', 'test-compression'] // Dtest test targets -def dtestTargets = ['dtest', 'dtest-novnode', 'dtest-offheap'] // dtest-large target exists, but no large servers to run on.. +def dtestTargets = ['dtest', 'dtest-novnode', 'dtest-offheap', 'dtest-large'] // @@ -294,6 +296,9 @@ cassandraBranches.each { job("${jobNamePrefix}-${targetName}") { disabled(false) using('Cassandra-template-dtest') +if (targetName == 'dtest-large') { +label(largeSlaveLabel) +} configure { node -> node / scm / branches / 'hudson.plugins.git.BranchSpec' / name(branchName) }
[jira] [Updated] (CASSANDRA-13340) Bugs handling range tombstones in the sstable iterators
[ https://issues.apache.org/jira/browse/CASSANDRA-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Branimir Lambov updated CASSANDRA-13340: Status: Ready to Commit (was: Patch Available) > Bugs handling range tombstones in the sstable iterators > --- > > Key: CASSANDRA-13340 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13340 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Sylvain Lebresne >Priority: Critical > Fix For: 3.0.x, 3.11.x > > > There is 2 bugs in the way sstable iterators handle range tombstones: > # empty range tombstones can be returned due to a strict comparison that > shouldn't be. > # the sstable reversed iterator can actually return completely bogus results > when range tombstones are spanning multiple index blocks. > The 2 bugs are admittedly separate but as they both impact the same area of > code and are both range tombstones related, I suggest just fixing both here > (unless something really really mind). > Marking the ticket critical mostly for the 2nd bug: it can truly make use > return bad results on reverse queries. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13340) Bugs handling range tombstones in the sstable iterators
[ https://issues.apache.org/jira/browse/CASSANDRA-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938583#comment-15938583 ] Branimir Lambov commented on CASSANDRA-13340: - LGTM Nit: there are a few more inverted meanings: [in these two comments|https://github.com/pcmanus/cassandra/blob/94c0a9cca6b072e5f35c666c56e7ad1eb0577e7c/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java#L189] as well as [this {{lastOfPrevious}}|https://github.com/pcmanus/cassandra/blob/94c0a9cca6b072e5f35c666c56e7ad1eb0577e7c/src/java/org/apache/cassandra/db/columniterator/SSTableReversedIterator.java#L352]. > Bugs handling range tombstones in the sstable iterators > --- > > Key: CASSANDRA-13340 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13340 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Sylvain Lebresne >Priority: Critical > Fix For: 3.0.x, 3.11.x > > > There is 2 bugs in the way sstable iterators handle range tombstones: > # empty range tombstones can be returned due to a strict comparison that > shouldn't be. > # the sstable reversed iterator can actually return completely bogus results > when range tombstones are spanning multiple index blocks. > The 2 bugs are admittedly separate but as they both impact the same area of > code and are both range tombstones related, I suggest just fixing both here > (unless something really really mind). > Marking the ticket critical mostly for the 2nd bug: it can truly make use > return bad results on reverse queries. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk
Merge branch 'cassandra-3.11' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/66cd42ef Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/66cd42ef Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/66cd42ef Branch: refs/heads/trunk Commit: 66cd42ef615324e060004610dfad4ca1d4488f68 Parents: 6a8f150 3e95c5b Author: Ariel Weisberg Authored: Thu Mar 23 11:33:27 2017 -0400 Committer: Ariel Weisberg Committed: Thu Mar 23 11:33:27 2017 -0400 -- CHANGES.txt| 1 + test/conf/logback-test.xml | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/66cd42ef/CHANGES.txt -- diff --cc CHANGES.txt index 6897fb0,c58fad8..906ea64 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,51 -1,5 +1,52 @@@ +4.0 + * Upgrade junit from 4.6 to 4.12 (CASSANDRA-13360) + * Cleanup ParentRepairSession after repairs (CASSANDRA-13359) + * Incremental repair not streaming correct sstables (CASSANDRA-13328) + * Upgrade the jna version to 4.3.0 (CASSANDRA-13300) + * Add the currentTimestamp, currentDate, currentTime and currentTimeUUID functions (CASSANDRA-13132) + * Remove config option index_interval (CASSANDRA-10671) + * Reduce lock contention for collection types and serializers (CASSANDRA-13271) + * Make it possible to override MessagingService.Verb ids (CASSANDRA-13283) + * Avoid synchronized on prepareForRepair in ActiveRepairService (CASSANDRA-9292) + * Adds the ability to use uncompressed chunks in compressed files (CASSANDRA-10520) + * Don't flush sstables when streaming for incremental repair (CASSANDRA-13226) + * Remove unused method (CASSANDRA-13227) + * Fix minor bugs related to #9143 (CASSANDRA-13217) + * Output warning if user increases RF (CASSANDRA-13079) + * Remove pre-3.0 streaming compatibility code for 4.0 (CASSANDRA-13081) + * Add support for + and - operations on dates (CASSANDRA-11936) + * Fix consistency of incrementally repaired data (CASSANDRA-9143) + * Increase commitlog version (CASSANDRA-13161) + * Make TableMetadata immutable, optimize Schema (CASSANDRA-9425) + * Refactor ColumnCondition (CASSANDRA-12981) + * Parallelize streaming of different keyspaces (CASSANDRA-4663) + * Improved compactions metrics (CASSANDRA-13015) + * Speed-up start-up sequence by avoiding un-needed flushes (CASSANDRA-13031) + * Use Caffeine (W-TinyLFU) for on-heap caches (CASSANDRA-10855) + * Thrift removal (CASSANDRA-5) + * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716) + * Add column definition kind to dropped columns in schema (CASSANDRA-12705) + * Add (automate) Nodetool Documentation (CASSANDRA-12672) + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736) + * Reject invalid replication settings when creating or altering a keyspace (CASSANDRA-12681) + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter (CASSANDRA-12422) + * Use new token allocation for non bootstrap case as well (CASSANDRA-13080) + * Avoid byte-array copy when key cache is disabled (CASSANDRA-13084) + * Require forceful decommission if number of nodes is less than replication factor (CASSANDRA-12510) + * Allow IN restrictions on column families with collections (CASSANDRA-12654) + * Log message size in trace message in OutboundTcpConnection (CASSANDRA-13028) + * Add timeUnit Days for cassandra-stress (CASSANDRA-13029) + * Add mutation size and batch metrics (CASSANDRA-12649) + * Add method to get size of endpoints to TokenMetadata (CASSANDRA-12999) + * Expose time spent 
waiting in thread pool queue (CASSANDRA-8398) + * Conditionally update index built status to avoid unnecessary flushes (CASSANDRA-12969) + * cqlsh auto completion: refactor definition of compaction strategy options (CASSANDRA-12946) + * Add support for arithmetic operators (CASSANDRA-11935) + * Add histogram for delay to deliver hints (CASSANDRA-13234) + + 3.11.0 + * Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern (CASSANDRA-13317) * Possible AssertionError in UnfilteredRowIteratorWithLowerBound (CASSANDRA-13366) * Support unaligned memory access for AArch64 (CASSANDRA-13326) * Improve SASI range iterator efficiency on intersection with an empty range (CASSANDRA-12915).
[2/3] cassandra git commit: Set true in test/conf/logback-test.xml Patch by Michael Kjellman; Reviewed by Ariel Weisberg for CASSANDRA-13317
Set true in test/conf/logback-test.xml Patch my Michael Kjellman; Reviewed by Ariel Weisberg for CASSANDRA-13317 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3e95c5b0 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3e95c5b0 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3e95c5b0 Branch: refs/heads/trunk Commit: 3e95c5b0c574383e7da9a5e152b7be8aa122af9f Parents: f55cb88 Author: Ariel Weisberg Authored: Wed Mar 22 15:37:16 2017 -0400 Committer: Ariel Weisberg Committed: Thu Mar 23 11:31:47 2017 -0400 -- CHANGES.txt| 1 + test/conf/logback-test.xml | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3e95c5b0/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index f4e48ff..c58fad8 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.11.0 + * Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern (CASSANDRA-13317) * Possible AssertionError in UnfilteredRowIteratorWithLowerBound (CASSANDRA-13366) * Support unaligned memory access for AArch64 (CASSANDRA-13326) * Improve SASI range iterator efficiency on intersection with an empty range (CASSANDRA-12915). http://git-wip-us.apache.org/repos/asf/cassandra/blob/3e95c5b0/test/conf/logback-test.xml -- diff --git a/test/conf/logback-test.xml b/test/conf/logback-test.xml index addce22..48f93bc 100644 --- a/test/conf/logback-test.xml +++ b/test/conf/logback-test.xml @@ -68,6 +68,7 @@ 0 1024 + true
[1/3] cassandra git commit: Set true in test/conf/logback-test.xml Patch by Michael Kjellman; Reviewed by Ariel Weisberg for CASSANDRA-13317
Repository: cassandra Updated Branches: refs/heads/cassandra-3.11 f55cb88ab -> 3e95c5b0c refs/heads/trunk 6a8f15031 -> 66cd42ef6 Set true in test/conf/logback-test.xml Patch my Michael Kjellman; Reviewed by Ariel Weisberg for CASSANDRA-13317 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/3e95c5b0 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/3e95c5b0 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/3e95c5b0 Branch: refs/heads/cassandra-3.11 Commit: 3e95c5b0c574383e7da9a5e152b7be8aa122af9f Parents: f55cb88 Author: Ariel Weisberg Authored: Wed Mar 22 15:37:16 2017 -0400 Committer: Ariel Weisberg Committed: Thu Mar 23 11:31:47 2017 -0400 -- CHANGES.txt| 1 + test/conf/logback-test.xml | 1 + 2 files changed, 2 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/3e95c5b0/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index f4e48ff..c58fad8 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.11.0 + * Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern (CASSANDRA-13317) * Possible AssertionError in UnfilteredRowIteratorWithLowerBound (CASSANDRA-13366) * Support unaligned memory access for AArch64 (CASSANDRA-13326) * Improve SASI range iterator efficiency on intersection with an empty range (CASSANDRA-12915). http://git-wip-us.apache.org/repos/asf/cassandra/blob/3e95c5b0/test/conf/logback-test.xml -- diff --git a/test/conf/logback-test.xml b/test/conf/logback-test.xml index addce22..48f93bc 100644 --- a/test/conf/logback-test.xml +++ b/test/conf/logback-test.xml @@ -68,6 +68,7 @@ 0 1024 + true
[jira] [Commented] (CASSANDRA-13226) StreamPlan for incremental repairs flushing memtables unnecessarily
[ https://issues.apache.org/jira/browse/CASSANDRA-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938577#comment-15938577 ] Benjamin Roth commented on CASSANDRA-13226: --- That does not make sense to me. Why should more be streamed than requested? That sounds like a waste of resources to me. Streaming more than a repair requires assumes that the system is still creating inconsistent data during the repair. > StreamPlan for incremental repairs flushing memtables unnecessarily > --- > > Key: CASSANDRA-13226 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13226 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 4.0 > > > Since incremental repairs are run against a fixed dataset, there's no need to > flush memtables when streaming for them. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13247) index on udt built failed and no data could be inserted
[ https://issues.apache.org/jira/browse/CASSANDRA-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938576#comment-15938576 ] Benjamin Lerer commented on CASSANDRA-13247: The patch looks good. Nice work :-). I just have two minor nits: * Can you remove the {{TODO}} comment. If you think that adding such a feature might be usefull it is probably better to open a JIRA to keep track of it. * If you want to check that a query will not return any results in the unit tests it is better to use {{assertEmpty}} as it is more explicite. No need to re-trigger CI for those changes. > index on udt built failed and no data could be inserted > --- > > Key: CASSANDRA-13247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13247 > Project: Cassandra > Issue Type: Bug >Reporter: mashudong >Assignee: Andrés de la Peña >Priority: Critical > Attachments: udt_index.txt > > > index on udt built failed and no data could be inserted > steps to reproduce: > CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': '2'} AND durable_writes = true; > CREATE TYPE ks1.address ( > street text, > city text, > zip_code int, > phones set > ); > CREATE TYPE ks1.fullname ( > firstname text, > lastname text > ); > CREATE TABLE ks1.users ( > id uuid PRIMARY KEY, > addresses map>, > age int, > direct_reports set>, > name fullname > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > SELECT * FROM users where name = { firstname : 'first' , lastname : 'last'} > allow filtering; > ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] > message="Operation failed - received 0 responses and 1 failures" > info={'failures': 1, 'received_responses': 0, 'required_responses': 1, > 'consistency': 'ONE'} > WARN [ReadStage-2] 2017-02-22 16:59:33,392 > AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread > Thread[ReadStage-2,5,main]: {} > java.lang.AssertionError: Only CONTAINS and CONTAINS_KEY are supported for > 'complex' types > at > org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:683) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:303) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:120) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:110) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.add(Transformation.java:162) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:128) > ~[apache-cassandra-3.9.jar:3.9] > at > 
org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:292) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:281) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:289) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:138) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:134) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) > ~[apache-cas
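As a quick illustration of the {{assertEmpty}} suggestion in the review comment above, here is a minimal sketch of a CQLTester-based unit test. The table schema and query are made up for illustration, and it assumes the {{createTable}}/{{execute}}/{{assertEmpty}} helpers of the existing test framework; it is not taken from the patch itself.

{code}
import org.junit.Test;

import org.apache.cassandra.cql3.CQLTester;

// Minimal sketch only: schema and queries are illustrative, not from the patch.
public class EmptyResultExampleTest extends CQLTester
{
    @Test
    public void testQueryReturnsNoRows() throws Throwable
    {
        createTable("CREATE TABLE %s (k int PRIMARY KEY, v int)");
        execute("INSERT INTO %s (k, v) VALUES (0, 0)");

        // assertEmpty states the intent directly, instead of asserting on a
        // result set that merely happens to contain no rows.
        assertEmpty(execute("SELECT * FROM %s WHERE k = 1"));
    }
}
{code}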
[cassandra] Git Push Summary
Repository: cassandra Updated Branches: refs/heads/cassandra-13317-3.11 [deleted] cbb20cd35
[jira] [Commented] (CASSANDRA-13226) StreamPlan for incremental repairs flushing memtables unnecessarily
[ https://issues.apache.org/jira/browse/CASSANDRA-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938562#comment-15938562 ] Blake Eggleston commented on CASSANDRA-13226: - [~brstgt] I think the idea behind flushing on stream for full is that you'll be streaming even more recent data than when the merkle tree was generated, which there's really no harm in doing. > StreamPlan for incremental repairs flushing memtables unnecessarily > --- > > Key: CASSANDRA-13226 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13226 > Project: Cassandra > Issue Type: Bug >Reporter: Blake Eggleston >Assignee: Blake Eggleston >Priority: Minor > Fix For: 4.0 > > > Since incremental repairs are run against a fixed dataset, there's no need to > flush memtables when streaming for them. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[cassandra] Git Push Summary
Repository: cassandra Updated Branches: refs/heads/cassandra-13317-trunk [deleted] cb02a7255
[jira] [Commented] (CASSANDRA-13368) Exception Stack not Printed as Intended in Error Logs
[ https://issues.apache.org/jira/browse/CASSANDRA-13368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938551#comment-15938551 ] Stefan Podkowinski commented on CASSANDRA-13368: Thanks for having a look at this, William. I'd assume that you noticed this behavior from your local log files? I'm just a bit confused as the [SLF4J FAQ|https://www.slf4j.org/faq.html#paramException] tells that the described usage is perfectly valid. > Exception Stack not Printed as Intended in Error Logs > - > > Key: CASSANDRA-13368 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13368 > Project: Cassandra > Issue Type: Bug >Reporter: William R. Speirs >Priority: Trivial > Labels: lhf > Fix For: 2.1.x > > Attachments: cassandra-13368-2.1.patch > > > There are a number of instances where it appears the programmer intended to > print a stack trace in an error message, but it is not actually being > printed. For example, in {{BlacklistedDirectories.java:54}}: > {noformat} > catch (Exception e) > { > JVMStabilityInspector.inspectThrowable(e); > logger.error("error registering MBean {}", MBEAN_NAME, e); > //Allow the server to start even if the bean can't be registered > } > {noformat} > The logger will use the second argument for the braces, but will ignore the > exception {{e}}. It would be helpful to have the stack traces of these > exceptions printed. I propose adding a second line that prints the full stack > trace: {{logger.error(e.getMessage(), e);}} > On the 2.1 branch, I found 8 instances of these types of messages: > {noformat} > db/BlacklistedDirectories.java:54:logger.error("error registering > MBean {}", MBEAN_NAME, e); > io/sstable/SSTableReader.java:512:logger.error("Corrupt sstable > {}; skipped", descriptor, e); > net/OutboundTcpConnection.java:228:logger.error("error > processing a message intended for {}", poolReference.endPoint(), e); > net/OutboundTcpConnection.java:314:logger.error("error > writing to {}", poolReference.endPoint(), e); > service/CassandraDaemon.java:231:logger.error("Exception in > thread {}", t, e); > service/CassandraDaemon.java:562:logger.error("error > registering MBean {}", MBEAN_NAME, e); > streaming/StreamSession.java:512:logger.error("[Stream #{}] > Streaming error occurred", planId(), e); > transport/Server.java:442:logger.error("Problem retrieving > RPC address for {}", endpoint, e); > {noformat} > And one where it'll print the {{toString()}} version of the exception: > {noformat} > db/Directories.java:689:logger.error("Could not calculate the > size of {}. {}", input, e); > {noformat} > I'm happy to create a patch for each branch, just need a little guidance on > how to do so. We're currently running 2.1 so I started there. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
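For reference, a small standalone sketch of the SLF4J behaviour under discussion; the logger, MBean name and exception are made up. Per the FAQ linked above, since SLF4J 1.6.0 a trailing {{Throwable}} with no matching placeholder is logged with its stack trace, which is why the existing calls are considered valid there.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Slf4jThrowableExample
{
    private static final Logger logger = LoggerFactory.getLogger(Slf4jThrowableExample.class);

    public static void main(String[] args)
    {
        Exception e = new IllegalStateException("boom");

        // Parameterized form: the trailing Throwable has no {} placeholder, so
        // SLF4J (1.6.0+) treats it as the exception and prints its stack trace.
        logger.error("error registering MBean {}", "org.example:type=Demo", e);

        // Two-line alternative proposed in the ticket description.
        logger.error(e.getMessage(), e);
    }
}
{code}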
[jira] [Commented] (CASSANDRA-13340) Bugs handling range tombstones in the sstable iterators
[ https://issues.apache.org/jira/browse/CASSANDRA-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938545#comment-15938545 ] Sylvain Lebresne commented on CASSANDRA-13340: -- bq. Previous block sometimes means the next we'll iterate to (previous on disk), other times the previous we iterated. You're right, that's confusing. That said, I tried switching the newly introduced {{hasPrevious/NextBlock}} but at least to me that felt pretty confusing, so I decided to switch the existing usage instead. It basically feels more logical to me, though that's possibly somewhat personal. In any case it's consistent now: previous/next refer to the previous/next block we'll iterate to. bq. {{skipLast/First}} have the same problem If you mean that first/last can be a tad confusing when we're reading a block in one sense but iterating on its items afterward in the other sense, then I agree, but I didn't feel that inverting those really improved things. I did add {{IteratedItem}} and completed the comments so it's hopefully clearer. bq. but {{readCurrentBlock}} can do without the latter, can't it? Absolutely, removed the redundant argument, thanks. > Bugs handling range tombstones in the sstable iterators > --- > > Key: CASSANDRA-13340 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13340 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Sylvain Lebresne >Priority: Critical > Fix For: 3.0.x, 3.11.x > > > There is 2 bugs in the way sstable iterators handle range tombstones: > # empty range tombstones can be returned due to a strict comparison that > shouldn't be. > # the sstable reversed iterator can actually return completely bogus results > when range tombstones are spanning multiple index blocks. > The 2 bugs are admittedly separate but as they both impact the same area of > code and are both range tombstones related, I suggest just fixing both here > (unless something really really mind). > Marking the ticket critical mostly for the 2nd bug: it can truly make use > return bad results on reverse queries. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
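To make the naming discussion easier to follow, here is a toy sketch of the convention settled on above; it is not the actual iterator code, just an illustration that when index blocks are walked in reverse, "next" means next in iteration order, i.e. the previous block on disk.

{code}
import java.util.Arrays;
import java.util.List;

// Toy sketch only, unrelated to the real reader classes: it just demonstrates
// the agreed meaning of previous/next when blocks are iterated in reverse.
public class ReverseBlockIterationExample
{
    public static void main(String[] args)
    {
        List<String> blocksOnDisk = Arrays.asList("block0", "block1", "block2", "block3");

        // Iterate from the last on-disk block towards the first.
        for (int current = blocksOnDisk.size() - 1; current >= 0; current--)
        {
            // "next" block = the one we'll iterate to next = previous one on disk.
            boolean hasNextBlock = current > 0;
            // "previous" block = the one we already iterated = next one on disk.
            boolean hasPreviousBlock = current < blocksOnDisk.size() - 1;

            System.out.printf("%s hasNextBlock=%b hasPreviousBlock=%b%n",
                              blocksOnDisk.get(current), hasNextBlock, hasPreviousBlock);
        }
    }
}
{code}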
cassandra git commit: Merge branch 'cassandra-13317-3.11' into HEAD
Repository: cassandra Updated Branches: refs/heads/cassandra-13317-3.11 [created] cbb20cd35 refs/heads/cassandra-13317-trunk [created] cb02a7255 Merge branch 'cassandra-13317-3.11' into HEAD Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/cb02a725 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/cb02a725 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/cb02a725 Branch: refs/heads/cassandra-13317-trunk Commit: cb02a7255988ed4511b2ffc7a4daf09ce9d2447e Parents: 6a8f150 cbb20cd Author: Ariel Weisberg Authored: Thu Mar 23 10:47:15 2017 -0400 Committer: Ariel Weisberg Committed: Thu Mar 23 10:47:15 2017 -0400 -- CHANGES.txt| 2 ++ test/conf/logback-test.xml | 1 + 2 files changed, 3 insertions(+) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/cb02a725/CHANGES.txt -- diff --cc CHANGES.txt index 6897fb0,2af351c..5be5c61 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,50 -1,5 +1,52 @@@ +4.0 + * Upgrade junit from 4.6 to 4.12 (CASSANDRA-13360) + * Cleanup ParentRepairSession after repairs (CASSANDRA-13359) + * Incremental repair not streaming correct sstables (CASSANDRA-13328) + * Upgrade the jna version to 4.3.0 (CASSANDRA-13300) + * Add the currentTimestamp, currentDate, currentTime and currentTimeUUID functions (CASSANDRA-13132) + * Remove config option index_interval (CASSANDRA-10671) + * Reduce lock contention for collection types and serializers (CASSANDRA-13271) + * Make it possible to override MessagingService.Verb ids (CASSANDRA-13283) + * Avoid synchronized on prepareForRepair in ActiveRepairService (CASSANDRA-9292) + * Adds the ability to use uncompressed chunks in compressed files (CASSANDRA-10520) + * Don't flush sstables when streaming for incremental repair (CASSANDRA-13226) + * Remove unused method (CASSANDRA-13227) + * Fix minor bugs related to #9143 (CASSANDRA-13217) + * Output warning if user increases RF (CASSANDRA-13079) + * Remove pre-3.0 streaming compatibility code for 4.0 (CASSANDRA-13081) + * Add support for + and - operations on dates (CASSANDRA-11936) + * Fix consistency of incrementally repaired data (CASSANDRA-9143) + * Increase commitlog version (CASSANDRA-13161) + * Make TableMetadata immutable, optimize Schema (CASSANDRA-9425) + * Refactor ColumnCondition (CASSANDRA-12981) + * Parallelize streaming of different keyspaces (CASSANDRA-4663) + * Improved compactions metrics (CASSANDRA-13015) + * Speed-up start-up sequence by avoiding un-needed flushes (CASSANDRA-13031) + * Use Caffeine (W-TinyLFU) for on-heap caches (CASSANDRA-10855) + * Thrift removal (CASSANDRA-5) + * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716) + * Add column definition kind to dropped columns in schema (CASSANDRA-12705) + * Add (automate) Nodetool Documentation (CASSANDRA-12672) + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736) + * Reject invalid replication settings when creating or altering a keyspace (CASSANDRA-12681) + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter (CASSANDRA-12422) + * Use new token allocation for non bootstrap case as well (CASSANDRA-13080) + * Avoid byte-array copy when key cache is disabled (CASSANDRA-13084) + * Require forceful decommission if number of nodes is less than replication factor (CASSANDRA-12510) + * Allow IN restrictions on column families with collections (CASSANDRA-12654) + * Log message size in trace message in OutboundTcpConnection (CASSANDRA-13028) + * Add timeUnit Days for cassandra-stress 
(CASSANDRA-13029) + * Add mutation size and batch metrics (CASSANDRA-12649) + * Add method to get size of endpoints to TokenMetadata (CASSANDRA-12999) + * Expose time spent waiting in thread pool queue (CASSANDRA-8398) + * Conditionally update index built status to avoid unnecessary flushes (CASSANDRA-12969) + * cqlsh auto completion: refactor definition of compaction strategy options (CASSANDRA-12946) + * Add support for arithmetic operators (CASSANDRA-11935) + * Add histogram for delay to deliver hints (CASSANDRA-13234) + + + 3.11.1 + * Default logging we ship will incorrectly print "?:?" for "%F:%L" pattern (CASSANDRA-13317) 3.11.0 * Possible AssertionError in UnfilteredRowIteratorWithLowerBound (CASSANDRA-13366) * Support unaligned memory access for AArch64 (CASSANDRA-13326)
[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938462#comment-15938462 ] Ariel Weisberg commented on CASSANDRA-13370: I think we should remove the seed anyways so that subsequent usage of secure random doesn't also fail only on OX X. These tests have been failing for a long time without being fixed. > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Attachments: 13370-trunk.txt > > > Seems like MacOS(El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which is causing CipherFactoryTest failed: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines. But not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest to revert the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} is removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for MacOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
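For context, a small standalone sketch of the two seeding patterns mentioned above; it is not the test code itself. Letting {{SecureRandom}} seed itself avoids the {{setSeed()}} path that, with the NativePRNG selected via {{-Djava.security.egd=file:/dev/urandom}}, writes back to {{/dev/urandom}} and fails on macOS.

{code}
import java.security.SecureRandom;

public class SecureRandomSeedExample
{
    public static void main(String[] args)
    {
        // Self-seeded: no setSeed() call, so nothing is written back to the
        // seed source and the macOS "Operation not permitted" error is avoided.
        SecureRandom selfSeeded = new SecureRandom();
        byte[] bytes = new byte[16];
        selfSeeded.nextBytes(bytes);

        // Explicit seeding goes through setSeed(); with the NativePRNG this is
        // what attempts the write to /dev/urandom shown in the stack trace above.
        SecureRandom explicitlySeeded = new SecureRandom();
        explicitlySeeded.setSeed(new byte[]{ 1, 2, 3, 4 });
    }
}
{code}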
[jira] [Comment Edited] (CASSANDRA-13113) test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937893#comment-15937893 ] Alex Petrov edited comment on CASSANDRA-13113 at 3/23/17 2:34 PM: -- I've investigated a bit deeper. Although in my opinion it's kind of a regression, even if it's not super-serious, but it has some user-facing implications. I've ran {{bisect}} and narrowed it down to [this commit|https://github.com/apache/cassandra/commit/c607d76413be81a0e125c5780e068d7ab7594612] Checking logs reveals that before this commit, we had error messages in the form of: {code} Error from server: code=0100 [Bad credentials] message="Error during authentication of user cassandra : org.apache.cassandra.exceptions.UnavailableException: Cannot achieve consistency level QUORUM" {code} After, it's changed to {code} Error from server: code= [Server error] message="java.lang.RuntimeException: org.apache.cassandra.exceptions.UnavailableException: Cannot achieve consistency level QUORUM" {code} I've checked underlying code and it looks like Guava was doing some unwrapping in case of runtime exceptions on [cache loading|http://grepcode.com/file/repo1.maven.org/maven2/com.google.guava/guava/11.0/com/google/common/cache/LocalCache.java#2234] (might be a wrong guava version but you get the idea). Previously, we had to unwrap the {{UncheckedExecutionException}} in order to extract cause and [turn it into authentication exception|https://github.com/ifesdjeen/cassandra/commit/c607d76413be81a0e125c5780e068d7ab7594612#diff-ef1e335e8d51911f09bcc735b0632c5cL97], in order to trigger a correct error code. Now, we don't have to since exception isn't un/rewrapped. The stack trace of the other exception that was happening and causing {{Server error}} instead of {{Bad Credentials}} was {code} at org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:487) [main/:na] at org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:310) [main/:na] at org.apache.cassandra.service.ClientState.login(ClientState.java:271) [main/:na] at org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80) [main/:na] at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517) [main/:na] at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410) [main/:na] at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.39.Final.jar:4.0.39.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366) [netty-all-4.0.39.Final.jar:4.0.39.Final] at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35) [netty-all-4.0.39.Final.jar:4.0.39.Final] at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357) [netty-all-4.0.39.Final.jar:4.0.39.Final] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121] at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) [main/:na] at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [main/:na] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121] {code} Consequently, I have removed guava-specific exception rewrapping. 
The other places (JMX permissions cache, Credentials cache, Passwords cache and Permissions cache) look fine, with an exception with Permission cache where we do re-wrap an exception but that doesn't change bubbling/error code. |[trunk|https://github.com/apache/cassandra/compare/trunk...ifesdjeen:13367-trunk]|[dtest|https://cassci.datastax.com/job/ifesdjeen-13367-trunk-dtest/]|[testall|https://cassci.datastax.com/job/ifesdjeen-13367-trunk-testall/]| was (Author: ifesdjeen): I've investigated a bit deeper. Although in my opinion it's kind of a regression, even if it's not super-serious, but it has some user-facing implications. I've ran {{bisect}} and narrowed it down to [this commit|https://github.com/apache/cassandra/commit/c607d76413be81a0e125c5780e068d7ab7594612] Checking logs reveals that before this commit, we had error messages in the form of: {code} Error from server: code=0100 [Bad credentials] message="Error during authentication of user cassandra : org.apache.cassandra.exceptions.UnavailableException: Cannot achieve consistency level QUORUM" {code} After, it's changed to {code} Error from server: code= [Server error] message="java.lang.RuntimeException: org.apache.cassandra.exceptions.UnavailableException: Cannot achieve consistency level QUORUM" {code} I've checked underly
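As a standalone illustration of the wrapping behaviour described above (not the auth code itself, and the exception type is a stand-in): Guava's {{LoadingCache.getUnchecked}} wraps unchecked loader failures in {{UncheckedExecutionException}}, so the original cause has to be unwrapped before it can be mapped to a specific error code.

{code}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.UncheckedExecutionException;

public class CacheUnwrapExample
{
    // Stand-in for the exception thrown while loading (e.g. an unavailable error).
    static class UnavailableException extends RuntimeException
    {
        UnavailableException(String message) { super(message); }
    }

    public static void main(String[] args)
    {
        LoadingCache<String, String> roles = CacheBuilder.newBuilder().build(
            new CacheLoader<String, String>()
            {
                @Override
                public String load(String name)
                {
                    throw new UnavailableException("Cannot achieve consistency level QUORUM");
                }
            });

        try
        {
            roles.getUnchecked("cassandra");
        }
        catch (UncheckedExecutionException e)
        {
            // Guava wraps the loader's RuntimeException; the cause must be
            // unwrapped before deciding which error code to surface.
            System.out.println("unwrapped cause: " + e.getCause().getMessage());
        }
    }
}
{code}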
[jira] [Commented] (CASSANDRA-12151) Audit logging for database activity
[ https://issues.apache.org/jira/browse/CASSANDRA-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938440#comment-15938440 ] nhanpt14 commented on CASSANDRA-12151: -- Could we have Data Auditing in Apache Cassandra like DataStax Enterprise? I hope it will be supported in the next release. > Audit logging for database activity > --- > > Key: CASSANDRA-12151 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12151 > Project: Cassandra > Issue Type: New Feature >Reporter: stefan setyadi > Fix For: 3.11.x > > Attachments: 12151.txt > > > we would like a way to enable cassandra to log database activity being done > on our server. > It should show username, remote address, timestamp, action type, keyspace, > column family, and the query statement. > it should also be able to log connection attempts and changes to the > users/roles. > I was thinking of making a new keyspace and inserting an entry for every > activity that occurs. > Then it would be possible to query for specific activity or for queries targeting > a specific keyspace and column family. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
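A rough sketch of the kind of record the request describes, with one field per item listed in the ticket; every name here is hypothetical and nothing reflects an actual Cassandra API.

{code}
import java.net.InetAddress;
import java.time.Instant;

// Hypothetical shape of one audit entry as requested in the ticket.
public class AuditEntry
{
    final String username;
    final InetAddress remoteAddress;
    final Instant timestamp;
    final String actionType;     // e.g. LOGIN, SELECT, UPDATE, ALTER_ROLE
    final String keyspace;
    final String columnFamily;
    final String statement;

    AuditEntry(String username, InetAddress remoteAddress, Instant timestamp,
               String actionType, String keyspace, String columnFamily, String statement)
    {
        this.username = username;
        this.remoteAddress = remoteAddress;
        this.timestamp = timestamp;
        this.actionType = actionType;
        this.keyspace = keyspace;
        this.columnFamily = columnFamily;
        this.statement = statement;
    }
}
{code}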
[jira] [Commented] (CASSANDRA-13247) index on udt built failed and no data could be inserted
[ https://issues.apache.org/jira/browse/CASSANDRA-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938407#comment-15938407 ] Andrés de la Peña commented on CASSANDRA-13247: --- ||[trunk|https://github.com/apache/cassandra/compare/trunk...adelapena:13247-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-13247-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-13247-trunk-dtest/]| ||[3.11|https://github.com/apache/cassandra/compare/cassandra-3.11...adelapena:13247-3.11]|[utests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-13247-3.11-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/adelapena/job/adelapena-13247-3.11-dtest/]| > index on udt built failed and no data could be inserted > --- > > Key: CASSANDRA-13247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13247 > Project: Cassandra > Issue Type: Bug >Reporter: mashudong >Assignee: Andrés de la Peña >Priority: Critical > Attachments: udt_index.txt > > > index on udt built failed and no data could be inserted > steps to reproduce: > CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': '2'} AND durable_writes = true; > CREATE TYPE ks1.address ( > street text, > city text, > zip_code int, > phones set > ); > CREATE TYPE ks1.fullname ( > firstname text, > lastname text > ); > CREATE TABLE ks1.users ( > id uuid PRIMARY KEY, > addresses map>, > age int, > direct_reports set>, > name fullname > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > SELECT * FROM users where name = { firstname : 'first' , lastname : 'last'} > allow filtering; > ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] > message="Operation failed - received 0 responses and 1 failures" > info={'failures': 1, 'received_responses': 0, 'required_responses': 1, > 'consistency': 'ONE'} > WARN [ReadStage-2] 2017-02-22 16:59:33,392 > AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread > Thread[ReadStage-2,5,main]: {} > java.lang.AssertionError: Only CONTAINS and CONTAINS_KEY are supported for > 'complex' types > at > org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:683) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:303) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:120) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:110) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.add(Transformation.java:162) > ~[apache-cassandra-3.9.jar:3.9] > at > 
org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:128) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:292) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:281) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:289) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:138) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.(ReadResponse.java:
[jira] [Created] (CASSANDRA-13372) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test
Sean McCarthy created CASSANDRA-13372: - Summary: dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test Key: CASSANDRA-13372 URL: https://issues.apache.org/jira/browse/CASSANDRA-13372 Project: Cassandra Issue Type: Bug Components: Testing Reporter: Sean McCarthy Attachments: node1_debug.log, node1_gc.log, node1.log, node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, node3.log example failure: http://cassci.datastax.com/job/trunk_dtest/1525/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test {code} Error Message 'Repaired at: 0' unexpectedly found in 'SSTable: /tmp/dtest-qoNeEc/test/node1/data0/keyspace1/standard1-3674b7a00e7911e78a4625bec3430063/na-4-big\nPartitioner: org.apache.cassandra.dht.Murmur3Partitioner\nBloom Filter FP chance: 0.01\nMinimum timestamp: 1490129948985001\nMaximum timestamp: 1490129952789002\nSSTable min local deletion time: 2147483647\nSSTable max local deletion time: 2147483647\nCompressor: -\nTTL min: 0\nTTL max: 0\nFirst token: -9222701292667950301 (key=5032394c323239385030)\nLast token: -3062233317334255711 (key=3032503434364f4e4f30)\nEstimated droppable tombstones: 0.0\nSSTable Level: 0\nRepaired at: 0\nPending repair: 45a396b0-0e79-11e7-841e-2d88b3d470cf\nReplay positions covered: {CommitLogPosition(segmentId=1490129923946, position=42824)=CommitLogPosition(segmentId=1490129923946, position=2605214)}\ntotalColumnsSet: 16550\ntotalRows: 3310\nEstimated tombstone drop times:\nCount Row SizeCell Count\n1 0 0\n2 0 0\n3 0 0\n4 0 0\n5 0 3310\n6 0 0\n7 0 0\n8 0 0\n10 0 0\n12 0 0\n14 0 0\n17 0 0\n20 0 0\n24 0 0\n29 0 0\n35 0 0\n42 0 0\n50 0 0\n60 0 0\n72 0 0\n86 0 0\n1030 0\n1240 0\n149 0 0\n1790 0\n215 1 0\n258 3309 0\n3100 0\n372 0 0\n4460 0\n535 0 0\n6420 0\n7700 0\n924 0 0\n1109 0 0\n1331 0 0\n1597 0 0\n1916 0 0\n2299 0 0\n2759 0 0\n3311 0 0\n3973 0 0\n4768 0 0\n5722 0 0\n6866 0 0\n8239 0 0\n9887 0 0\n11864 0 0\n14237 0 0\n17084 0 0\n20501 0 0\n24601 0 0\n29521 0 0\n35425 0 0\n42510 0 0\n51012 0 0\n61214 0 0\n73457 0 0\n88148 0 0\n105778 0 0\n126934 0 0\n152321 0 0\n182785 0 0\n219342 0 0\n263210 0 0\n315852 0 0\n379022 0 0\n454826
[jira] [Comment Edited] (CASSANDRA-13247) index on udt built failed and no data could be inserted
[ https://issues.apache.org/jira/browse/CASSANDRA-13247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15936928#comment-15936928 ] Andrés de la Peña edited comment on CASSANDRA-13247 at 3/23/17 1:01 PM: I'm working on an initial version of the patch [here|https://github.com/apache/cassandra/compare/trunk...adelapena:13247-trunk]. The patch makes CQL validation layer to forbid {{SELECT}} restrictions and {{CREATE INDEX}} over non-frozen UDT columns, which are not supported operations. Both operations are still perfectly possible with frozen UDTs. was (Author: adelapena): I'm working on an initial version of the path [here|https://github.com/apache/cassandra/compare/trunk...adelapena:13247-trunk]. The patch makes CQL validation layer to forbid {{SELECT}} restrictions and {{CREATE INDEX}} over non-frozen UDT columns, which are not supported operations. Both operations are still perfectly possible with frozen UDTs. > index on udt built failed and no data could be inserted > --- > > Key: CASSANDRA-13247 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13247 > Project: Cassandra > Issue Type: Bug >Reporter: mashudong >Assignee: Andrés de la Peña >Priority: Critical > Attachments: udt_index.txt > > > index on udt built failed and no data could be inserted > steps to reproduce: > CREATE KEYSPACE ks1 WITH replication = {'class': 'SimpleStrategy', > 'replication_factor': '2'} AND durable_writes = true; > CREATE TYPE ks1.address ( > street text, > city text, > zip_code int, > phones set > ); > CREATE TYPE ks1.fullname ( > firstname text, > lastname text > ); > CREATE TABLE ks1.users ( > id uuid PRIMARY KEY, > addresses map>, > age int, > direct_reports set>, > name fullname > ) WITH bloom_filter_fp_chance = 0.01 > AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} > AND comment = '' > AND compaction = {'class': > 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', > 'max_threshold': '32', 'min_threshold': '4'} > AND compression = {'chunk_length_in_kb': '64', 'class': > 'org.apache.cassandra.io.compress.LZ4Compressor'} > AND crc_check_chance = 1.0 > AND dclocal_read_repair_chance = 0.1 > AND default_time_to_live = 0 > AND gc_grace_seconds = 864000 > AND max_index_interval = 2048 > AND memtable_flush_period_in_ms = 0 > AND min_index_interval = 128 > AND read_repair_chance = 0.0 > AND speculative_retry = '99PERCENTILE'; > SELECT * FROM users where name = { firstname : 'first' , lastname : 'last'} > allow filtering; > ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] > message="Operation failed - received 0 responses and 1 failures" > info={'failures': 1, 'received_responses': 0, 'required_responses': 1, > 'consistency': 'ONE'} > WARN [ReadStage-2] 2017-02-22 16:59:33,392 > AbstractLocalAwareExecutorService.java:169 - Uncaught exception on thread > Thread[ReadStage-2,5,main]: {} > java.lang.AssertionError: Only CONTAINS and CONTAINS_KEY are supported for > 'complex' types > at > org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:683) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:303) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BaseRows.applyOne(BaseRows.java:120) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.transform.BaseRows.add(BaseRows.java:110) > ~[apache-cassandra-3.9.jar:3.9] > at > 
org.apache.cassandra.db.transform.UnfilteredRows.add(UnfilteredRows.java:41) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.add(Transformation.java:162) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.Transformation.apply(Transformation.java:128) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:292) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:281) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:289) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:145) > ~[apache-cassandra
[jira] [Comment Edited] (CASSANDRA-13246) Querying by secondary index on collection column returns NullPointerException sometimes
[ https://issues.apache.org/jira/browse/CASSANDRA-13246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15932338#comment-15932338 ] Mikkel Andersen edited comment on CASSANDRA-13246 at 3/23/17 12:59 PM: --- Sorry Benjamin - did not mean to cause problems... could you send me the link to where the workflow is described? On Mon, Mar 20, 2017 at 9:40 AM, Benjamin Lerer (JIRA) was (Author: mikkel.t.ander...@gmail.com): Sorry Benjamin - did not mean to cause problems... could you send me the link to where the workflow is described? On Mon, Mar 20, 2017 at 9:40 AM, Benjamin Lerer (JIRA) -- Venlig Hilsen Mikkel T. Andersen Skjoldborgvej 8 7100 Vejle Mobil: +45 40 26 79 26 > Querying by secondary index on collection column returns NullPointerException > sometimes > --- > > Key: CASSANDRA-13246 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13246 > Project: Cassandra > Issue Type: Bug > Components: Local Write-Read Paths > Environment: [cqlsh 5.0.1 | Cassandra 3.7 | CQL spec 3.4.2 | Native > protocol v4] > One cassandra node up, with consistency ONE >Reporter: hochung > Labels: easyfix > Fix For: 3.0.13, 3.11.0, 4.0 > > Attachments: cassandra-13246.diff > > > Not sure if this is the absolute minimal case that produces the bug, but here > are the steps for reproducing. > 1. Create table > {code} > CREATE TABLE test ( > id text, > ck1 text, > ck2 text, > static_value text static, > set_value set, > primary key (id, ck1, ck2) > ); > {code} > 2. Create secondary indices on the clustering columns, static column, and > collection column > {code} > create index on test (set_value); > create index on test (static_value); > create index on test (ck1); > create index on test (ck2); > {code} > 3. Insert a null value into the `set_value` column > {code} > insert into test (id, ck1, ck2, static_value, set_value) values ('id', > 'key1', 'key2', 'static', {'one', 'two'} ); > {code} > Sanity check: > {code} > select * from test; > id | ck1 | ck2 | static_value | set_value > +--+--+--+ > id | key1 | key2 | static | {'one', 'two'} > {code} > 4. Set the set_value to be empty > {code} > update test set set_value = {} where id = 'id' and ck1 = 'key1' and ck2 = > 'key2'; > {code} > 5. 
Make a select query that uses `CONTAINS` in the `set_value` column > {code} > select * from test where ck2 = 'key2' and static_value = 'static' and > set_value contains 'one' allow filtering; > {code} > Here we get a ReadFailure: > {code} > ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] > message="Operation failed - received 0 responses and 1 failures" > info={'failures': 1, 'received_responses': 0, 'required_responses': 1, > 'consistency': 'ONE'} > {code} > Logs show a NullPointerException > {code} > java.lang.RuntimeException: java.lang.NullPointerException > at > org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2470) > ~[apache-cassandra-3.7.jar:3.7] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > ~[na:1.8.0_101] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) > ~[apache-cassandra-3.7.jar:3.7] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136) > [apache-cassandra-3.7.jar:3.7] > at > org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) > [apache-cassandra-3.7.jar:3.7] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101] > Caused by: java.lang.NullPointerException: null > at > org.apache.cassandra.db.filter.RowFilter$SimpleExpression.isSatisfiedBy(RowFilter.java:720) > ~[apache-cassandra-3.7.jar:3.7] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToRow(RowFilter.java:303) > ~[apache-cassandra-3.7.jar:3.7] > at > org.apache.cassandra.db.transform.BaseRows.hasNext(BaseRows.java:120) > ~[apache-cassandra-3.7.jar:3.7] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:293) > ~[apache-cassandra-3.7.jar:3.7] > at > org.apache.cassandra.db.filter.RowFilter$CQLFilter$1IsSatisfiedFilter.applyToPartition(RowFilter.java:281) > ~[apache-cassandra-3.7.jar:3.7] > at > org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76) > ~[apache-cassandra-3.7.jar:3.7] > at > org.apache.
[jira] [Created] (CASSANDRA-13371) Remove legacy authz tables support
Stefan Podkowinski created CASSANDRA-13371: -- Summary: Remove legacy authz tables support Key: CASSANDRA-13371 URL: https://issues.apache.org/jira/browse/CASSANDRA-13371 Project: Cassandra Issue Type: Improvement Reporter: Stefan Podkowinski Starting with Cassandra 3.0, we include support for converting pre CASSANDRA-7653 user permission tables, until they are dropped by the operator. Converting permissions happens by simply copying all of them from {{permissions}} -> {{role_permissions}}, until the {{permissions}} table has been dropped. Upgrading to 4.0 will only be possible from 3.0 upwards, so I think it's safe to assume that the new permissions table has already been populated, whether the old table was dropped or not. Therefore I'd suggest just getting rid of the legacy support. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13339) java.nio.BufferOverflowException: null
[ https://issues.apache.org/jira/browse/CASSANDRA-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938151#comment-15938151 ] Chris Richards commented on CASSANDRA-13339: I added some debug to see what was causing this : the size of the serialized mutation is changing between when it was originally calculated and the point that the write occurs. This seems to occur at any location within the CommitLog file (not just near the end). ERROR [MutationStage-1] 2017-03-23 08:42:15,184 CommitLog.java:301 - Caught buffer overflow exception ERROR [MutationStage-1] 2017-03-23 08:42:15,184 CommitLog.java:302 - totalSize 73, size 61 ERROR [MutationStage-1] 2017-03-23 08:42:15,184 CommitLog.java:303 - buffer: position 22737351, limit 22737351, capacity 33554432 ERROR [MutationStage-1] 2017-03-23 08:42:15,184 CommitLog.java:306 - recomputed size 106 where recomputed size is the value of Mutation.serializer.serializedSize(mutation, MessagingService.current_version); after the exception has been thrown and caught. I assume therefore that the mutation is changing between when the serialized size was calculated and when it serialized - I have a new build that will try and show more information about the mutation when this occurs to see if this sheds any light on what is happening. > java.nio.BufferOverflowException: null > -- > > Key: CASSANDRA-13339 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13339 > Project: Cassandra > Issue Type: Bug >Reporter: Chris Richards > > I'm seeing the following exception running Cassandra 3.9 (with Netty updated > to 4.1.8.Final) running on a 2 node cluster. It would have been processing > around 50 queries/second at the time (mixture of > inserts/updates/selects/deletes) : there's a collection of tables (some with > counters some without) and a single materialized view. 
> ERROR [MutationStage-4] 2017-03-15 22:50:33,052 StorageProxy.java:1353 - > Failed to apply mutation locally : {} > java.nio.BufferOverflowException: null > at > org.apache.cassandra.io.util.DataOutputBufferFixed.doFlush(DataOutputBufferFixed.java:52) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.writeUnsignedVInt(BufferedDataOutputStreamPlus.java:262) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.rows.EncodingStats$Serializer.serialize(EncodingStats.java:233) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.SerializationHeader$Serializer.serializeForMessaging(SerializationHeader.java:380) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:122) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:89) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.partitions.PartitionUpdate$PartitionUpdateSerializer.serialize(PartitionUpdate.java:790) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.db.Mutation$MutationSerializer.serialize(Mutation.java:393) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.commitlog.CommitLog.add(CommitLog.java:279) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:493) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.Keyspace.apply(Keyspace.java:396) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.Mutation.applyFuture(Mutation.java:215) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:227) > ~[apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.db.Mutation.apply(Mutation.java:241) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.service.StorageProxy$8.runMayThrow(StorageProxy.java:1347) > ~[apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.service.StorageProxy$LocalMutationRunnable.run(StorageProxy.java:2539) > [apache-cassandra-3.9.jar:3.9] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_121] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) > [apache-cassandra-3.9.jar:3.9] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136) > [apache-cassandra-3.9.jar:3.9] > at org.apache.cassandra.concurrent.SEPWorker.run(S
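A toy reproduction of the failure mode being debugged above, using plain {{java.nio}} rather than Cassandra's serializers: if the object changes between computing its serialized size and writing it into a buffer sized from that value, the write overflows.

{code}
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Toy sketch only, not Cassandra code: it shows the size-then-write pattern
// and how a mutation in between produces a BufferOverflowException.
public class SizeThenWriteExample
{
    public static void main(String[] args)
    {
        StringBuilder mutation = new StringBuilder("row-update");

        int totalSize = mutation.toString().getBytes(StandardCharsets.UTF_8).length;
        ByteBuffer buffer = ByteBuffer.allocate(totalSize);

        // Modification after the size was computed...
        mutation.append("-with-extra-columns");

        try
        {
            // ...means the write now needs more room than was reserved.
            buffer.put(mutation.toString().getBytes(StandardCharsets.UTF_8));
        }
        catch (BufferOverflowException e)
        {
            System.out.println("size " + totalSize + " no longer matches the serialized form");
        }
    }
}
{code}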
[jira] [Commented] (CASSANDRA-13340) Bugs handling range tombstones in the sstable iterators
[ https://issues.apache.org/jira/browse/CASSANDRA-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938125#comment-15938125 ] Branimir Lambov commented on CASSANDRA-13340: - I think this is correct but I find the terminology confusing. Previous block [sometimes|https://github.com/pcmanus/cassandra/commit/75a0c57b3c130343dd8068e32ef096e3057191e9#diff-68d265d33b5303cd50645cb4a7eba569R309] means the next we'll iterate to (previous on disk), [other times|https://github.com/pcmanus/cassandra/commit/75a0c57b3c130343dd8068e32ef096e3057191e9#diff-68d265d33b5303cd50645cb4a7eba569R327] the previous we iterated. I'd prefer a consistent meaning for these; it looks like the new code is at odds with the previous convention on these, so the meanings of {{hasPrevious/NextBlock}} need reversing. {{skipLast/First}} have the same problem, I'd at least add {{IteratedItem}} to their names to add a little clarity. [{{canIncludeSliceStart/End}}|https://github.com/pcmanus/cassandra/commit/75a0c57b3c130343dd8068e32ef096e3057191e9#diff-68d265d33b5303cd50645cb4a7eba569R334] appear to have exactly the opposite meaning of {{hasPrevious/NextBlock}}. I can see {{loadFromDisk}} needs separate booleans for the difference in the non-indexed case, but {{readCurrentBlock}} can do without the latter, can't it? > Bugs handling range tombstones in the sstable iterators > --- > > Key: CASSANDRA-13340 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13340 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Sylvain Lebresne >Priority: Critical > Fix For: 3.0.x, 3.11.x > > > There is 2 bugs in the way sstable iterators handle range tombstones: > # empty range tombstones can be returned due to a strict comparison that > shouldn't be. > # the sstable reversed iterator can actually return completely bogus results > when range tombstones are spanning multiple index blocks. > The 2 bugs are admittedly separate but as they both impact the same area of > code and are both range tombstones related, I suggest just fixing both here > (unless something really really mind). > Marking the ticket critical mostly for the 2nd bug: it can truly make use > return bad results on reverse queries. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13360) Upgrade junit from 4.6 to 4.12
[ https://issues.apache.org/jira/browse/CASSANDRA-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-13360: - Resolution: Fixed Fix Version/s: (was: 4.x) 4.0 Status: Resolved (was: Patch Available) Alright - CI looks good so far. No unit test failures. ||trunk|[branch|https://github.com/apache/cassandra/compare/trunk...snazy:13360-junit4.12-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-13360-junit4.12-trunk-testall/lastSuccessfulBuild/]|[dtest|http://cassci.datastax.com/view/Dev/view/snazy/job/snazy-13360-junit4.12-trunk-dtest/lastSuccessfulBuild/] Committed as [6a8f15031569bcf8adf5344db9c701b1a6d2a802|https://github.com/apache/cassandra/commit/6a8f15031569bcf8adf5344db9c701b1a6d2a802] to [trunk|https://github.com/apache/cassandra/tree/trunk] Thanks! > Upgrade junit from 4.6 to 4.12 > -- > > Key: CASSANDRA-13360 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13360 > Project: Cassandra > Issue Type: Improvement > Components: Libraries, Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Labels: test > Fix For: 4.0 > > Attachments: 13360-3.0.txt, 13360-trunk.txt > > > Current stable release is 4.12: [released in > 2014|https://github.com/junit-team/junit4/releases]. > We can leverage more test features like Rule, TemporaryFolder, Parameterized > Tests, Theories, etc. Here are the release notes: > https://github.com/junit-team/junit4/tree/master/doc > Junit-4.6 is a very old version [released in > 2009|https://github.com/junit-team/junit4/releases?after=r4.7]. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
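A small example of one of the features the upgrade unlocks (the {{@Rule}}/{{TemporaryFolder}} combination mentioned in the ticket); the test itself is illustrative only.

{code}
import static org.junit.Assert.assertTrue;

import java.io.File;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;

// Illustrative JUnit 4.12 test: TemporaryFolder creates and cleans up a
// scratch directory for each test method.
public class TemporaryFolderExampleTest
{
    @Rule
    public TemporaryFolder tmp = new TemporaryFolder();

    @Test
    public void writesIntoScratchDirectory() throws Exception
    {
        File data = tmp.newFile("data.db");
        assertTrue(data.exists()); // deleted automatically after the test
    }
}
{code}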
cassandra git commit: Upgrade junit from 4.6 to 4.12
Repository: cassandra Updated Branches: refs/heads/trunk a87b15d1d -> 6a8f15031 Upgrade junit from 4.6 to 4.12 patch by Jay Zhuang; reviewed by Robert Stupp for CASSANDRA-13360 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6a8f1503 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6a8f1503 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6a8f1503 Branch: refs/heads/trunk Commit: 6a8f15031569bcf8adf5344db9c701b1a6d2a802 Parents: a87b15d Author: Jay Zhuang Authored: Thu Mar 23 11:00:05 2017 +0100 Committer: Robert Stupp Committed: Thu Mar 23 11:00:05 2017 +0100 -- CHANGES.txt | 1 + build.xml | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6a8f1503/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c1d5e94..6897fb0 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 4.0 + * Upgrade junit from 4.6 to 4.12 (CASSANDRA-13360) * Cleanup ParentRepairSession after repairs (CASSANDRA-13359) * Incremental repair not streaming correct sstables (CASSANDRA-13328) * Upgrade the jna version to 4.3.0 (CASSANDRA-13300) http://git-wip-us.apache.org/repos/asf/cassandra/blob/6a8f1503/build.xml -- diff --git a/build.xml b/build.xml index 058b879..73d3051 100644 --- a/build.xml +++ b/build.xml @@ -383,7 +383,7 @@ - +
[jira] [Commented] (CASSANDRA-13370) unittest CipherFactoryTest failed on MacOS
[ https://issues.apache.org/jira/browse/CASSANDRA-13370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938026#comment-15938026 ] Stefan Podkowinski commented on CASSANDRA-13370: Jay, shouldn't simply removing the seed be enough? Do you still have to remove the egd path to get rid of the error? > unittest CipherFactoryTest failed on MacOS > -- > > Key: CASSANDRA-13370 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13370 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Jay Zhuang >Assignee: Jay Zhuang >Priority: Minor > Attachments: 13370-trunk.txt > > > Seems like macOS (El Capitan) doesn't allow writing to {{/dev/urandom}}: > {code} > $ echo 1 > /dev/urandom > echo: write error: operation not permitted > {code} > Which causes CipherFactoryTest to fail: > {code} > $ ant test -Dtest.name=CipherFactoryTest > ... > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest > [junit] Testsuite: org.apache.cassandra.security.CipherFactoryTest Tests > run: 7, Failures: 0, Errors: 7, Skipped: 0, Time elapsed: 2.184 sec > [junit] > [junit] Testcase: > buildCipher_SameParams(org.apache.cassandra.security.CipherFactoryTest): > Caused an ERROR > [junit] setSeed() failed > [junit] java.security.ProviderException: setSeed() failed > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:472) > [junit] at > sun.security.provider.NativePRNG$RandomIO.access$300(NativePRNG.java:331) > [junit] at > sun.security.provider.NativePRNG.engineSetSeed(NativePRNG.java:214) > [junit] at > java.security.SecureRandom.getDefaultPRNG(SecureRandom.java:209) > [junit] at java.security.SecureRandom.<init>(SecureRandom.java:190) > [junit] at > org.apache.cassandra.security.CipherFactoryTest.setup(CipherFactoryTest.java:50) > [junit] Caused by: java.io.IOException: Operation not permitted > [junit] at java.io.FileOutputStream.writeBytes(Native Method) > [junit] at java.io.FileOutputStream.write(FileOutputStream.java:313) > [junit] at > sun.security.provider.NativePRNG$RandomIO.implSetSeed(NativePRNG.java:470) > ... > {code} > I'm able to reproduce the issue on two Mac machines, but I'm not sure if it's > affecting all other developers. > {{-Djava.security.egd=file:/dev/urandom}} was introduced in: > CASSANDRA-9581 > I would suggest reverting the > [change|https://github.com/apache/cassandra/commit/ae179e45327a133248c06019f87615c9cf69f643] > as {{pig-test}} has been removed ([pig is no longer > supported|https://github.com/apache/cassandra/commit/56cfc6ea35d1410f2f5a8ae711ae33342f286d79]). > Or adding a condition for macOS in build.xml. > [~aweisberg] [~jasobrown] any thoughts? -- This message was sent by Atlassian JIRA (v6.3.15#6346)
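The failure described above can be reproduced outside the test suite with a few lines. This is a rough sketch, under the assumption that the JVM is started with the same {{-Djava.security.egd=file:/dev/urandom}} override the build passes to the test JVM; the class name is made up for illustration.

{code}
import java.security.SecureRandom;

// Run with: java -Djava.security.egd=file:/dev/urandom SeedWriteRepro
public class SeedWriteRepro
{
    public static void main(String[] args)
    {
        // Mirrors what CipherFactoryTest.setup() does: construct an explicitly seeded
        // SecureRandom. With the egd override in place, the NativePRNG provider mixes
        // the seed by writing it to /dev/urandom; on macOS El Capitan that write is
        // not permitted, so the constructor throws ProviderException("setSeed() failed").
        SecureRandom random = new SecureRandom(new byte[30]);
        System.out.println("setSeed() succeeded, first long: " + random.nextLong());
    }
}
{code}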
[jira] [Commented] (CASSANDRA-13366) Possible AssertionError in UnfilteredRowIteratorWithLowerBound
[ https://issues.apache.org/jira/browse/CASSANDRA-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937989#comment-15937989 ] Sylvain Lebresne commented on CASSANDRA-13366: -- Committed, thanks (I'm still keeping the "write a dtest" on my TODO list, but it may take me a few days to get to it and I don't see the point in delaying the commit given this is a pretty simple one). bq. because it is unreliable in the presence of range tombstones and compact tables Correct, it was unused and unsafe, so it felt safer to just get rid of it. > Possible AssertionError in UnfilteredRowIteratorWithLowerBound > -- > > Key: CASSANDRA-13366 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13366 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Sylvain Lebresne > Fix For: 3.11.0 > > > In the code introduced by CASSANDRA-8180, we build a lower bound for a > partition (sometimes) based on the min clustering values of the stats file. > We can't do that if the sstable has a range tombstone marker, and the code > does check that this is the case, but unfortunately the check is done using > the stats {{minLocalDeletionTime}}, and that value isn't populated properly in > pre-3.0. This means that if you upgrade from 2.1/2.2 to 3.4+, you may end up > getting an exception like > {noformat} > WARN [ReadStage-2] 2017-03-20 13:29:39,165 > AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread > Thread[ReadStage-2,5,main]: {} > java.lang.AssertionError: Lower bound [INCL_START_BOUND(Foo, > -9223372036854775808, -9223372036854775808) ]is bigger than first returned > value [Marker INCL_START_BOUND(Foo)@1490013810540999] for sstable > /var/lib/cassandra/data/system/size_estimates-618f817b005f3678b8a453f3930b8e86/system-size_estimates-ka-1-Data.db > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:122) > {noformat} > and this continues until the sstable is upgraded. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
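For context, the fix boils down to deciding whether the stats-based lower bound can be trusted for a given sstable. The sketch below assumes a Cassandra 3.x classpath and is only an assumption about the shape of such a guard, not the committed patch.

{code}
import org.apache.cassandra.io.sstable.format.SSTableReader;
import org.apache.cassandra.io.sstable.metadata.StatsMetadata;

// Sketch only: decide whether the stats-based lower bound may be used for an sstable.
// The composition is an assumption about the shape of the fix, not the committed patch.
public final class LowerBoundGuard
{
    public static boolean canUseMetadataLowerBound(SSTableReader sstable)
    {
        // Pre-3.0 ("ka"/"la") sstables never populate minLocalDeletionTime, so their
        // stats cannot prove the absence of range tombstone markers.
        if (!sstable.descriptor.version.storeRows())
            return false;

        StatsMetadata stats = sstable.getSSTableMetadata();
        // A "live" minLocalDeletionTime means the sstable holds no tombstones at all,
        // so nothing can sort before the min clustering prefix taken from the stats.
        return stats.minLocalDeletionTime == Integer.MAX_VALUE;
    }
}
{code}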
[jira] [Updated] (CASSANDRA-13366) Possible AssertionError in UnfilteredRowIteratorWithLowerBound
[ https://issues.apache.org/jira/browse/CASSANDRA-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-13366: - Fix Version/s: (was: 3.11.x) 3.11.0 > Possible AssertionError in UnfilteredRowIteratorWithLowerBound > -- > > Key: CASSANDRA-13366 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13366 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Sylvain Lebresne > Fix For: 3.11.0 > > > In the code introduced by CASSANDRA-8180, we build a lower bound for a > partition (sometimes) based on the min clustering values of the stats file. > We can't do that if the sstable has a range tombstone marker, and the code > does check that this is the case, but unfortunately the check is done using > the stats {{minLocalDeletionTime}}, and that value isn't populated properly in > pre-3.0. This means that if you upgrade from 2.1/2.2 to 3.4+, you may end up > getting an exception like > {noformat} > WARN [ReadStage-2] 2017-03-20 13:29:39,165 > AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread > Thread[ReadStage-2,5,main]: {} > java.lang.AssertionError: Lower bound [INCL_START_BOUND(Foo, > -9223372036854775808, -9223372036854775808) ]is bigger than first returned > value [Marker INCL_START_BOUND(Foo)@1490013810540999] for sstable > /var/lib/cassandra/data/system/size_estimates-618f817b005f3678b8a453f3930b8e86/system-size_estimates-ka-1-Data.db > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:122) > {noformat} > and this continues until the sstable is upgraded. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13366) Possible AssertionError in UnfilteredRowIteratorWithLowerBound
[ https://issues.apache.org/jira/browse/CASSANDRA-13366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anonymous updated CASSANDRA-13366: -- Resolution: Fixed Status: Resolved (was: Ready to Commit) > Possible AssertionError in UnfilteredRowIteratorWithLowerBound > -- > > Key: CASSANDRA-13366 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13366 > Project: Cassandra > Issue Type: Bug >Reporter: Sylvain Lebresne >Assignee: Sylvain Lebresne > Fix For: 3.11.x > > > In the code introduced by CASSANDRA-8180, we build a lower bound for a > partition (sometimes) based on the min clustering values of the stats file. > We can't do that if the sstable has a range tombstone marker, and the code > does check that this is the case, but unfortunately the check is done using > the stats {{minLocalDeletionTime}}, and that value isn't populated properly in > pre-3.0. This means that if you upgrade from 2.1/2.2 to 3.4+, you may end up > getting an exception like > {noformat} > WARN [ReadStage-2] 2017-03-20 13:29:39,165 > AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread > Thread[ReadStage-2,5,main]: {} > java.lang.AssertionError: Lower bound [INCL_START_BOUND(Foo, > -9223372036854775808, -9223372036854775808) ]is bigger than first returned > value [Marker INCL_START_BOUND(Foo)@1490013810540999] for sstable > /var/lib/cassandra/data/system/size_estimates-618f817b005f3678b8a453f3930b8e86/system-size_estimates-ka-1-Data.db > at > org.apache.cassandra.db.rows.UnfilteredRowIteratorWithLowerBound.computeNext(UnfilteredRowIteratorWithLowerBound.java:122) > {noformat} > and this continues until the sstable is upgraded. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[3/3] cassandra git commit: Merge branch 'cassandra-3.11' into trunk
Merge branch 'cassandra-3.11' into trunk * cassandra-3.11: Possible AssertionError in UnfilteredRowIteratorWithLowerBound Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a87b15d1 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a87b15d1 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a87b15d1 Branch: refs/heads/trunk Commit: a87b15d1d6c42f4247c84b460ed39899d8813a6f Parents: 8b74ae4 f55cb88 Author: Sylvain Lebresne Authored: Thu Mar 23 10:29:59 2017 +0100 Committer: Sylvain Lebresne Committed: Thu Mar 23 10:29:59 2017 +0100 -- CHANGES.txt | 1 + .../db/SinglePartitionReadCommand.java | 4 +-- .../db/compaction/CompactionController.java | 2 +- .../UnfilteredRowIteratorWithLowerBound.java| 30 +++--- .../io/sstable/format/SSTableReader.java| 33 +++- 5 files changed, 42 insertions(+), 28 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87b15d1/CHANGES.txt -- diff --cc CHANGES.txt index b68e51c,f4e48ff..c1d5e94 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,50 -1,5 +1,51 @@@ +4.0 + * Cleanup ParentRepairSession after repairs (CASSANDRA-13359) + * Incremental repair not streaming correct sstables (CASSANDRA-13328) + * Upgrade the jna version to 4.3.0 (CASSANDRA-13300) + * Add the currentTimestamp, currentDate, currentTime and currentTimeUUID functions (CASSANDRA-13132) + * Remove config option index_interval (CASSANDRA-10671) + * Reduce lock contention for collection types and serializers (CASSANDRA-13271) + * Make it possible to override MessagingService.Verb ids (CASSANDRA-13283) + * Avoid synchronized on prepareForRepair in ActiveRepairService (CASSANDRA-9292) + * Adds the ability to use uncompressed chunks in compressed files (CASSANDRA-10520) + * Don't flush sstables when streaming for incremental repair (CASSANDRA-13226) + * Remove unused method (CASSANDRA-13227) + * Fix minor bugs related to #9143 (CASSANDRA-13217) + * Output warning if user increases RF (CASSANDRA-13079) + * Remove pre-3.0 streaming compatibility code for 4.0 (CASSANDRA-13081) + * Add support for + and - operations on dates (CASSANDRA-11936) + * Fix consistency of incrementally repaired data (CASSANDRA-9143) + * Increase commitlog version (CASSANDRA-13161) + * Make TableMetadata immutable, optimize Schema (CASSANDRA-9425) + * Refactor ColumnCondition (CASSANDRA-12981) + * Parallelize streaming of different keyspaces (CASSANDRA-4663) + * Improved compactions metrics (CASSANDRA-13015) + * Speed-up start-up sequence by avoiding un-needed flushes (CASSANDRA-13031) + * Use Caffeine (W-TinyLFU) for on-heap caches (CASSANDRA-10855) + * Thrift removal (CASSANDRA-5) + * Remove pre-3.0 compatibility code for 4.0 (CASSANDRA-12716) + * Add column definition kind to dropped columns in schema (CASSANDRA-12705) + * Add (automate) Nodetool Documentation (CASSANDRA-12672) + * Update bundled cqlsh python driver to 3.7.0 (CASSANDRA-12736) + * Reject invalid replication settings when creating or altering a keyspace (CASSANDRA-12681) + * Clean up the SSTableReader#getScanner API wrt removal of RateLimiter (CASSANDRA-12422) + * Use new token allocation for non bootstrap case as well (CASSANDRA-13080) + * Avoid byte-array copy when key cache is disabled (CASSANDRA-13084) + * Require forceful decommission if number of nodes is less than replication factor (CASSANDRA-12510) + * Allow IN restrictions on column families with collections (CASSANDRA-12654) + * Log message size in trace message in OutboundTcpConnection 
(CASSANDRA-13028) + * Add timeUnit Days for cassandra-stress (CASSANDRA-13029) + * Add mutation size and batch metrics (CASSANDRA-12649) + * Add method to get size of endpoints to TokenMetadata (CASSANDRA-12999) + * Expose time spent waiting in thread pool queue (CASSANDRA-8398) + * Conditionally update index built status to avoid unnecessary flushes (CASSANDRA-12969) + * cqlsh auto completion: refactor definition of compaction strategy options (CASSANDRA-12946) + * Add support for arithmetic operators (CASSANDRA-11935) + * Add histogram for delay to deliver hints (CASSANDRA-13234) + + 3.11.0 + * Possible AssertionError in UnfilteredRowIteratorWithLowerBound (CASSANDRA-13366) * Support unaligned memory access for AArch64 (CASSANDRA-13326) * Improve SASI range iterator efficiency on intersection with an empty range (CASSANDRA-12915). * Fix equality comparisons of columns using the duration type (CASSANDRA-13174) http://git-wip-us.apache.org/repos/asf/cassandra/blob/a87b15d1/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java --
[1/3] cassandra git commit: Possible AssertionError in UnfilteredRowIteratorWithLowerBound
Repository: cassandra Updated Branches: refs/heads/cassandra-3.11 ec9ce3dfb -> f55cb88ab refs/heads/trunk 8b74ae4b6 -> a87b15d1d Possible AssertionError in UnfilteredRowIteratorWithLowerBound patch by Sylvain Lebresne; reviewed by Stefania for CASSANDRA-13366 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f55cb88a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f55cb88a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f55cb88a Branch: refs/heads/cassandra-3.11 Commit: f55cb88ab595ccb941ebb4a088ab90f860f463d5 Parents: ec9ce3d Author: Sylvain Lebresne Authored: Wed Mar 22 15:41:49 2017 +0100 Committer: Sylvain Lebresne Committed: Thu Mar 23 10:26:37 2017 +0100 -- CHANGES.txt | 1 + .../db/SinglePartitionReadCommand.java | 4 +-- .../db/compaction/CompactionController.java | 2 +- .../UnfilteredRowIteratorWithLowerBound.java| 31 ++--- .../io/sstable/format/SSTableReader.java| 35 5 files changed, 44 insertions(+), 29 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f55cb88a/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 8386c20..f4e48ff 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.11.0 + * Possible AssertionError in UnfilteredRowIteratorWithLowerBound (CASSANDRA-13366) * Support unaligned memory access for AArch64 (CASSANDRA-13326) * Improve SASI range iterator efficiency on intersection with an empty range (CASSANDRA-12915). * Fix equality comparisons of columns using the duration type (CASSANDRA-13174) http://git-wip-us.apache.org/repos/asf/cassandra/blob/f55cb88a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java -- diff --git a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java index f6d10f5..724f59e 100644 --- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java +++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java @@ -584,7 +584,7 @@ public class SinglePartitionReadCommand extends ReadCommand if (!shouldInclude(sstable)) { nonIntersectingSSTables++; -if (sstable.hasTombstones()) +if (sstable.mayHaveTombstones()) { // if sstable has tombstones we need to check after one pass if it can be safely skipped if (skippedSSTablesWithTombstones == null) skippedSSTablesWithTombstones = new ArrayList<>(); @@ -773,7 +773,7 @@ public class SinglePartitionReadCommand extends ReadCommand // however: if it is set, it impacts everything and must be included. Getting that top-level partition deletion costs us // some seek in general however (unless the partition is indexed and is in the key cache), so we first check if the sstable // has any tombstone at all as a shortcut. -if (!sstable.hasTombstones()) +if (!sstable.mayHaveTombstones()) continue; // no tombstone at all, we can skip that sstable // We need to get the partition deletion and include it if it's live. In any case though, we're done with that sstable. 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/f55cb88a/src/java/org/apache/cassandra/db/compaction/CompactionController.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionController.java b/src/java/org/apache/cassandra/db/compaction/CompactionController.java index 64c35d9..bf3647a 100644 --- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java +++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java @@ -297,7 +297,7 @@ public class CompactionController implements AutoCloseable { if (reader.isMarkedSuspect() || reader.getMaxTimestamp() <= minTimestamp || -tombstoneOnly && !reader.hasTombstones()) +tombstoneOnly && !reader.mayHaveTombstones()) return null; RowIndexEntry position = reader.getPosition(key, SSTableReader.Operator.EQ); if (position == null) http://git-wip-us.apache.org/repos/asf/cassandra/blob/f55cb88a/src/java/org/apache/cassandra/db/rows/UnfilteredRowIteratorWithLowerBound.java -- diff --git a/src/java/org/apache/cassandra/db/rows/UnfilteredRowIteratorWithL
[2/3] cassandra git commit: Possible AssertionError in UnfilteredRowIteratorWithLowerBound
Possible AssertionError in UnfilteredRowIteratorWithLowerBound patch by Sylvain Lebresne; reviewed by Stefania for CASSANDRA-13366 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f55cb88a Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f55cb88a Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f55cb88a Branch: refs/heads/trunk Commit: f55cb88ab595ccb941ebb4a088ab90f860f463d5 Parents: ec9ce3d Author: Sylvain Lebresne Authored: Wed Mar 22 15:41:49 2017 +0100 Committer: Sylvain Lebresne Committed: Thu Mar 23 10:26:37 2017 +0100 -- CHANGES.txt | 1 + .../db/SinglePartitionReadCommand.java | 4 +-- .../db/compaction/CompactionController.java | 2 +- .../UnfilteredRowIteratorWithLowerBound.java| 31 ++--- .../io/sstable/format/SSTableReader.java| 35 5 files changed, 44 insertions(+), 29 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/f55cb88a/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 8386c20..f4e48ff 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.11.0 + * Possible AssertionError in UnfilteredRowIteratorWithLowerBound (CASSANDRA-13366) * Support unaligned memory access for AArch64 (CASSANDRA-13326) * Improve SASI range iterator efficiency on intersection with an empty range (CASSANDRA-12915). * Fix equality comparisons of columns using the duration type (CASSANDRA-13174) http://git-wip-us.apache.org/repos/asf/cassandra/blob/f55cb88a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java -- diff --git a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java index f6d10f5..724f59e 100644 --- a/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java +++ b/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java @@ -584,7 +584,7 @@ public class SinglePartitionReadCommand extends ReadCommand if (!shouldInclude(sstable)) { nonIntersectingSSTables++; -if (sstable.hasTombstones()) +if (sstable.mayHaveTombstones()) { // if sstable has tombstones we need to check after one pass if it can be safely skipped if (skippedSSTablesWithTombstones == null) skippedSSTablesWithTombstones = new ArrayList<>(); @@ -773,7 +773,7 @@ public class SinglePartitionReadCommand extends ReadCommand // however: if it is set, it impacts everything and must be included. Getting that top-level partition deletion costs us // some seek in general however (unless the partition is indexed and is in the key cache), so we first check if the sstable // has any tombstone at all as a shortcut. -if (!sstable.hasTombstones()) +if (!sstable.mayHaveTombstones()) continue; // no tombstone at all, we can skip that sstable // We need to get the partition deletion and include it if it's live. In any case though, we're done with that sstable. 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/f55cb88a/src/java/org/apache/cassandra/db/compaction/CompactionController.java -- diff --git a/src/java/org/apache/cassandra/db/compaction/CompactionController.java b/src/java/org/apache/cassandra/db/compaction/CompactionController.java index 64c35d9..bf3647a 100644 --- a/src/java/org/apache/cassandra/db/compaction/CompactionController.java +++ b/src/java/org/apache/cassandra/db/compaction/CompactionController.java @@ -297,7 +297,7 @@ public class CompactionController implements AutoCloseable { if (reader.isMarkedSuspect() || reader.getMaxTimestamp() <= minTimestamp || -tombstoneOnly && !reader.hasTombstones()) +tombstoneOnly && !reader.mayHaveTombstones()) return null; RowIndexEntry position = reader.getPosition(key, SSTableReader.Operator.EQ); if (position == null) http://git-wip-us.apache.org/repos/asf/cassandra/blob/f55cb88a/src/java/org/apache/cassandra/db/rows/UnfilteredRowIteratorWithLowerBound.java -- diff --git a/src/java/org/apache/cassandra/db/rows/UnfilteredRowIteratorWithLowerBound.java b/src/java/org/apache/cassandra/db/rows/UnfilteredRowIteratorWithLowerBound.java index 14730ac..4536036 100644 --- a/src/java/
[jira] [Comment Edited] (CASSANDRA-13333) Cassandra does not start on Windows due to 'JNA link failure'
[ https://issues.apache.org/jira/browse/CASSANDRA-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15934880#comment-15934880 ] Benjamin Lerer edited comment on CASSANDRA-13333 at 3/23/17 9:25 AM: - ||[3.0|https://github.com/apache/cassandra/compare/trunk...blerer:13333-3.0]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.0-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.0-dtest/]| ||[3.11|https://github.com/apache/cassandra/compare/trunk...blerer:13333-3.11]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.11-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.11-dtest/]| ||[trunk|https://github.com/apache/cassandra/compare/trunk...blerer:13333-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-trunk-dtest/]| [~jasobrown], [~mkjellman] could one of you review the patches? Only 3.0 and 3.11 differ a bit. was (Author: blerer): ||[3.0|https://github.com/apache/cassandra/compare/trunk...blerer:13333-3.0]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.0-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.0-dtest/]| ||[3.11|https://github.com/apache/cassandra/compare/trunk...blerer:13333-3.11]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.11-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.11-dtest/]| ||[trunk|https://github.com/apache/cassandra/compare/trunk...blerer:trunk]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-trunk-dtest/]| [~jasobrown], [~mkjellman] could one of you review the patches? Only 3.0 and 3.11 differ a bit. > Cassandra does not start on Windows due to 'JNA link failure' > - > > Key: CASSANDRA-13333 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13333 > Project: Cassandra > Issue Type: Bug >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer >Priority: Blocker > > Cassandra 3.0 HEAD does not start on Windows. The only error in the logs is: > {{ERROR 16:30:10 JNA failing to initialize properly.}} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13333) Cassandra does not start on Windows due to 'JNA link failure'
[ https://issues.apache.org/jira/browse/CASSANDRA-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937981#comment-15937981 ] Benjamin Lerer commented on CASSANDRA-13333: [~mkjellman] Thanks for the reviews. bq. 1. Should the loading of {{Native.register("winmm")}} in {{WindowsTimer}} also be moved into NativeLibraryWindows? {{WindowsTimer}} is really specific to Windows and according to [~JoshuaMcKenzie]'s [comment|https://issues.apache.org/jira/browse/CASSANDRA-13333?focusedCommentId=15929978&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15929978] we should not prevent startup due to an inability to access the {{winmm.dll}} library. So, I would be in favor of keeping it separated for now. bq. 2. Looks like the trunk patch didn't get pushed up or potentially just a copy paste error? Currently it's just pointing at blerer/trunk. Sorry for that. It was a copy-paste mistake. I fixed it. bq. 3. Thanks for putting the MSDN API URL in the method javadoc. I am pretty sure that otherwise I would end up googling it in a month or two ;-) bq. 4. In NativeLibraryWindows I think the following logger statements could be simplified: I have pushed a new commit to fix it in all the branches. > Cassandra does not start on Windows due to 'JNA link failure' > - > > Key: CASSANDRA-13333 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13333 > Project: Cassandra > Issue Type: Bug >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer >Priority: Blocker > > Cassandra 3.0 HEAD does not start on Windows. The only error in the logs is: > {{ERROR 16:30:10 JNA failing to initialize properly.}} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Comment Edited] (CASSANDRA-13333) Cassandra does not start on Windows due to 'JNA link failure'
[ https://issues.apache.org/jira/browse/CASSANDRA-13333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937079#comment-15937079 ] Benjamin Lerer edited comment on CASSANDRA-13333 at 3/23/17 9:09 AM: - I force pushed a new patch. The new patch uses the {{Kernel32}} library to natively support the {{callGetPid}} method and keeps the startup check. As the Windows library is not the {{c}} one, the patch also renames {{CLibrary}} to {{NativeLibrary}}, as the name was misleading. ||[3.0|https://github.com/apache/cassandra/compare/trunk...blerer:13333-3.0]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.0-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.0-dtest/]| ||[3.11|https://github.com/apache/cassandra/compare/trunk...blerer:13333-3.11]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.11-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.11-dtest/]| ||[trunk|https://github.com/apache/cassandra/compare/trunk...blerer:13333-trunk]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-trunk-dtest/]| was (Author: blerer): I force pushed a new patch. The new patch uses the {{Kernel32}} library to natively support the {{callGetPid}} method and keeps the startup check. As the Windows library is not the {{c}} one, the patch also renames {{CLibrary}} to {{NativeLibrary}}, as the name was misleading. ||[3.0|https://github.com/apache/cassandra/compare/trunk...blerer:13333-3.0]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.0-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.0-dtest/]| ||[3.11|https://github.com/apache/cassandra/compare/trunk...blerer:13333-3.11]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.11-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-3.11-dtest/]| ||[trunk|https://github.com/apache/cassandra/compare/trunk...blerer:trunk]|[utests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-trunk-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/blerer/job/blerer-13333-trunk-dtest/]| > Cassandra does not start on Windows due to 'JNA link failure' > - > > Key: CASSANDRA-13333 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13333 > Project: Cassandra > Issue Type: Bug >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer >Priority: Blocker > > Cassandra 3.0 HEAD does not start on Windows. The only error in the logs is: > {{ERROR 16:30:10 JNA failing to initialize properly.}} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
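As a rough illustration of the approach described above (and not the committed {{NativeLibraryWindows}} class), a JNA direct mapping of {{kernel32}}'s {{GetCurrentProcessId}} could look like the sketch below; per the discussion above, the {{winmm}} registration would stay separate in {{WindowsTimer}}.

{code}
import com.sun.jna.Native;

// Illustrative sketch (hypothetical class name, not the committed patch): direct-map
// kernel32 and use GetCurrentProcessId instead of the POSIX getpid from the c library.
public final class Kernel32Pid
{
    static
    {
        Native.register("kernel32");
    }

    // Windows API: DWORD GetCurrentProcessId(void) - see the MSDN documentation
    private static native int GetCurrentProcessId();

    public static long callGetpid()
    {
        // DWORD is an unsigned 32-bit value; widen without sign extension
        return GetCurrentProcessId() & 0xffffffffL;
    }
}
{code}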
[jira] [Resolved] (CASSANDRA-13367) CASSANDRA-10855 breaks authentication: throws server error instead of bad credentials on cache load failure
[ https://issues.apache.org/jira/browse/CASSANDRA-13367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov resolved CASSANDRA-13367. - Resolution: Duplicate > CASSANDRA-10855 breaks authentication: throws server error instead of bad > credentials on cache load failure > --- > > Key: CASSANDRA-13367 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13367 > Project: Cassandra > Issue Type: Bug >Reporter: Alex Petrov >Assignee: Alex Petrov > -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Updated] (CASSANDRA-13113) test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Petrov updated CASSANDRA-13113: Status: Patch Available (was: Open) > test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test > --- > > Key: CASSANDRA-13113 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13113 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Sean McCarthy >Assignee: Alex Petrov > Labels: dtest, test-failure > Attachments: node1_debug.log, node1_gc.log, node1.log, > node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, > node3.log > > > example failure: > http://cassci.datastax.com/job/trunk_dtest/1466/testReport/auth_test/TestAuth/system_auth_ks_is_alterable_test > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 358, in run > self.tearDown() > File "/home/automaton/cassandra-dtest/dtest.py", line 582, in tearDown > raise AssertionError('Unexpected error in log, see stdout') > {code}{code} > Standard Output > Unexpected error in node2 log, error: > ERROR [Native-Transport-Requests-1] 2017-01-08 21:10:55,056 Message.java:623 > - Unexpected exception during request; channel = [id: 0xf39c6dae, > L:/127.0.0.2:9042 - R:/127.0.0.1:43640] > java.lang.RuntimeException: > org.apache.cassandra.exceptions.UnavailableException: Cannot achieve > consistency level QUORUM > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:503) > ~[main/:na] > at > org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:310) > ~[main/:na] > at org.apache.cassandra.service.ClientState.login(ClientState.java:271) > ~[main/:na] > at > org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80) > ~[main/:na] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517) > [main/:na] > at > org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410) > [main/:na] > at > io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357) > [netty-all-4.0.39.Final.jar:4.0.39.Final] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [na:1.8.0_45] > at > org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) > [main/:na] > at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) > [main/:na] > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45] > Caused by: org.apache.cassandra.exceptions.UnavailableException: Cannot > achieve consistency level QUORUM > at > org.apache.cassandra.db.ConsistencyLevel.assureSufficientLiveNodes(ConsistencyLevel.java:334) > ~[main/:na] > at > org.apache.cassandra.service.AbstractReadExecutor.getReadExecutor(AbstractReadExecutor.java:162) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy$SinglePartitionReadLifecycle.(StorageProxy.java:1734) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.fetchRows(StorageProxy.java:1696) > ~[main/:na] > at > 
org.apache.cassandra.service.StorageProxy.readRegular(StorageProxy.java:1642) > ~[main/:na] > at > org.apache.cassandra.service.StorageProxy.read(StorageProxy.java:1557) > ~[main/:na] > at > org.apache.cassandra.db.SinglePartitionReadCommand$Group.execute(SinglePartitionReadCommand.java:964) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:282) > ~[main/:na] > at > org.apache.cassandra.cql3.statements.SelectStatement.execute(SelectStatement.java:252) > ~[main/:na] > at > org.apache.cassandra.auth.CassandraRoleManager.getRoleFromTable(CassandraRoleManager.java:511) > ~[main/:na] > at > org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:493) > ~[main/:na] > ... 13 common frames omitted > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
[jira] [Commented] (CASSANDRA-13113) test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test
[ https://issues.apache.org/jira/browse/CASSANDRA-13113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937893#comment-15937893 ] Alex Petrov commented on CASSANDRA-13113: - I've investigated a bit deeper. In my opinion it's kind of a regression; even if it's not super-serious, it has some user-facing implications. I ran {{bisect}} and narrowed it down to [this commit|https://github.com/apache/cassandra/commit/c607d76413be81a0e125c5780e068d7ab7594612]. Checking the logs reveals that before this commit, we had error messages in the form of: {code} Error from server: code=0100 [Bad credentials] message="Error during authentication of user cassandra : org.apache.cassandra.exceptions.UnavailableException: Cannot achieve consistency level QUORUM" {code} After, it's changed to {code} Error from server: code=0000 [Server error] message="java.lang.RuntimeException: org.apache.cassandra.exceptions.UnavailableException: Cannot achieve consistency level QUORUM" {code} I've checked the underlying code and it looks like Guava was doing some unwrapping in case of runtime exceptions on [cache loading|http://grepcode.com/file/repo1.maven.org/maven2/com.google.guava/guava/11.0/com/google/common/cache/LocalCache.java#2234] (might be the wrong guava version but you get the idea). Previously, we had to unwrap the {{UncheckedExecutionException}} in order to extract the cause and [turn it into an authentication exception|https://github.com/ifesdjeen/cassandra/commit/c607d76413be81a0e125c5780e068d7ab7594612#diff-ef1e335e8d51911f09bcc735b0632c5cL97], so that the correct error code was triggered. Now, we don't have to, since the exception isn't un/rewrapped. The stack trace of the other exception that was happening and causing {{Server error}} instead of {{Bad Credentials}} was: {code} at org.apache.cassandra.auth.CassandraRoleManager.getRole(CassandraRoleManager.java:487) [main/:na] at org.apache.cassandra.auth.CassandraRoleManager.canLogin(CassandraRoleManager.java:310) [main/:na] at org.apache.cassandra.service.ClientState.login(ClientState.java:271) [main/:na] at org.apache.cassandra.transport.messages.AuthResponse.execute(AuthResponse.java:80) [main/:na] at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:517) [main/:na] at org.apache.cassandra.transport.Message$Dispatcher.channelRead0(Message.java:410) [main/:na] at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) [netty-all-4.0.39.Final.jar:4.0.39.Final] at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:366) [netty-all-4.0.39.Final.jar:4.0.39.Final] at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:35) [netty-all-4.0.39.Final.jar:4.0.39.Final] at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:357) [netty-all-4.0.39.Final.jar:4.0.39.Final] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_121] at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) [main/:na] at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [main/:na] at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121] {code} Consequently, I have removed the Guava-specific exception rewrapping. 
The other places (JMX permissions cache, Credentials cache, Passwords cache and Permissions cache) look fine, with the exception of the Permissions cache, where we do re-wrap an exception, but that doesn't change the bubbling/error code. |[trunk|https://github.com/apache/cassandra/compare/trunk...ifesdjeen:13367-trunk]|[dtest|https://cassci.datastax.com/job/ifesdjeen-13367-trunk-dtest/]|[testall|https://cassci.datastax.com/job/ifesdjeen-13367-trunk-testall/]| I wanted to add that it might not be very good style that we're using exceptions for control flow. We might want to think of another way to handle such things in the future, at least for the top-tier user-facing return codes. Or at least, as was mentioned in [the comment|https://issues.apache.org/jira/browse/CASSANDRA-10855?focusedCommentId=15789267&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15789267], we might want to cover such things with tests going forward (not necessarily unit tests even). > test failure in auth_test.TestAuth.system_auth_ks_is_alterable_test > --- > > Key: CASSANDRA-13113 > URL: https://issues.apache.org/jira/browse/CASSANDRA-13113 > Project: Cassandra > Issue Type: Bug > Components: Testing >Reporter: Sean McCa
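To illustrate the pre-CASSANDRA-10855 behaviour the comment describes, here is a self-contained sketch using only Guava; the exception class, cache and loader below are stand-ins, not Cassandra code.

{code}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.util.concurrent.UncheckedExecutionException;

// Sketch of the old unwrapping: Guava wraps RuntimeExceptions thrown by the loader in
// UncheckedExecutionException, and unwrapping it is what let the failure surface as
// "Bad credentials" instead of a generic server error.
public final class CacheUnwrapSketch
{
    // Stand-in for the "Bad credentials" error the transport layer used to produce.
    static final class BadCredentialsException extends RuntimeException
    {
        BadCredentialsException(String msg) { super(msg); }
    }

    // Loader that fails the way a quorum read of system_auth fails when nodes are down.
    private static final LoadingCache<String, String> ROLES = CacheBuilder.newBuilder()
        .build(new CacheLoader<String, String>()
        {
            public String load(String name)
            {
                throw new RuntimeException("Cannot achieve consistency level QUORUM");
            }
        });

    static String getRole(String name)
    {
        try
        {
            return ROLES.getUnchecked(name);
        }
        catch (UncheckedExecutionException e)
        {
            // unwrap the cause so it maps to an authentication error code
            throw new BadCredentialsException("Error during authentication of user " + name
                                              + " : " + e.getCause().getMessage());
        }
    }

    public static void main(String[] args)
    {
        try
        {
            getRole("cassandra");
        }
        catch (BadCredentialsException e)
        {
            System.out.println("mapped to bad-credentials: " + e.getMessage());
        }
    }
}
{code}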
[jira] [Commented] (CASSANDRA-12456) dtest failure in auth_test.TestAuth.system_auth_ks_is_alterable_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937870#comment-15937870 ] Alex Petrov commented on CASSANDRA-12456: - DTest PR is merged, closing. > dtest failure in auth_test.TestAuth.system_auth_ks_is_alterable_test > > > Key: CASSANDRA-12456 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12456 > Project: Cassandra > Issue Type: Test >Reporter: Craig Kodman >Assignee: Jim Witschey > Labels: dtest > Attachments: node1_debug.log, node1_gc.log, node1.log, > node2_debug.log, node2_gc.log, node2.log, node3_debug.log, node3_gc.log, > node3.log > > > example failure: > http://cassci.datastax.com/job/cassandra-2.2_dtest/675/testReport/auth_test/TestAuth/system_auth_ks_is_alterable_test > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/auth_test.py", line 49, in > system_auth_ks_is_alterable_test > session.cluster.refresh_schema_metadata() > File "cassandra/cluster.py", line 1606, in > cassandra.cluster.Cluster.refresh_schema_metadata (cassandra/cluster.c:29510) > raise DriverException("Schema metadata was not refreshed. See log for > details.") > "Schema metadata was not refreshed. See log for > details.\n >> begin captured logging << > \ndtest: DEBUG: cluster ccm directory: > /tmp/dtest-vDljld\ndtest: DEBUG: Done setting configuration options:\n{ > 'initial_token': None,\n'num_tokens': '32',\n'phi_convict_threshold': > 5,\n'range_request_timeout_in_ms': 1,\n > 'read_request_timeout_in_ms': 1,\n'request_timeout_in_ms': 1,\n > 'truncate_request_timeout_in_ms': 1,\n'write_request_timeout_in_ms': > 1}\ndtest: DEBUG: Default role created by node1\ndtest: DEBUG: nodes > started\n- >> end captured logging << > -" > {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)