[jira] [Updated] (CASSANDRA-9613) Omit (de)serialization of state variable in UDAs
[ https://issues.apache.org/jira/browse/CASSANDRA-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-9613:
    Status: Patch Available (was: Awaiting Feedback)

(Set to Patch Available, but there's still Tyler's first comment that might need discussion.)

> Omit (de)serialization of state variable in UDAs
> ------------------------------------------------
>
>                 Key: CASSANDRA-9613
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9613
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Robert Stupp
>            Assignee: Robert Stupp
>            Priority: Minor
>             Fix For: 3.x
>
> Currently the result of each UDA's state-function call is serialized and then
> deserialized for the next state-function invocation and, optionally, the
> final-function invocation.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-8831) Create a system table to expose prepared statements
[ https://issues.apache.org/jira/browse/CASSANDRA-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-8831:
    Status: Patch Available (was: Open)

> Create a system table to expose prepared statements
> ---------------------------------------------------
>
>                 Key: CASSANDRA-8831
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8831
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Sylvain Lebresne
>            Assignee: Robert Stupp
>              Labels: client-impacting, docs-impacting
>             Fix For: 3.x
>
> Because drivers abstract from users the handling of up/down nodes, they have
> to deal with the fact that when a node is restarted (or joins), it won't know
> any prepared statements. Drivers could somewhat ignore that problem and wait
> for a query to return an error (that the statement is unknown by the node) to
> re-prepare the query on that node, but that's relatively inefficient because
> every time a node comes back up, you'll get bad latency spikes due to some
> queries first failing, then being re-prepared, and only then being executed.
> So instead, drivers (at least the Java driver, but I believe others do as
> well) pro-actively re-prepare statements when a node comes up. That solves the
> latency problem, but currently every driver instance blindly re-prepares all
> statements, meaning that in a large cluster with many clients there is a lot
> of duplicated work (it would be enough for a single client to prepare the
> statements) and a bigger-than-necessary load on the node that started.
> An idea to solve this is to have a (cheap) way for clients to check whether some
> statements are prepared on the node. There are different options to provide
> that, but what I'd suggest is to add a system table to expose the (cached)
> prepared statements because:
> # it's reasonably straightforward to implement: we just add a row to the
> table when a statement is prepared and remove it when it's evicted (we
> already have eviction listeners). We'd also truncate the table on startup, but
> that's easy enough. We can even switch it to a "virtual table" if/when
> CASSANDRA-7622 lands, but it's trivial to do with a normal table in the
> meantime.
> # it doesn't require a change to the protocol or anything like that. It
> could even be done in 2.1 if we wish.
> # exposing prepared statements feels like genuinely useful information to
> have (outside of the problem exposed here, that is), if only for
> debugging/educational purposes.
> The exposed table could look something like:
> {noformat}
> CREATE TABLE system.prepared_statements (
>     keyspace_name text,
>     table_name text,
>     prepared_id blob,
>     query_string text,
>     PRIMARY KEY (keyspace_name, table_name, prepared_id)
> )
> {noformat}
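As a hypothetical usage sketch (assuming the table lands with the schema proposed above; the keyspace and table names in the WHERE clause are made up for illustration), a driver reconnecting to a restarted node could check what is already cached before re-preparing:

```sql
-- Hypothetical driver-side check, assuming system.prepared_statements exists
-- as proposed: fetch the statements already cached for the tables we use,
-- and only re-prepare the ones that are missing.
SELECT prepared_id, query_string
FROM system.prepared_statements
WHERE keyspace_name = 'myks' AND table_name = 'mytable';
```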
[jira] [Commented] (CASSANDRA-9613) Omit (de)serialization of state variable in UDAs
[ https://issues.apache.org/jira/browse/CASSANDRA-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15369341#comment-15369341 ] Robert Stupp commented on CASSANDRA-9613:

Yes, there's no built-in function that is actually usable as a state or final function. The contract for all functions (built-ins and UDFs) is still to pass serialized arguments (deserialization is part of the code in {{JavaSourceUDF.txt}} and the respective scripted UDF implementation). This patch is only an optimization for the state variable, since that is probably of a type that has a higher serialization cost (e.g. map, tuple, UDT). But it would be a generally affordable optimization to let built-ins and especially UDFs take the non-serialized representation, thinking of "constant" arguments to UDFs. At the moment, though, we don't have a case where we pass "constant" arguments to UDFs (state functions especially). Added comments to {{generateArguments}} and the new unit test, and triggered CI.
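To illustrate the optimization this ticket describes, here is a minimal, self-contained Java sketch (hypothetical class and method names, not Cassandra's actual UDF plumbing): today each state-function call round-trips the accumulator through its serialized form, while the patch keeps the state in its object representation between calls.

```java
import java.nio.ByteBuffer;
import java.util.List;

// Illustrative sketch only: contrasts a per-call serialize/deserialize
// round-trip of the aggregation state with keeping the live object,
// which is what CASSANDRA-9613 proposes for UDA state variables.
public class UdaStateSketch
{
    // "Serialized" form of a running sum, standing in for the bytes
    // Cassandra passes between state-function invocations today.
    static ByteBuffer serialize(long state)
    {
        ByteBuffer bb = ByteBuffer.allocate(8);
        bb.putLong(0, state);
        return bb;
    }

    static long deserialize(ByteBuffer bb)
    {
        return bb.getLong(0);
    }

    // Current behaviour: every state-function call round-trips through bytes.
    static long sumWithRoundTrip(List<Long> rows)
    {
        ByteBuffer state = serialize(0L);
        for (long row : rows)
            state = serialize(deserialize(state) + row); // (de)serialize per row
        return deserialize(state);
    }

    // Proposed behaviour: the state stays in object form across calls;
    // only the inputs and the final result touch the serialized form.
    static long sumInPlace(List<Long> rows)
    {
        long state = 0L;
        for (long row : rows)
            state += row; // no per-row (de)serialization
        return state;
    }
}
```

Both paths compute the same aggregate; the difference is purely the per-row serialization cost, which grows with complex state types such as maps, tuples, or UDTs.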
[jira] [Commented] (CASSANDRA-8831) Create a system table to expose prepared statements
[ https://issues.apache.org/jira/browse/CASSANDRA-8831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15369330#comment-15369330 ] Robert Stupp commented on CASSANDRA-8831:

It was an oversight: it didn't invalidate on {{QueryProcessor.internalStatements}}. Fixed that and triggered a new CI run (the links above work).
[jira] [Comment Edited] (CASSANDRA-12149) NullPointerException on SELECT with SASI index
[ https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15369283#comment-15369283 ] Andrey Konstantinov edited comment on CASSANDRA-12149 at 7/9/16 8:45 PM:

Yes, there is no NPE without the token condition, but the NPE is still an issue: it aborts all connections with a client. You said that a token condition is useless when there is a partition key constraint. Thank you for that; could you please clarify a few things for me? I use cassandra-spark-connector to generate token ranges to partition my custom RDD in Spark (even if I know it hits a single partition in Cassandra, and CassandraPartitionGenerator from the connector knows this too). I could use a clustering column to partition SELECT results within a single Cassandra partition, but in that case I would need to know the values of the clustering column (and I do not know them at the time of the query). How could I partition SELECT results hitting a single large Cassandra partition when I do not know the values of the clustering columns? Thanks!
> NullPointerException on SELECT with SASI index
> ----------------------------------------------
>
>                 Key: CASSANDRA-12149
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12149
>             Project: Cassandra
>          Issue Type: Bug
>          Components: sasi
>            Reporter: Andrey Konstantinov
>         Attachments: CASSANDRA-12149.txt
>
> If I execute the sequence of queries (see the attached file), Cassandra
> aborts a connection, reporting an NPE on the server side. The SELECT query
> works without a token range filter, but fails when a token range filter is
> specified. My intent was to issue multiple SELECT queries targeting the same
> single partition, filtered by a column indexed by SASI, partitioning results
> by different token ranges.
> Output from cqlsh on SELECT is the following:
> cqlsh> SELECT namespace, entity, timestamp, feature1, feature2 FROM
> mykeyspace.myrecordtable WHERE namespace = 'ns2' AND entity = 'entity2' AND
> feature1 > 11 AND feature1 < 31 AND token(namespace, entity) <=
> 9223372036854775807;
> ServerError: message="java.lang.NullPointerException">
[jira] [Commented] (CASSANDRA-12149) NullPointerException on SELECT with SASI index
[ https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15369283#comment-15369283 ] Andrey Konstantinov commented on CASSANDRA-12149:

Yes, there is no NPE without the token condition, but the NPE is still an issue: it aborts all connections with a client. You said that a token condition is useless when there is a partition key constraint. Thank you for that; could you please clarify a few things for me? I use cassandra-spark-connector to generate token ranges to partition my custom RDD in Spark (even if I know it hits a single partition in Cassandra, and CassandraPartitionGenerator from the connector knows this too). I could use a clustering column to partition SELECT results within a single Cassandra partition, but in that case I would need to know the values of the clustering column (and I do not know them at the time of the query). How could I partition SELECT results hitting a single large Cassandra partition when I do not know the values of the clustering columns? Thanks!
[jira] [Commented] (CASSANDRA-12149) NullPointerException on SELECT with SASI index
[ https://issues.apache.org/jira/browse/CASSANDRA-12149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15369264#comment-15369264 ] DOAN DuyHai commented on CASSANDRA-12149:

The query used:
{noformat}
SELECT namespace, entity, timestamp, feature1, feature2 FROM mykeyspace.myrecordtable
WHERE namespace = 'ns2' AND entity = 'entity2' AND feature1 = 11
AND token(namespace, entity) <= 9223372036854775807;
{noformat}

OK, after successfully reproducing the NPE in debug mode, it is *not* a SASI bug; it is rather related to how Cassandra optimises the *WHERE* clause in general.

The NPE's root cause is {{slice.bound(b) == null}} in {{TokenRestriction.bounds(Bound b, QueryOptions options)}}:
{code:java}
public List bounds(Bound b, QueryOptions options) throws InvalidRequestException
{
    return Collections.singletonList(slice.bound(b).bindAndGet(options));
}
{code}

Going up the call stack, the culprit is at {{StatementRestrictions.getPartitionKeyBounds(IPartitioner p, QueryOptions options)}}:
{code:java}
private AbstractBounds getPartitionKeyBounds(IPartitioner p, QueryOptions options)
{
    ByteBuffer startKeyBytes = getPartitionKeyBound(Bound.START, options);
    ByteBuffer finishKeyBytes = getPartitionKeyBound(Bound.END, options);
{code}

Since we have no lower-bound restriction on {{token(namespace, entity)}}, the call to {{getPartitionKeyBound(Bound.START, options)}} generates the NPE.

I tried another query without using the secondary index:
{noformat}
SELECT namespace, entity, timestamp, feature1, feature2 FROM mykeyspace.myrecordtable
WHERE namespace = 'ns2' AND entity = 'entity2'
AND token(namespace, entity) <= 9223372036854775807;
{noformat}
and this time, no NPE.

The place in the call stack where the two queries diverge is at {{SelectStatement.getQuery(QueryOptions options, int nowInSec, int userLimit, int perPartitionLimit)}}:
{code:java}
public ReadQuery getQuery(QueryOptions options, int nowInSec, int userLimit, int perPartitionLimit) throws RequestValidationException
{
    DataLimits limit = getDataLimits(userLimit, perPartitionLimit);
    if (restrictions.isKeyRange() || restrictions.usesSecondaryIndexing())
        return getRangeCommand(options, limit, nowInSec);

    return getSliceCommands(options, limit, nowInSec);
}
{code}

When using the secondary index, we obviously fall into the *if* branch. When NOT using the secondary index, as in my 2nd SELECT statement, the *if* condition is not satisfied, so the code path is {{return getSliceCommands(options, limit, nowInSec);}}. Strangely enough, even with the 2nd SELECT, {{restrictions.isKeyRange()}} should be *true*; why is it *false*?

The culprit is at {{StatementRestrictions.processPartitionKeyRestrictions(boolean hasQueriableIndex)}}:
{code:java}
...
if (partitionKeyRestrictions.isOnToken())
    isKeyRange = true;

if (hasUnrestrictedPartitionKeyComponents())
{
    if (!partitionKeyRestrictions.isEmpty())
    {
        if (!hasQueriableIndex)
            throw invalidRequest("Partition key parts: %s must be restricted as other parts are",
                                 Joiner.on(", ").join(getPartitionKeyUnrestrictedComponents()));
    }
    isKeyRange = true;
    usesSecondaryIndexing = hasQueriableIndex;
}
{code}

The condition {{if (partitionKeyRestrictions.isOnToken())}} evaluates to *false*, so the variable _isKeyRange_ is never set to *true*. It evaluates to *false* because of {{TokenFilter.isOnToken()}}:
{code:java}
public boolean isOnToken()
{
    // if all partition key columns have non-token restrictions, we can simply use the token range to filter
    // those restrictions and then ignore the token range
    return restrictions.size() < tokenRestriction.size();
}
{code}

Here, since there are *more* conditions on the primary key than on the token restriction, the SELECT is not considered to be a token restriction, which is sensible.

By the way, looking at the original WHERE clause {{WHERE namespace = 'ns2' AND entity = 'entity2' AND feature1 = 11 AND token(namespace, entity) <= 9223372036854775807}}, the restriction using the *token* function is *useless* because we already provide the complete partition key restriction. I tried the original query after removing {{token(namespace, entity) <= 9223372036854775807}}, and it works like a charm.

I would consider this JIRA to be *not an issue*.

cc [~xedin] [~beobal] [~avkonst]
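The size comparison above can be checked with a tiny standalone sketch (a hypothetical class mirroring only the comparison, not the real {{TokenFilter}}): the failing query has three column restrictions (namespace, entity, feature1) against a token restriction over the two partition key columns, so the comparison yields false and the statement is not treated as a token-range query.

```java
// Illustrative sketch of the isOnToken() decision described above.
// Not the real Cassandra classes: only the size comparison is modelled.
public class IsOnTokenSketch
{
    // Mirrors: return restrictions.size() < tokenRestriction.size();
    static boolean isOnToken(int nonTokenRestrictions, int tokenRestrictionColumns)
    {
        return nonTokenRestrictions < tokenRestrictionColumns;
    }
}
```

With 3 non-token restrictions versus 2 token columns, 3 < 2 is false, matching the analysis; only when fewer columns are restricted directly than through the token function would the statement count as a token restriction.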
[jira] [Updated] (CASSANDRA-12156) java.lang.ClassCastException During Write Operations
[ https://issues.apache.org/jira/browse/CASSANDRA-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-12156:
    Description:

During a regular ETL process today, I suddenly got some errors from Cassandra that look like:

{code}
ERROR [SharedPool-Worker-28] 2016-07-09 00:07:04,062 Message.java:611 - Unexpected exception during request; channel = [id: 0x7e101236, /ip.add.re.ss:36421 => /ip.add.re.ss:9044]
io.netty.handler.codec.DecoderException: java.lang.ClassCastException: java.lang.String cannot be cast to [B
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:99) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:971) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:854) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:249) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:149) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:722) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to [B
        at io.netty.buffer.PooledHeapByteBuf.array(PooledHeapByteBuf.java:280) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at org.apache.cassandra.transport.FrameCompressor$LZ4Compressor.decompress(FrameCompressor.java:191) ~[apache-cassandra-3.0.7.jar:3.0.7]
        at org.apache.cassandra.transport.Frame$Decompressor.decode(Frame.java:310) ~[apache-cassandra-3.0.7.jar:3.0.7]
        at org.apache.cassandra.transport.Frame$Decompressor.decode(Frame.java:289) ~[apache-cassandra-3.0.7.jar:3.0.7]
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        ... 18 common frames omitted
{code}

It didn't affect the application, but I haven't seen it before and it seems like something is wrong.
[jira] [Updated] (CASSANDRA-12156) java.lang.ClassCastException During Write Operations
[ https://issues.apache.org/jira/browse/CASSANDRA-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-12156:
    Fix Version/s: 3.0.x

> java.lang.ClassCastException During Write Operations
> ----------------------------------------------------
>
>                 Key: CASSANDRA-12156
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12156
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Centos 6.7, JDK1.8.0_72, Cassandra 3.0.7
>            Reporter: vin01
>            Priority: Minor
>             Fix For: 3.0.x
[jira] [Commented] (CASSANDRA-10635) Add metrics for authentication failures
[ https://issues.apache.org/jira/browse/CASSANDRA-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15369087#comment-15369087 ] Chris Lohfink commented on CASSANDRA-10635:

Your branch looks good to me. I like the idea of it being a meter instead of a counter.

> Add metrics for authentication failures
> ---------------------------------------
>
>                 Key: CASSANDRA-10635
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10635
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Soumava Ghosh
>            Assignee: Soumava Ghosh
>            Priority: Minor
>             Fix For: 3.x
>         Attachments: 10635-2.1.txt, 10635-2.2.txt, 10635-3.0.txt, 10635-dtest.patch, 10635-trunk.patch
>
> There should be no auth failures on a cluster in general.
> Having metrics around the authentication code would help detect clients
> that are connecting to the wrong cluster or have auth incorrectly configured.
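For context on the meter-versus-counter remark: a counter only accumulates a total, while a meter also derives an event rate, which is what makes spikes of auth failures visible. A minimal illustrative sketch of that distinction (a hypothetical standalone class, not Cassandra's actual metrics code, which is built on the Dropwizard Metrics library):

```java
// Illustrative sketch only: the essential difference between a counter and a
// meter is that a meter also derives an event rate from mark timestamps.
public class AuthFailureMeterSketch
{
    private long count = 0;
    private long firstMarkNanos = -1;
    private long lastMarkNanos = -1;

    // Record one authentication failure at the given timestamp.
    public void mark(long nowNanos)
    {
        if (firstMarkNanos < 0)
            firstMarkNanos = nowNanos;
        lastMarkNanos = nowNanos;
        count++;
    }

    // What a plain counter would expose: just the total.
    public long getCount()
    {
        return count;
    }

    // What a meter adds: mean events/second over the observed window
    // (0 if fewer than two marks have been recorded).
    public double meanRate()
    {
        if (count < 2 || lastMarkNanos == firstMarkNanos)
            return 0.0;
        return (count - 1) / ((lastMarkNanos - firstMarkNanos) / 1e9);
    }
}
```

A real meter (such as Dropwizard's) additionally keeps exponentially weighted moving averages, but the core idea, rate on top of count, is the same.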
[jira] [Comment Edited] (CASSANDRA-12156) java.lang.ClassCastException During Write Operations
[ https://issues.apache.org/jira/browse/CASSANDRA-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15369023#comment-15369023 ] vin01 edited comment on CASSANDRA-12156 at 7/9/16 9:14 AM:

And following this, there were more errors like:

{code}
ERROR [SharedPool-Worker-14] 2016-07-09 00:09:23,547 Message.java:611 - Unexpected exception during request; channel = [id: 0x7e101236, /ip.add.re.ss:36421 => $
java.lang.ClassCastException: null
{code}

There was no stack trace for these.
[jira] [Updated] (CASSANDRA-12156) java.lang.ClassCastException During Write Operations
[ https://issues.apache.org/jira/browse/CASSANDRA-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] vin01 updated CASSANDRA-12156:
--
Priority: Minor (was: Major)

> java.lang.ClassCastException During Write Operations
>
> Key: CASSANDRA-12156
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12156

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12156) java.lang.ClassCastException During Write Operations
[ https://issues.apache.org/jira/browse/CASSANDRA-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15369023#comment-15369023 ] vin01 commented on CASSANDRA-12156:
---
And following this there were more errors like:
ERROR [SharedPool-Worker-14] 2016-07-09 00:09:23,547 Message.java:611 - Unexpected exception during request; channel = [id: 0x7e101236, /192.168.100.91:36421 => $
java.lang.ClassCastException: null
There was no stack trace for these.

> java.lang.ClassCastException During Write Operations
>
> Key: CASSANDRA-12156
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12156
[jira] [Created] (CASSANDRA-12156) java.lang.ClassCastException During Write Operations
vin01 created CASSANDRA-12156:
---
Summary: java.lang.ClassCastException During Write Operations
Key: CASSANDRA-12156
URL: https://issues.apache.org/jira/browse/CASSANDRA-12156
Project: Cassandra
Issue Type: Bug
Components: Core
Environment: CentOS 6.7, JDK 1.8.0_72, Cassandra 3.0.7
Reporter: vin01

During a regular ETL process today I suddenly started getting some errors from Cassandra which look like:

ERROR [SharedPool-Worker-28] 2016-07-09 00:07:04,062 Message.java:611 - Unexpected exception during request; channel = [id: 0x7e101236, /ip.add.re.ss:36421 => /ip.add.re.ss:9044]
io.netty.handler.codec.DecoderException: java.lang.ClassCastException: java.lang.String cannot be cast to [B
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:99) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:163) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:971) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:854) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:249) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:149) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:722) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to [B
    at io.netty.buffer.PooledHeapByteBuf.array(PooledHeapByteBuf.java:280) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at org.apache.cassandra.transport.FrameCompressor$LZ4Compressor.decompress(FrameCompressor.java:191) ~[apache-cassandra-3.0.7.jar:3.0.7]
    at org.apache.cassandra.transport.Frame$Decompressor.decode(Frame.java:310) ~[apache-cassandra-3.0.7.jar:3.0.7]
    at org.apache.cassandra.transport.Frame$Decompressor.decode(Frame.java:289) ~[apache-cassandra-3.0.7.jar:3.0.7]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    ... 18 common frames omitted

It didn't affect the application, but I haven't seen it before and it seems like something is wrong.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
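For background on the "Caused by" frames: the LZ4 frame decompressor calls array() on the incoming buffer, which is only valid when the buffer is heap-backed. The following is an illustrative sketch only, not Cassandra's or Netty's actual code; it uses java.nio.ByteBuffer, which shares the hasArray()/array() contract with Netty's ByteBuf, to show the defensive pattern that contract calls for. It does not diagnose the String-to-byte[] cast in this report (which points at something stranger, such as mixed jar versions on the classpath); it only illustrates the assumption the decompressor relies on.

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Hypothetical demo class, not part of Cassandra. Shows the hasArray()/array()
// contract: array() is only legal on heap-backed buffers, so callers that need
// a byte[] must check hasArray() and fall back to copying otherwise.
public class BackingArrayDemo {
    // Return the buffer's readable bytes as a byte[], using the backing array
    // directly when one exists and copying through the buffer API otherwise.
    static byte[] readableBytes(ByteBuffer buf) {
        if (buf.hasArray()) {
            // Heap buffer: the backing array is accessible; respect arrayOffset().
            byte[] out = new byte[buf.remaining()];
            System.arraycopy(buf.array(), buf.arrayOffset() + buf.position(), out, 0, out.length);
            return out;
        }
        // Direct buffer: array() would throw, so read via a duplicate
        // (duplicating leaves the caller's position untouched).
        byte[] out = new byte[buf.remaining()];
        buf.duplicate().get(out);
        return out;
    }

    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.wrap(new byte[] {1, 2, 3});
        ByteBuffer direct = ByteBuffer.allocateDirect(3);
        direct.put(new byte[] {1, 2, 3});
        direct.flip();
        // Both paths yield the same bytes regardless of buffer type.
        System.out.println(Arrays.equals(readableBytes(heap), readableBytes(direct))); // true
    }
}
```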
[jira] [Commented] (CASSANDRA-12031) "LEAK DETECTED" during incremental repairs
[ https://issues.apache.org/jira/browse/CASSANDRA-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15369016#comment-15369016 ] vin01 commented on CASSANDRA-12031:
---
I have moved to 3.0.7 now and haven't seen this yet :)

> "LEAK DETECTED" during incremental repairs
>
> Key: CASSANDRA-12031
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12031
> Project: Cassandra
> Issue Type: Bug
> Components: Streaming and Messaging
> Environment: CentOS 6.6, x86_64, Cassandra 2.2.4
> Reporter: vin01
> Priority: Minor
>
> I encountered some errors during an incremental repair session which look like:
>
> ERROR [Reference-Reaper:1] 2016-06-19 03:28:35,884 Ref.java:187 - LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@2ce0fab3) to class org.apache.cassandra.io.util.SafeMemory$MemoryTidy@1513857473:Memory@[7f2d462191f0..7f2d46219510) was not released before the reference was garbage collected
>
> Should I be worried about these?

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-12155) proposeCallback.java is too spammy for debug.log
[ https://issues.apache.org/jira/browse/CASSANDRA-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Deng reassigned CASSANDRA-12155:
Assignee: Wei Deng

> proposeCallback.java is too spammy for debug.log
>
> Key: CASSANDRA-12155
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12155
> Project: Cassandra
> Issue Type: Bug
> Components: Observability
> Reporter: Wei Deng
> Assignee: Wei Deng
> Priority: Minor
>
> As stated in [this wiki page|https://wiki.apache.org/cassandra/LoggingGuidelines] derived from the work on CASSANDRA-10241, the DEBUG level logging in debug.log is intended for "+low frequency state changes or message passing. Non-critical path logs on operation details, performance measurements or general troubleshooting information.+"
> However, it appears that in a production deployment of C* 3.x, the LWT message passing from ProposeCallback.java gets printed every 1-2 seconds, which overwhelms debug.log and drowns out the other important DEBUG-level messages, like the following:
> {noformat}
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:23:57,800 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:00,803 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:00,804 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:03,807 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:03,807 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:06,811 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:06,811 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:09,815 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:09,815 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:12,819 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:12,819 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:15,823 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:15,823 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:18,827 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:18,827 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:21,831 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:21,831 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:24,835 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:24,835 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:27,839 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:27,839 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:30,843 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:30,843 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:33,847 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:33,847 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:36,851 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:36,852 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:39,855 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-2] 2016-07-09 05:24:39,855 ProposeCallback.java:62 - Propose response true from /10.240.0.3
> DEBUG [SharedPool-Worker-1] 2016-07-09 05:24:42,859 ProposeCallback.java:62 - Propose response true from /10.240.0.2
> DEBUG [SharedPool-Worker-
[jira] [Updated] (CASSANDRA-12155) proposeCallback.java is too spammy for debug.log
[ https://issues.apache.org/jira/browse/CASSANDRA-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Deng updated CASSANDRA-12155:
-
Status: Patch Available (was: Open)

> proposeCallback.java is too spammy for debug.log
>
> Key: CASSANDRA-12155
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12155
[jira] [Commented] (CASSANDRA-12155) proposeCallback.java is too spammy for debug.log
[ https://issues.apache.org/jira/browse/CASSANDRA-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15368977#comment-15368977 ] Wei Deng commented on CASSANDRA-12155:
--
[trunk|https://github.com/weideng1/cassandra/commit/aeebd734c42643ec3554751ca4bb7278e5f77cef]
As it's just a log level change, not sure if we need CI.

> proposeCallback.java is too spammy for debug.log
>
> Key: CASSANDRA-12155
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12155
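The patch for CASSANDRA-12155 is described above only as "just a log level change". As a self-contained analogue (an assumption about the fix, not the actual patch: Cassandra logs through SLF4J/logback, where the change would be logger.debug(...) -> logger.trace(...)), the sketch below uses JDK java.util.logging, where FINE roughly corresponds to DEBUG and FINEST to TRACE, to show why demoting the per-response message silences it under a DEBUG-level configuration:

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Hypothetical demo, not Cassandra code: a logger configured at FINE (~DEBUG)
// emits FINE records but drops FINEST (~TRACE) records, so moving a chatty
// per-message call from DEBUG to TRACE keeps it out of debug.log.
public class LogLevelDemo {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("ProposeCallbackDemo");
        logger.setUseParentHandlers(false);

        // Capture emitted records to stand in for what debug.log would contain.
        StringBuilder emitted = new StringBuilder();
        Handler handler = new Handler() {
            @Override public void publish(LogRecord r) { emitted.append(r.getMessage()).append('\n'); }
            @Override public void flush() {}
            @Override public void close() {}
        };
        handler.setLevel(Level.ALL);
        logger.addHandler(handler);

        logger.setLevel(Level.FINE); // debug.log runs at DEBUG

        // Before the patch: per-response message at DEBUG -> lands in debug.log.
        logger.fine("Propose response true from /10.240.0.2");
        // After the patch: same message at TRACE -> suppressed at this level.
        logger.finest("Propose response true from /10.240.0.3");

        // Only the DEBUG-level line was emitted.
        System.out.print(emitted);
    }
}
```

The design point is that the message itself is unchanged; only its level moves below the threshold that debug.log is configured for, so operators can still opt in via TRACE when chasing an LWT problem.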