[ https://issues.apache.org/jira/browse/CASSANDRA-5981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13806141#comment-13806141 ]
Daniel Norberg commented on CASSANDRA-5981:
-------------------------------------------

Looks to me like it might discard too much data if buffer.readableBytes() > MAX_FRAME_LENGTH. Unless I'm mistaken, this problem is also present in the original LengthFieldBasedFrameDecoder, though. [~norman], what do you say? Admittedly it's a corner case that's unlikely to be encountered in production. Are there any tests for the dropping of too-large requests?

> Netty frame length exception when storing data to Cassandra using binary
> protocol
> ---------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-5981
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-5981
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>         Environment: Linux, Java 7
>            Reporter: Justin Sweeney
>            Assignee: Sylvain Lebresne
>            Priority: Minor
>             Fix For: 2.0.2
>
>         Attachments: 0001-Correctly-catch-frame-too-long-exceptions.txt,
> 0002-Allow-to-configure-the-max-frame-length.txt, 5981-v2.txt
>
>
> Using Cassandra 1.2.8, I am running into an issue where, when I send a large
> amount of data using the binary protocol, I get the following Netty exception
> in the Cassandra log file:
> {quote}
> ERROR 09:08:35,845 Unexpected exception during request
> org.jboss.netty.handler.codec.frame.TooLongFrameException: Adjusted frame length exceeds 268435456: 292413714 - discarded
>     at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:441)
>     at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:412)
>     at org.jboss.netty.handler.codec.frame.LengthFieldBasedFrameDecoder.decode(LengthFieldBasedFrameDecoder.java:372)
>     at org.apache.cassandra.transport.Frame$Decoder.decode(Frame.java:181)
>     at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:422)
>     at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
>     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
>     at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
>     at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:84)
>     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.processSelectedKeys(AbstractNioWorker.java:472)
>     at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:333)
>     at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:35)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>     at java.lang.Thread.run(Thread.java:722)
> {quote}
> I am using the DataStax driver and CQL to execute insert queries. The failing
> query uses atomic batching, executing a large number of statements (~55).
> Looking into the code a bit, I saw that in the
> org.apache.cassandra.transport.Frame$Decoder class, MAX_FRAME_LENGTH is
> hard-coded to 256 MB.
> Is this something that should be configurable, or is this a hard limit that
> will prevent batch statements of this size from executing for some reason?

--
This message was sent by Atlassian JIRA
(v6.1#6144)
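To illustrate the corner case raised in the comment: when a frame's declared length exceeds the limit, a decoder should skip only that frame's bytes; discarding everything readable (buffer.readableBytes()) would also swallow the start of the next, valid frame. The sketch below is a hypothetical standalone decoder over java.nio.ByteBuffer, not Cassandra's actual Frame.Decoder or Netty's LengthFieldBasedFrameDecoder; the class name, tiny limit, and 4-byte length field are all assumptions for illustration.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of length-field frame decoding with oversized-frame
// discarding. Real decoders (e.g. Netty's) also track discard state across
// reads when the oversized frame hasn't fully arrived yet; that is omitted here.
public class FrameDiscardSketch {
    static final int MAX_FRAME_LENGTH = 16; // tiny limit, for illustration only

    // Returns the next frame's payload, or null if there is not enough data
    // yet or the frame was too long and its bytes were discarded.
    static byte[] decode(ByteBuffer buf) {
        if (buf.remaining() < 4) return null;  // need the 4-byte length field
        buf.mark();
        int frameLength = buf.getInt();
        if (frameLength > MAX_FRAME_LENGTH) {
            // Discard exactly this frame's bytes, not all readable bytes --
            // skipping buf.remaining() would eat the following frame too.
            int toSkip = Math.min(frameLength, buf.remaining());
            buf.position(buf.position() + toSkip);
            return null;  // a real decoder would raise TooLongFrameException here
        }
        if (buf.remaining() < frameLength) {
            buf.reset();  // whole frame not here yet; rewind to the length field
            return null;
        }
        byte[] payload = new byte[frameLength];
        buf.get(payload);
        return payload;
    }
}
```

With this discard rule, an oversized frame followed by a small valid frame leaves the small frame decodable on the next call, which is the behavior the comment is asking to see tested.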