[ https://issues.apache.org/jira/browse/DRILL-1976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14804845#comment-14804845 ]
Chris Westin commented on DRILL-1976:
-------------------------------------

The new script didn't work on OSX either. I ftp'ed the original to a linux VM, generated the data file, and then ftp'ed the data file back. Once on OSX, I set up the memory parameters as you specified above. I was able to run the query 11 times in a row without errors, and shut down cleanly without any memory leaks reported. (This is on current master, which now has the majority of the fixes spawned by the new allocator, but not the new allocator itself yet.) So it looks like whatever it was is fixed.

> Possible Memory Leak in drill jdbc client when dealing with wide columns (5000 chars long)
> ------------------------------------------------------------------------------------------
>
>                 Key: DRILL-1976
>                 URL: https://issues.apache.org/jira/browse/DRILL-1976
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Execution - Flow
>            Reporter: Rahul Challapalli
>            Assignee: Chris Westin
>             Fix For: 1.2.0
>
>         Attachments: wide-strings-mod.sh, wide-strings.sh
>
>
> git.commit.id.abbrev=b491cdb
> I am seeing an execution failure when I execute the same query multiple times (<10). The data file contains 9 columns, out of which 7 are wide strings (4000-5000 chars long).
> {code}
> select ws.*, sub.str_var str_var1 from widestrings ws INNER JOIN (select str_var, max(tinyint_var) max_ti from widestrings group by str_var) sub on ws.tinyint_var = sub.max_ti
> {code}
> Below are my memory settings:
> {code}
> DRILL_MAX_DIRECT_MEMORY="32G"
> DRILL_MAX_HEAP="4G"
> {code}
> Error from the JDBC client:
> {code}
> select ws.*, sub.str_var str_var1 from widestrings ws INNER JOIN (select str_var, max(tinyint_var) max_ti from widestrings group by str_var) sub on ws.tinyint_var = sub.max_ti
> Exception in pipeline.
> Closing channel between local /10.10.100.190:38179 and remote qa-node191.qa.lab/10.10.100.191:31010
> io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError: Direct buffer memory
>         at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:151)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>         at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>         at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>         at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>         at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
>         at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
>         at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
>         at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>         at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>         at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>         at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Direct buffer memory
>         at java.nio.Bits.reserveMemory(Bits.java:658)
>         at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>         at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
>         at io.netty.buffer.PoolArena$DirectArena.newUnpooledChunk(PoolArena.java:443)
>         at io.netty.buffer.PoolArena.allocateHuge(PoolArena.java:187)
>         at io.netty.buffer.PoolArena.allocate(PoolArena.java:165)
>         at io.netty.buffer.PoolArena.reallocate(PoolArena.java:280)
>         at io.netty.buffer.PooledByteBuf.capacity(PooledByteBuf.java:110)
>         at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:251)
>         at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:849)
>         at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:841)
>         at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:831)
>         at io.netty.buffer.WrappedByteBuf.writeBytes(WrappedByteBuf.java:600)
>         at io.netty.buffer.UnsafeDirectLittleEndian.writeBytes(UnsafeDirectLittleEndian.java:25)
>         at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:144)
>         ... 13 more
> Channel closed between local /10.10.100.190:38179 and remote qa-node191.qa.lab/10.10.100.191:31010
> Channel is closed, discarding remaining 255231 byte(s) in buffer.
> {code}
> The logs

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
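For reference, below is a minimal sketch of a client-side repro driver that runs the reported query in a loop over the Drill JDBC driver, the same way the failing JDBC client did. It assumes the stock driver class org.apache.drill.jdbc.Driver and the standard jdbc:drill:zk= connection URL; the ZooKeeper host and the run count are illustrative placeholders, not details taken from this report.

{code}
// Illustrative repro driver (not from the report): runs the reported query
// repeatedly over the Drill JDBC driver so client-side direct-memory growth
// can be observed. ZooKeeper address and run count are assumptions.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class WideStringsRepro {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.drill.jdbc.Driver");
    // jdbc:drill:zk=<host>:<port> is the standard Drill JDBC URL; this host is hypothetical
    String url = "jdbc:drill:zk=qa-node190.qa.lab:2181";
    String query = "select ws.*, sub.str_var str_var1 from widestrings ws "
        + "INNER JOIN (select str_var, max(tinyint_var) max_ti from widestrings "
        + "group by str_var) sub on ws.tinyint_var = sub.max_ti";
    try (Connection conn = DriverManager.getConnection(url)) {
      for (int run = 1; run <= 11; run++) {   // the failure was reported within fewer than 10 runs
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(query)) {
          long rows = 0;
          while (rs.next()) {
            rows++;                           // drain every row so all record batches are received
          }
          System.out.println("run " + run + ": " + rows + " rows");
        }
      }
    }
  }
}
{code}

If the client JVM's direct memory is capped (for example with -XX:MaxDirectMemorySize), a leak on the client side should surface as the "Direct buffer memory" OutOfMemoryError shown above rather than as gradual, unbounded process memory growth.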