[jira] [Commented] (DRILL-1976) Possible Memory Leak in drill jdbc client when dealing with wide columns (5000 chars long)

2015-09-17 Thread Chris Westin (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-1976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14804632#comment-14804632 ]

Chris Westin commented on DRILL-1976:
-

[~rkins], how is the generation script supposed to be used? I tried

./wide-strings.sh > widestrings.json

But the shell prints a stream of
tr: Illegal byte sequence
tr: Illegal byte sequence
tr: Illegal byte sequence
tr: Illegal byte sequence
tr: Illegal byte sequence
tr: Illegal byte sequence
tr: Illegal byte sequence
tr: Illegal byte sequence
tr: Illegal byte sequence
tr: Illegal byte sequence
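(Editor's note: these errors are typical of BSD `tr` on OS X, which rejects bytes from /dev/urandom that are invalid in a UTF-8 locale; forcing the C locale is a common workaround. A minimal sketch of the usual fix, with a placeholder character set and length since the actual script's pipeline isn't shown here:)

```shell
# BSD tr rejects byte sequences that are invalid in the current (UTF-8)
# locale; LC_ALL=C makes it treat the input as raw bytes.
# The character class, length, and output file below are placeholders.
LC_ALL=C tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 5000 > wide_string.txt
```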

And the JSON records that do appear in the file do not have wide string 
columns:
{
"id":14,
"str_fixed":"wx",
"str_var":"",
"str_empty":"",
"str_null":"",
"str_empty_null":"",
"str_var_null_empty":"",
"str_fixed_null_empty":"U",
"tinyint_var":93,
"dec_var_prec5_sc2":.
}
{
"id":15,
"str_fixed":"",
"str_var":"",
"str_empty":"",
"str_null":null,
"str_empty_null":null,
"str_var_null_empty":"",
"str_fixed_null_empty":"U",
"tinyint_var":-105,
"dec_var_prec5_sc2":-.0
}



> Possible Memory Leak in drill jdbc client when dealing with wide columns 
> (5000 chars long)
> --
>
> Key: DRILL-1976
> URL: https://issues.apache.org/jira/browse/DRILL-1976
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Reporter: Rahul Challapalli
>Assignee: Chris Westin
> Fix For: 1.2.0
>
> Attachments: wide-strings.sh
>
>
> git.commit.id.abbrev=b491cdb
> I am seeing an execution failure when I execute the same query multiple times 
> (<10). The data file contains 9 columns out of which 7 are wide strings 
> (4000-5000 chars long)
> {code}
> select ws.*, sub.str_var str_var1 from widestrings ws INNER JOIN (select 
> str_var, max(tinyint_var) max_ti from widestrings group by str_var) sub on 
> ws.tinyint_var = sub.max_ti
> {code}
> Below are my memory settings :
> {code}
> DRILL_MAX_DIRECT_MEMORY="32G"
> DRILL_MAX_HEAP="4G"
> {code}
> Error From the JDBC client
> {code}
> select ws.*, sub.str_var str_var1 from widestrings ws INNER JOIN (select 
> str_var, max(tinyint_var) max_ti from widestrings group by str_var) sub on 
> ws.tinyint_var = sub.max_ti
> Exception in pipeline.  Closing channel between local /10.10.100.190:38179 
> and remote qa-node191.qa.lab/10.10.100.191:31010
> io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError: Direct 
> buffer memory
>   at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:151)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>   at 
> io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:333)
>   at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:319)
>   at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787)
>   at 
> io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
>   at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
>   at 
> io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.OutOfMemoryError: Direct buffer memory
>   at java.nio.Bits.reserveMemory(Bits.java:658)
>   at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>   at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306)
>   at 
> io.netty.buffer.PoolArena$DirectArena.newUnpooledChunk(PoolArena.java:443)
>   at io.netty.buffer.PoolArena.allocateHuge(PoolArena.java:187)
>   at io.netty.buffer.PoolArena.allocate(PoolArena.java:165)
>   at io.netty.buffer.PoolArena.reallocate(PoolArena.java:280)
>   at io.netty.buffer.PooledByteBuf.capacity(PooledByteBuf.java:110)
>   at 
> io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:251)
>   at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:849)
>   at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:841)
>   at 

[jira] [Commented] (DRILL-1976) Possible Memory Leak in drill jdbc client when dealing with wide columns (5000 chars long)

2015-09-17 Thread Chris Westin (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-1976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14804845#comment-14804845 ]

Chris Westin commented on DRILL-1976:
-

The new script didn't work on OS X either. I ftp'ed the original to a Linux VM, 
generated the data file there, and then ftp'ed the data file back.

Back on OS X, I set up the memory parameters as you specified above. I was able 
to run the query 11 times in a row without errors, and shut down cleanly 
without any memory leaks reported. (This is on current master, which now has 
the majority of the fixes spawned by the new allocator, but not the new 
allocator itself yet.) So it looks like whatever it was has been fixed.

> Possible Memory Leak in drill jdbc client when dealing with wide columns 
> (5000 chars long)
> --
>
> Key: DRILL-1976
> URL: https://issues.apache.org/jira/browse/DRILL-1976
> Project: Apache Drill
>  Issue Type: Bug
>  Components: Execution - Flow
>Reporter: Rahul Challapalli
>Assignee: Chris Westin
> Fix For: 1.2.0
>
> Attachments: wide-strings-mod.sh, wide-strings.sh

[jira] [Commented] (DRILL-1976) Possible Memory Leak in drill jdbc client when dealing with wide columns (5000 chars long)

2015-04-01 Thread Chris Westin (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-1976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391568#comment-14391568 ]

Chris Westin commented on DRILL-1976:
-

From the stack, it doesn't necessarily look like a leak, but we did run out of 
memory -- it could be a legitimate OOM, but that will require deeper investigation.
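(Editor's note: the cap this OOM hit is the client JVM's direct-memory limit, which is settable per process with the standard HotSpot option `-XX:MaxDirectMemorySize`; if unset, it defaults to roughly the max heap size. A hypothetical client invocation, where the classpath and main class are placeholders:)

```shell
# -XX:MaxDirectMemorySize is a standard HotSpot option governing
# ByteBuffer.allocateDirect / Netty direct-buffer allocations.
# The jar and main class below are placeholders.
java -XX:MaxDirectMemorySize=8G -cp drill-jdbc-all.jar MyJdbcClient
```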

 Possible Memory Leak in drill jdbc client when dealing with wide columns 
 (5000 chars long)
 --

 Key: DRILL-1976
 URL: https://issues.apache.org/jira/browse/DRILL-1976
 Project: Apache Drill
  Issue Type: Bug
  Components: Execution - Flow
Reporter: Rahul Challapalli
Assignee: Chris Westin
 Fix For: 0.9.0

 Attachments: wide-strings.sh





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (DRILL-1976) Possible Memory Leak in drill jdbc client when dealing with wide columns (5000 chars long)

2015-03-09 Thread Parth Chandra (JIRA)

[ https://issues.apache.org/jira/browse/DRILL-1976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14353979#comment-14353979 ]

Parth Chandra commented on DRILL-1976:
--

Looks like an Out of Memory error occurred. Reassigning for further 
investigation.

 Possible Memory Leak in drill jdbc client when dealing with wide columns 
 (5000 chars long)
 --

 Key: DRILL-1976
 URL: https://issues.apache.org/jira/browse/DRILL-1976
 Project: Apache Drill
  Issue Type: Bug
  Components: Client - JDBC
Reporter: Rahul Challapalli
Assignee: Daniel Barclay (Drill)
 Fix For: 0.9.0

 Attachments: wide-strings.sh




