[ 
https://issues.apache.org/jira/browse/HDFS-13540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478871#comment-16478871
 ] 

SammiChen commented on HDFS-13540:
----------------------------------

[~xiaochen], thanks for the explanation. It makes sense to change the Jira 
title as you proposed.  I double-checked the code; *curStripeBuf* is only used 
in two EC read functions.

For the new test case, I would suggest:
 # Change the name from testCloseDoesNotGetBuffer to 
testCloseDoesNotAllocateNewBuffer; it's clearer.
 # The test case always passes, even when I pass "true" in 
closeCurrentBlockReaders, because *curStripeBuf* is set to *null* 
after *stream.close* is called, so *assertNull(stream.getCurStripeBuf());* 
always holds.

An alternative way to check whether a buffer is allocated is to check the 
number of buffers held by *ElasticByteBufferPool*, as in the sketch below.
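
A minimal sketch of what that check could look like, assuming the test can 
reach the shared pool and count its buffers (the getBufferPool() and 
size(boolean) accessors below are assumptions for illustration, not confirmed 
APIs; fs and filePath come from the usual EC test setup):

{code:java}
// Sketch only: getBufferPool() and size(boolean) are assumed accessors.
// Assumes fs/filePath point to a striped file in an EC-enabled mini cluster,
// and that the shared pool holds no matching buffer at the start of the test.
@Test
public void testCloseDoesNotAllocateNewBuffer() throws Exception {
  ElasticByteBufferPool pool =
      (ElasticByteBufferPool) DFSStripedInputStream.getBufferPool();
  int pooledBefore = pool.size(true);   // direct buffers currently pooled

  try (FSDataInputStream in = fs.open(filePath)) {
    // No read is issued, so curStripeBuf should never be allocated here.
  }

  // If close() had fetched a fresh buffer from the pool just to release it
  // again, the pool would now hold more direct buffers than before.
  assertEquals(pooledBefore, pool.size(true));
}
{code}

Unlike the assertNull check, this kind of assertion would still fail on the 
current code, because the extra buffer allocated during close() ends up back 
in the pool.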

> DFSStripedInputStream should not allocate new buffers during close / unbuffer
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-13540
>                 URL: https://issues.apache.org/jira/browse/HDFS-13540
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.0.0
>            Reporter: Xiao Chen
>            Assignee: Xiao Chen
>            Priority: Major
>         Attachments: HDFS-13540.01.patch, HDFS-13540.02.patch, 
> HDFS-13540.03.patch
>
>
> This was found in the same scenario where HDFS-13539 was caught.
> There are 2 OOMs that look interesting:
> {noformat}
> FSDataInputStream#close error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
>         at java.nio.Bits.reserveMemory(Bits.java:694)
>         at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>         at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
>         at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
>         at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
>         at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
>         at 
> org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:672)
>         at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.close(DFSStripedInputStream.java:181)
>         at java.io.FilterInputStream.close(FilterInputStream.java:181)
> {noformat}
> and 
> {noformat}
> org/apache/hadoop/fs/FSDataInputStream#unbuffer failed: error:
> OutOfMemoryError: Direct buffer memoryjava.lang.OutOfMemoryError: Direct 
> buffer memory
>         at java.nio.Bits.reserveMemory(Bits.java:694)
>         at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
>         at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
>         at 
> org.apache.hadoop.io.ElasticByteBufferPool.getBuffer(ElasticByteBufferPool.java:95)
>         at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.resetCurStripeBuffer(DFSStripedInputStream.java:118)
>         at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.closeCurrentBlockReaders(DFSStripedInputStream.java:205)
>         at 
> org.apache.hadoop.hdfs.DFSInputStream.unbuffer(DFSInputStream.java:1782)
>         at 
> org.apache.hadoop.fs.StreamCapabilitiesPolicy.unbuffer(StreamCapabilitiesPolicy.java:48)
>         at 
> org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:230)
> {noformat}
> As the stack traces show, {{resetCurStripeBuffer}} will get a buffer from the 
> buffer pool. We could save the cost of doing so when the call is not for a 
> read (e.g. close, unbuffer, etc.).
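> One way the allocation could be skipped (an illustrative sketch of the idea 
> only, not the attached patches; names such as BUFFER_POOL, useDirectBuffer(), 
> cellSize and dataBlkNum are approximations of DFSStripedInputStream's 
> internals):
> {code:java}
> // Sketch: only a real read needs a stripe buffer; close()/unbuffer() do not.
> private void resetCurStripeBuffer(boolean shouldAllocateBuf) {
>   if (shouldAllocateBuf && curStripeBuf == null) {
>     curStripeBuf =
>         BUFFER_POOL.getBuffer(useDirectBuffer(), cellSize * dataBlkNum);
>   }
>   if (curStripeBuf != null) {
>     curStripeBuf.clear();
>   }
> }
> {code}
> The close/unbuffer path (closeCurrentBlockReaders) would then call 
> resetCurStripeBuffer(false), while the read path keeps passing true.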


