[ https://issues.apache.org/jira/browse/HDFS-16906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDFS-16906:
----------------------------------
    Labels: pull-request-available  (was: )

> CryptoOutputStream::close leak when encrypted zones + quota exceptions
> ----------------------------------------------------------------------
>
>                 Key: HDFS-16906
>                 URL: https://issues.apache.org/jira/browse/HDFS-16906
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: dfsclient
>    Affects Versions: 3.3.1, 3.3.2, 3.3.3, 3.3.4
>            Reporter: Colm Dougan
>            Assignee: Colm Dougan
>            Priority: Critical
>              Labels: pull-request-available
>         Attachments: hadoop_cryto_stream_close_try_finally.diff
>
>
> I would like to report a resource leak (of DFSOutputStream objects) when 
> using the (Java) hadoop-hdfs-client, specifically (at least in my case) 
> when there is a combination of:
>  * encrypted zones
>  * space quota exceptions (DSQuotaExceededException)
> As you know, when encrypted zones are in play, calling fs.create(path) in 
> the hadoop-hdfs-client returns an HdfsDataOutputStream object, which wraps 
> a CryptoOutputStream, which in turn wraps a DFSOutputStream.
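> For illustration, the chain can be seen by unwrapping the returned stream. 
> This is just a sketch: /ez/file is a hypothetical path inside an encrypted 
> zone, and fs is an already-initialized FileSystem.
> {code:java}
> import java.io.OutputStream;
> import org.apache.hadoop.crypto.CryptoOutputStream;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.Path;
> 
> // Unwrap the layers returned for a file in an encrypted zone.
> FSDataOutputStream out = fs.create(new Path("/ez/file"));
> OutputStream crypto = out.getWrappedStream();          // CryptoOutputStream
> OutputStream dfsOut =
>     ((CryptoOutputStream) crypto).getWrappedStream(); // DFSOutputStream
> {code}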
> Even though my code correctly calls stream.close() on the above, I can see 
> from debugging that the underlying DFSOutputStream objects are being leaked.
> Specifically, the DFSOutputStream objects are leaked via the 
> filesBeingWritten map in DFSClient: they remain in the map even though I 
> have called close() on the stream object.
> I suspect this is due to a bug in CryptoOutputStream::close
> {code:java}
>   @Override
>   public synchronized void close() throws IOException {
>     if (closed) {
>       return;
>     }
>     try {
>       flush();
>       if (closeOutputStream) {
>         super.close();
>         codec.close();
>       }
>       freeBuffers();
>     } finally {
>       closed = true;
>     }
>   }
> {code}
> ... whereby if flush() throws (observed in my case as a 
> DSQuotaExceededException when the quota is exceeded), the super.close() on 
> the underlying DFSOutputStream is skipped.
> In my case I had a space quota set on a directory that is also in an 
> encrypted zone, so each attempt to create and write to a file failed and 
> leaked as above.
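> To illustrate, here is a minimal sketch of the pattern that triggered the 
> leak for me (paths and quota values are hypothetical; assumes /ez is an 
> encrypted zone with a small space quota):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
> 
> // Hypothetical setup: /ez is an encrypted zone with a tiny space quota,
> // e.g. set via: hdfs dfsadmin -setSpaceQuota 1k /ez
> FileSystem fs = FileSystem.get(new Configuration());
> for (int i = 0; i < 100; i++) {
>   Path p = new Path("/ez/file-" + i);
>   try (FSDataOutputStream out = fs.create(p)) {   // HdfsDataOutputStream
>     out.write(new byte[64 * 1024]);               // exceeds the quota
>   } catch (DSQuotaExceededException e) {
>     // Expected failure, but even though try-with-resources called
>     // close(), the wrapped DFSOutputStream remains in DFSClient's
>     // filesBeingWritten map.
>   }
> }
> {code}
> Each failed attempt leaves another DFSOutputStream behind in the map.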
> I have attached a speculative patch 
> ([^hadoop_cryto_stream_close_try_finally.diff]) which simply wraps the 
> flush() call in a try/finally.  The patch resolves the problem in my testing.
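> For reference, here is a minimal sketch of the shape of that fix, based on 
> the close() body quoted above (the attached diff is authoritative):
> {code:java}
>   @Override
>   public synchronized void close() throws IOException {
>     if (closed) {
>       return;
>     }
>     try {
>       try {
>         flush();
>       } finally {
>         // Run the cleanup path even when flush() throws (e.g. a
>         // DSQuotaExceededException), so the underlying DFSOutputStream
>         // is closed and removed from DFSClient's filesBeingWritten map.
>         if (closeOutputStream) {
>           super.close();
>           codec.close();
>         }
>         freeBuffers();
>       }
>     } finally {
>       closed = true;
>     }
>   }
> {code}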
> Thanks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
