[ https://issues.apache.org/jira/browse/HADOOP-18238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17538428#comment-17538428 ]

Steve Loughran commented on HADOOP-18238:
-----------------------------------------

i see the problem. we should only set that reentrancy check after calling 
super.close(), as the delete-on-exit code has to finish before we shut down the 
pool of connections.

happy to accept a PR which moves the check down. 
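The reordering described above can be sketched with stand-in classes. None of the names below are Hadoop's real API: `superClose()` stands in for `FileSystem.close()` (which processes delete-on-exit paths and fails on a closed filesystem), and `poolShutDown` stands in for `connectionPool.shutdown()`. The point is only the ordering: the `closed` guard is consulted first but set only after the super call finishes.

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

public class SftpCloseSketch {
  // Illustrative stand-in fields, not Hadoop's actual members.
  final AtomicBoolean closed = new AtomicBoolean(false);
  boolean deleteOnExitRan = false;
  boolean poolShutDown = false;

  // Stand-in for the super.close() call: it runs the delete-on-exit
  // work, which fails if the filesystem is already marked closed.
  void superClose() throws IOException {
    if (closed.get()) {
      throw new IOException("FileSystem is closed");
    }
    deleteOnExitRan = true; // delete-on-exit happens while still open
  }

  // The reordered close(): check the guard up front for idempotency,
  // but only SET it after super.close() has finished, so the
  // delete-on-exit code still sees an open filesystem.
  public void close() throws IOException {
    if (closed.get()) {
      return; // already closed, nothing to do
    }
    try {
      superClose();
    } finally {
      closed.set(true);
      poolShutDown = true; // stand-in for connectionPool.shutdown()
    }
  }

  public static void main(String[] args) throws IOException {
    SftpCloseSketch fs = new SftpCloseSketch();
    fs.close();
    fs.close(); // second call is a no-op
    System.out.println(fs.deleteOnExitRan + " " + fs.poolShutDown);
    // prints "true true"
  }
}
```

With this ordering, a single manual close() both lets delete-on-exit complete and releases the pool so the JVM can exit, which is exactly the dilemma the reporter describes below.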

> Hadoop 3.3.1 SFTPFileSystem.close() method have problem
> -------------------------------------------------------
>
>                 Key: HADOOP-18238
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18238
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>    Affects Versions: 3.3.1
>            Reporter: yi liu
>            Priority: Major
>
> @Override
> public void close() throws IOException {
>   if (closed.getAndSet(true)) {
>     return;
>   }
>   try {
>     super.close();
>   } finally {
>     if (connectionPool != null) {
>       connectionPool.shutdown();
>     }
>   }
> }
>  
> If you call this method, the fs can no longer execute deleteOnExit, because 
> the fs is already closed.
> If close() is called manually, the SFTP fs shuts down its connection pool so 
> the JVM can exit normally, but deleteOnExit then fails because the fs is 
> already closed. If close() is not called, the connection pool is never 
> released and the JVM cannot exit.
> https://issues.apache.org/jira/browse/HADOOP-17528 is the corresponding 
> 3.2.0 SFTPFileSystem issue.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
