[ https://issues.apache.org/jira/browse/HADOOP-15679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16583394#comment-16583394 ]

Thomas Marquardt commented on HADOOP-15679:
-------------------------------------------

+1, with some comments:

Slightly unrelated to this patch, but do you think FileSystem.closeAll should 
close the file systems sequentially under the FileSystem class lock like it 
currently does, or in parallel outside the FileSystem class lock?  I am 
thinking about opening a Jira and submitting a patch, but want to test the 
waters.
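
To illustrate the parallel variant I have in mind, here is a rough sketch (not the current implementation; it assumes a snapshot of the cached FileSystem instances is taken under the class lock and then closed outside it, and the pool size and 30s wait are arbitrary):

{code:java}
// Sketch only, not the current FileSystem.closeAll() implementation.
// Assumes "snapshot" was copied from the cache while holding the class lock;
// the close() calls themselves then run in parallel outside the lock.
import java.io.IOException;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.fs.FileSystem;

public final class ParallelCloseSketch {

  public static void closeAllInParallel(List<FileSystem> snapshot)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(
        Math.max(1, Math.min(snapshot.size(), 8)));
    for (FileSystem fs : snapshot) {
      pool.execute(() -> {
        try {
          fs.close();   // may block on a final flush/upload; isolated per thread
        } catch (IOException e) {
          // log and continue; one failing close should not abort the others
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(30, TimeUnit.SECONDS);
  }
}
{code}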

ShutdownHookManager.java
  L26: Curious, is the Hadoop coding style to alphabetize the imports?
  L88-93: It doesn't hurt to be overly cautious, but how would this ever 
        be called twice? (A sketch of the run-once guard I am reading this 
        as is below.) Also, the new Exception("here") stack trace should 
        start and end with Thread.run a couple of lines above.
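
For reference, the run-at-most-once guard I am reading L88-93 as is the usual pattern below (illustrative sketch only, not the patch's code):

{code:java}
// Illustrative sketch of a "run at most once" guard; not the patch's code.
import java.util.concurrent.atomic.AtomicBoolean;

public class RunOnceHook implements Runnable {
  private final AtomicBoolean executed = new AtomicBoolean(false);
  private final Runnable delegate;

  public RunOnceHook(Runnable delegate) {
    this.delegate = delegate;
  }

  @Override
  public void run() {
    // compareAndSet turns a second (or concurrent) invocation into a no-op
    if (executed.compareAndSet(false, true)) {
      delegate.run();
    }
  }
}
{code}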

TestShutdownHookManager.java
  L20: alphabetize imports?

Also, here are my test results:

*Tests run: 4122, Failures: 4, Errors: 1, Skipped: 351*

*[ERROR] TestCopyPreserveFlag.testDirectoryCpWithP:168->assertAttributesPreserved:95 expected:<23456000> but was:<1534478390000>*
*[ERROR] TestIPC.testProxyUserBinding:1498->checkUserBinding:1514*
*[ERROR] TestIPC.testUserBinding:1493->checkUserBinding:1514*
*[ERROR] TestNativeCodeLoader.testNativeCodeLoaded:48 TestNativeCodeLoader: libhadoop.so testing was required, but libhadoop.so was not loaded.*
*[ERROR] TestRawLocalFileSystemContract.testPermission:112 » UnsatisfiedLink org.apache...*

I did not debug the test failures; my setup may be missing something.

> ShutdownHookManager shutdown time needs to be configurable & extended
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-15679
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15679
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: util
>    Affects Versions: 2.8.0, 3.0.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>         Attachments: HADOOP-15679-001.patch, HADOOP-15679-002.patch, 
> HADOOP-15679-002.patch
>
>
> HADOOP-12950 added a timeout on shutdowns to avoid problems with hanging 
> shutdowns. But the timeout is too short for applications where a large flush 
> of data is needed on shutdown.
> A key example of this is Spark apps which save their history to object 
> stores, where the file close() call triggers an upload of the final locally 
> cached block of data (could be 32+MB) and then executes the final multipart 
> commit.
> Proposed:
> # make the default sleep time 30s, not 10s
> # make it configurable with a time-duration property (with a minimum time of 
> 1s?)
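
As a purely illustrative sketch of item 2 (the key name, 30s default, and 1s floor below are placeholders, not necessarily what the patch uses), Hadoop's Configuration already supports time-duration values:

{code:java}
// Sketch of reading the shutdown timeout as a time-duration property.
// The key name and defaults are placeholders, not the patch's final values.
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

public class ShutdownTimeoutSketch {
  // hypothetical key, for illustration only
  static final String SHUTDOWN_TIMEOUT_KEY = "hadoop.shutdown.hook.timeout";
  static final long SHUTDOWN_TIMEOUT_DEFAULT_SECONDS = 30;
  static final long SHUTDOWN_TIMEOUT_MINIMUM_SECONDS = 1;

  static long getShutdownTimeoutSeconds(Configuration conf) {
    // getTimeDuration accepts suffixed values such as "30s" or "1m"
    long timeout = conf.getTimeDuration(
        SHUTDOWN_TIMEOUT_KEY,
        SHUTDOWN_TIMEOUT_DEFAULT_SECONDS,
        TimeUnit.SECONDS);
    // enforce the proposed 1s floor so a misconfigured value cannot disable the wait
    return Math.max(timeout, SHUTDOWN_TIMEOUT_MINIMUM_SECONDS);
  }
}
{code}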


