[ https://issues.apache.org/jira/browse/HDFS-4732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640463#comment-13640463 ]

Hudson commented on HDFS-4732:
------------------------------

Integrated in Hadoop-Mapreduce-trunk #1409 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1409/])
    HDFS-4732. Fix TestDFSUpgradeFromImage which fails on Windows due to failure to unpack old image tarball that contains hard links. Chris Nauroth (Revision 1471090)

     Result = SUCCESS
szetszwo : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1471090
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-22-dfs-dir.tgz

                
> TestDFSUpgradeFromImage fails on Windows due to failure to unpack old image 
> tarball that contains hard links
> ------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-4732
>                 URL: https://issues.apache.org/jira/browse/HDFS-4732
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 3.0.0
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: hadoop-22-dfs-dir.tgz, HDFS-4732.1.patch
>
>
> On non-Windows, {{FileUtil#unTar}} is implemented using external Unix shell 
> commands.  On Windows, {{FileUtil#unTar}} is implemented in Java using 
> commons-compress.  {{TestDFSUpgradeFromImage}} uses a testing tarball image 
> of an old HDFS layout version, hadoop-22-dfs-dir.tgz.  This file contains 
> hard links.  It appears that commons-compress cannot handle the hard links 
> correctly.  When it unpacks the file, each hard link ends up as a 0-length 
> file.  This causes the test to fail during cluster startup, because the 
> 0-length block files are considered corrupt.
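
For context, the sketch below illustrates why a pure-Java untar built on commons-compress can produce the 0-length files described above: a hard link entry in a tar stream carries no file data of its own, only the name of the entry whose content it shares, so an extractor that simply drains the entry stream writes an empty file unless it duplicates the link target explicitly. The class and method names here are illustrative only and are not Hadoop's actual {{FileUtil#unTar}} implementation; note that the commit above changes the test and the hadoop-22-dfs-dir.tgz image rather than FileUtil itself.

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;

public class UnTarSketch {
  // Unpacks a .tgz into destDir, duplicating hard link targets so that
  // link entries do not end up as empty files.
  public static void unTar(File tgz, File destDir) throws IOException {
    try (TarArchiveInputStream tar = new TarArchiveInputStream(
        new GzipCompressorInputStream(new FileInputStream(tgz)))) {
      TarArchiveEntry entry;
      while ((entry = tar.getNextTarEntry()) != null) {
        File out = new File(destDir, entry.getName());
        if (entry.isDirectory()) {
          out.mkdirs();
          continue;
        }
        out.getParentFile().mkdirs();
        if (entry.isLink()) {
          // Hard link entry: its size is 0 and its payload lives in the
          // target named by getLinkName(). Draining the entry stream here
          // is what yields 0-length block files; instead, copy the target
          // that was already extracted (tar writes the target before any
          // links to it).
          File target = new File(destDir, entry.getLinkName());
          Files.copy(target.toPath(), out.toPath(),
              StandardCopyOption.REPLACE_EXISTING);
        } else {
          try (OutputStream os = new FileOutputStream(out)) {
            org.apache.commons.compress.utils.IOUtils.copy(tar, os);
          }
        }
      }
    }
  }
}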

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
