[jira] [Commented] (HDFS-4732) TestDFSUpgradeFromImage fails on Windows due to failure to unpack old image tarball that contains hard links

2013-04-24 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640463#comment-13640463 ]

Hudson commented on HDFS-4732:
--

Integrated in Hadoop-Mapreduce-trunk #1409 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1409/])
HDFS-4732. Fix TestDFSUpgradeFromImage which fails on Windows due to 
failure to unpack old image tarball that contains hard links.  Chris Nauroth 
(Revision 1471090)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1471090
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-22-dfs-dir.tgz


> TestDFSUpgradeFromImage fails on Windows due to failure to unpack old image 
> tarball that contains hard links
> 
>
> Key: HDFS-4732
> URL: https://issues.apache.org/jira/browse/HDFS-4732
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Chris Nauroth
>Assignee: Chris Nauroth
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: hadoop-22-dfs-dir.tgz, HDFS-4732.1.patch
>
>
> On non-Windows, {{FileUtil#unTar}} is implemented using external Unix shell 
> commands.  On Windows, {{FileUtil#unTar}} is implemented in Java using 
> commons-compress.  {{TestDFSUpgradeFromImage}} uses a testing tarball image 
> of an old HDFS layout version, hadoop-22-dfs-dir.tgz.  This file contains 
> hard links.  It appears that commons-compress cannot handle the hard links 
> correctly.  When it unpacks the file, each hard link ends up as a 0-length 
> file.  This causes the test to fail during cluster startup, because the 
> 0-length block files are considered corrupt.
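
To illustrate the failure mode described above, here is a minimal sketch of .tgz extraction with commons-compress.  It is not Hadoop's {{FileUtil#unTar}}; the class name, destination path, and the hard-link handling shown are illustrative assumptions.  A hard-link entry in a tar stream carries no file data, only the name of the entry it links to, so an extraction loop that treats every entry as a regular file writes a 0-length file for each hard link, which is exactly the symptom the test hits.

{code:java}
// Illustrative sketch only -- not FileUtil#unTar.  Unpacks a .tgz with
// commons-compress and materializes hard-link entries as copies of the
// already-extracted link target instead of 0-length files.
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;
import org.apache.commons.compress.utils.IOUtils;

public class UnTgzSketch {

  public static void unTar(Path tarball, Path destDir) throws IOException {
    try (InputStream in = new BufferedInputStream(Files.newInputStream(tarball));
         TarArchiveInputStream tar =
             new TarArchiveInputStream(new GzipCompressorInputStream(in))) {
      TarArchiveEntry entry;
      while ((entry = tar.getNextTarEntry()) != null) {
        Path target = destDir.resolve(entry.getName()).normalize();
        if (entry.isDirectory()) {
          Files.createDirectories(target);
        } else if (entry.isLink()) {
          // Hard link: the tar stream holds no data for this entry, only the
          // name of the link target.  Copy the previously extracted target;
          // reading the (empty) entry payload would yield a 0-length file.
          Path linkTarget = destDir.resolve(entry.getLinkName()).normalize();
          Files.createDirectories(target.getParent());
          Files.copy(linkTarget, target, StandardCopyOption.REPLACE_EXISTING);
        } else {
          Files.createDirectories(target.getParent());
          try (OutputStream out = Files.newOutputStream(target)) {
            IOUtils.copy(tar, out);  // copy this entry's bytes
          }
        }
      }
    }
  }

  public static void main(String[] args) throws IOException {
    // Paths are placeholders for this example.
    unTar(Paths.get("hadoop-22-dfs-dir.tgz"), Paths.get("extracted-dfs-dir"));
  }
}
{code}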

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HDFS-4732) TestDFSUpgradeFromImage fails on Windows due to failure to unpack old image tarball that contains hard links

2013-04-24 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640413#comment-13640413 ]

Hudson commented on HDFS-4732:
--

Integrated in Hadoop-Hdfs-trunk #1382 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1382/])
HDFS-4732. Fix TestDFSUpgradeFromImage which fails on Windows due to 
failure to unpack old image tarball that contains hard links.  Chris Nauroth 
(Revision 1471090)

 Result = FAILURE
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1471090
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-22-dfs-dir.tgz




[jira] [Commented] (HDFS-4732) TestDFSUpgradeFromImage fails on Windows due to failure to unpack old image tarball that contains hard links

2013-04-24 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13640328#comment-13640328 ]

Hudson commented on HDFS-4732:
--

Integrated in Hadoop-Yarn-trunk #193 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/193/])
HDFS-4732. Fix TestDFSUpgradeFromImage which fails on Windows due to 
failure to unpack old image tarball that contains hard links.  Chris Nauroth 
(Revision 1471090)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1471090
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-22-dfs-dir.tgz




[jira] [Commented] (HDFS-4732) TestDFSUpgradeFromImage fails on Windows due to failure to unpack old image tarball that contains hard links

2013-04-23 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13639409#comment-13639409 ]

Hudson commented on HDFS-4732:
--

Integrated in Hadoop-trunk-Commit #3650 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3650/])
HDFS-4732. Fix TestDFSUpgradeFromImage which fails on Windows due to 
failure to unpack old image tarball that contains hard links.  Chris Nauroth 
(Revision 1471090)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1471090
Files : 
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgradeFromImage.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/hadoop-22-dfs-dir.tgz




[jira] [Commented] (HDFS-4732) TestDFSUpgradeFromImage fails on Windows due to failure to unpack old image tarball that contains hard links

2013-04-23 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13639363#comment-13639363 ]

Hadoop QA commented on HDFS-4732:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12580073/HDFS-4732.1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test file.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4295//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4295//console

This message is automatically generated.



[jira] [Commented] (HDFS-4732) TestDFSUpgradeFromImage fails on Windows due to failure to unpack old image tarball that contains hard links

2013-04-23 Thread Chris Nauroth (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-4732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13639217#comment-13639217 ]

Chris Nauroth commented on HDFS-4732:
-

There are two components to this patch:

# I'm providing a new copy of hadoop-22-dfs-dir.tgz, attached to this issue.  
It's equivalent to the old one, but contains no hard links.  As a side effect, 
the file is bigger, because bytes that were previously shared via hard links 
are now duplicated.  Before: ~170 KB compressed, ~40 MB uncompressed.  After: 
~310 KB compressed, ~80 MB uncompressed.  Since only the compressed version is 
stored in the repo, this doesn't make the source tree much bigger.
# While investigating, I noticed a code path in {{TestDFSUpgradeFromImage}} 
where we could fail to shut down a {{MiniDFSCluster}}.  In practice this isn't 
an issue, but I'd like to clean it up (see the sketch after this list).
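
For context on item 2, the usual guard looks roughly like the sketch below.  This is the generic pattern, not the actual code in {{TestDFSUpgradeFromImage}} or the patch; the method name and node count are illustrative.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class ClusterShutdownGuard {

  // Generic pattern: ensure the cluster is shut down even if the upgrade
  // or verification steps throw.
  static void runWithCluster(Configuration conf) throws IOException {
    MiniDFSCluster cluster = null;
    try {
      cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
      // ... upgrade and verification steps would go here ...
    } finally {
      if (cluster != null) {
        cluster.shutdown();
      }
    }
  }
}
{code}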

To the committer: please download and commit hadoop-22-dfs-dir.tgz 
independently of the patch file.  It's a binary file, so I couldn't include it 
in the patch.  It needs to go in 
hadoop-hdfs-project/hadoop-hdfs/src/test/resources and overwrite the existing 
file.

I tested this successfully on Mac (bsdtar), Ubuntu (GNU tar), and Windows 
(commons-compress).
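
As a quick sanity check on the replacement tarball, a committer could run something like the sketch below to confirm that no hard-link or symlink entries remain.  This is illustrative only and not part of the patch; the class name and the assumption that the tarball sits in the working directory are mine.

{code:java}
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;

public class CheckNoLinkEntries {
  public static void main(String[] args) throws IOException {
    int files = 0;
    int links = 0;
    try (InputStream in = new BufferedInputStream(
             Files.newInputStream(Paths.get("hadoop-22-dfs-dir.tgz")));
         TarArchiveInputStream tar =
             new TarArchiveInputStream(new GzipCompressorInputStream(in))) {
      TarArchiveEntry entry;
      while ((entry = tar.getNextTarEntry()) != null) {
        if (entry.isLink() || entry.isSymbolicLink()) {
          // Any hit here means the archive still depends on link support.
          links++;
          System.out.println("link entry: " + entry.getName());
        } else if (!entry.isDirectory()) {
          files++;
        }
      }
    }
    System.out.println(files + " regular files, " + links + " link entries");
  }
}
{code}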

