[ https://issues.apache.org/jira/browse/HDFS-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13670964#comment-13670964 ]

Hadoop QA commented on HDFS-4850:
---------------------------------

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12585488/HDFS-4850.001.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/4461//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/4461//console

This message is automatically generated.
                
> OfflineImageViewer fails on fsimage with empty file because of 
> NegativeArraySizeException
> -----------------------------------------------------------------------------------------
>
>                 Key: HDFS-4850
>                 URL: https://issues.apache.org/jira/browse/HDFS-4850
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: tools
>    Affects Versions: 3.0.0
>            Reporter: Stephen Chu
>            Assignee: Jing Zhao
>              Labels: snapshot
>         Attachments: datadirs.tar.gz, fsimage_0000000000000000004, 
> fsimage_0000000000000000008, HDFS-4850.000.patch, HDFS-4850.001.patch, 
> oiv_out_1, oiv_out_2
>
>
> I deployed hadoop-trunk HDFS and created _/user/schu/_. I then forced a 
> checkpoint, fetched the fsimage, and ran the default OfflineImageViewer 
> successfully on the fsimage.
> {code}
> schu-mbp:~ schu$ hdfs oiv -i fsimage_0000000000000000004 -o oiv_out_1
> schu-mbp:~ schu$ cat oiv_out_1
> drwxr-xr-x  -     schu supergroup          0 2013-05-24 16:59 /
> drwxr-xr-x  -     schu supergroup          0 2013-05-24 16:59 /user
> drwxr-xr-x  -     schu supergroup          0 2013-05-24 16:59 /user/schu
> schu-mbp:~ schu$ 
> {code}
> I then touched an empty file _/user/schu/testFile1_:
> {code}
> schu-mbp:~ schu$ hadoop fs -lsr /
> lsr: DEPRECATED: Please use 'ls -R' instead.
> drwxr-xr-x   - schu supergroup          0 2013-05-24 16:59 /user
> drwxr-xr-x   - schu supergroup          0 2013-05-24 17:00 /user/schu
> -rw-r--r--   1 schu supergroup          0 2013-05-24 17:00 
> /user/schu/testFile1
> {code}
> and forced another checkpoint, fetched the fsimage, and reran the 
> OfflineImageViewer. I encountered a NegativeArraySizeException:
> {code}
> schu-mbp:~ schu$ hdfs oiv -i fsimage_0000000000000000008 -o oiv_out_2
> Input ended unexpectedly.
> 2013-05-24 17:01:13,622 ERROR [main] offlineImageViewer.OfflineImageViewer 
> (OfflineImageViewer.java:go(140)) - image loading failed at offset 402
> Exception in thread "main" java.lang.NegativeArraySizeException
>       at org.apache.hadoop.io.Text.readString(Text.java:458)
>       at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent.processPermission(ImageLoaderCurrent.java:370)
>       at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent.processINode(ImageLoaderCurrent.java:671)
>       at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent.processChildren(ImageLoaderCurrent.java:557)
>       at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent.processDirectoryWithSnapshot(ImageLoaderCurrent.java:464)
>       at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent.processDirectoryWithSnapshot(ImageLoaderCurrent.java:470)
>       at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent.processDirectoryWithSnapshot(ImageLoaderCurrent.java:470)
>       at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent.processLocalNameINodesWithSnapshot(ImageLoaderCurrent.java:444)
>       at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent.processINodes(ImageLoaderCurrent.java:398)
>       at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.ImageLoaderCurrent.loadImage(ImageLoaderCurrent.java:199)
>       at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer.go(OfflineImageViewer.java:136)
>       at 
> org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewer.main(OfflineImageViewer.java:260)
> {code}
> This is reproducible: I hit the same failure after formatting HDFS, 
> restarting, and touching an empty file _/testFile1_.
> Attached are the data dirs, the fsimage from before creating the empty file 
> (fsimage_0000000000000000004), the fsimage from afterwards 
> (fsimage_0000000000000000008), and their respective outputs, oiv_out_1 and 
> oiv_out_2.
> Note that oiv_out_2 does not include the empty _/user/schu/testFile1_.
> I don't run into this problem using hadoop-2.0.4-alpha.
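The stack trace above bottoms out in Text.readString, which reads a length prefix and then allocates a byte array of that size. The sketch below is a simplified stand-in (not Hadoop's actual implementation; it uses a plain 4-byte int where Hadoop uses a vint) showing how a mis-parsed stream can hand that allocation a negative length and trigger NegativeArraySizeException:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class NegativeLengthDemo {
    // Simplified model of a length-prefixed string read: a length,
    // then that many UTF-8 bytes. Hypothetical stand-in for Text.readString.
    static String readLengthPrefixedString(DataInputStream in) throws IOException {
        int length = in.readInt();        // stand-in for WritableUtils.readVInt
        byte[] bytes = new byte[length];  // throws NegativeArraySizeException if length < 0
        in.readFully(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // A well-formed record: length 2 followed by the bytes "ok".
        byte[] good = {0, 0, 0, 2, 'o', 'k'};
        System.out.println(readLengthPrefixedString(
                new DataInputStream(new ByteArrayInputStream(good))));

        // If the loader's offset drifts (as it apparently does for the empty
        // file here), unrelated bytes get interpreted as the length; 0xFFFFFFFE
        // reads as -2 and the array allocation fails.
        byte[] misaligned = {(byte) 0xFF, (byte) 0xFF, (byte) 0xFF, (byte) 0xFE};
        try {
            readLengthPrefixedString(
                    new DataInputStream(new ByteArrayInputStream(misaligned)));
        } catch (NegativeArraySizeException e) {
            System.out.println("NegativeArraySizeException: length was negative");
        }
    }
}
```

In other words, the exception is a symptom of the loader losing its place in the image stream at the empty file's record, not of a string actually having a negative length.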

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
