[jira] [Created] (HDFS-8566) HDFS debug Command usage is wrong.

2015-06-09 Thread surendra singh lilhore (JIRA)
surendra singh lilhore created HDFS-8566:


 Summary: HDFS debug Command usage is wrong.
 Key: HDFS-8566
 URL: https://issues.apache.org/jira/browse/HDFS-8566
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.0
Reporter: surendra singh lilhore
Assignee: surendra singh lilhore


http://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html#recoverLease

{code}
Usage: hdfs dfs recoverLease [-path path] [-retries num-retries]
{code}

*Expected:*

{code}
Usage: hdfs debug recoverLease [-path path] [-retries num-retries]
{code}

The same applies to the {{verify}} command.
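
For reference, an example invocation of the corrected command (the path and 
retry count here are illustrative):

{code}
$ hdfs debug recoverLease -path /user/foo/file1 -retries 3
{code}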



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8567) Erasure Coding: SafeMode handles file smaller than a full stripe

2015-06-09 Thread Walter Su (JIRA)
Walter Su created HDFS-8567:
---

 Summary: Erasure Coding: SafeMode handles file smaller than a full 
stripe
 Key: HDFS-8567
 URL: https://issues.apache.org/jira/browse/HDFS-8567
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su


Uploaded 3 small files and restarted the NN; it can't leave safemode. 
Presumably the safe-block accounting expects a full stripe's worth of internal 
blocks per block group, so block groups of files smaller than a stripe never 
reach their expected count.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8563) Erasure Coding: fsck handles file smaller than a full stripe

2015-06-09 Thread Walter Su (JIRA)
Walter Su created HDFS-8563:
---

 Summary: Erasure Coding: fsck handles file smaller than a full 
stripe
 Key: HDFS-8563
 URL: https://issues.apache.org/jira/browse/HDFS-8563
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su


Uploaded a small file. Fsck reports it as UNRECOVERABLE, which is not correct: 
with the RS-6-3 schema a 1366 B file occupies only one data block, so the fixed 
minimum of 6 EC blocks (note the MIN REQUIRED EC BLOCK and Missing ec-blocks 
lines below) misclassifies it.
{noformat}
Erasure Coded Block Groups:
 Total size:1366 B
 Total files:   1
 Total block groups (validated):1 (avg. block group size 1366 B)
  
  UNRECOVERABLE BLOCK GROUPS:   1 (100.0 %)
  MIN REQUIRED EC BLOCK:6
  
 Minimally erasure-coded block groups:  0 (0.0 %)
 Over-erasure-coded block groups:   0 (0.0 %)
 Under-erasure-coded block groups:  1 (100.0 %)
 Unsatisfactory placement block groups: 0 (0.0 %)
 Default schema:RS-6-3
 Average block group size:  4.0
 Missing block groups:  0
 Corrupt block groups:  0
 Missing ec-blocks: 5 (55.57 %)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8564) BlockPoolSlice.checkDirs() will trigger excessive IO while traversing all sub-directories under finalizedDir

2015-06-09 Thread Esteban Gutierrez (JIRA)
Esteban Gutierrez created HDFS-8564:
---

 Summary: BlockPoolSlice.checkDirs() will trigger excessive IO 
while traversing all sub-directories under finalizedDir
 Key: HDFS-8564
 URL: https://issues.apache.org/jira/browse/HDFS-8564
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, HDFS
Affects Versions: 3.0.0
Reporter: Esteban Gutierrez
Priority: Critical


DataNodes continuously call checkDiskErrorAsync() for multiple operations in 
the DN in order to verify that a volume hasn't experienced any failure. When 
DN.startCheckDiskErrorThread() is invoked, we need to traverse all configured 
data volumes on a DN to see which volumes need to be removed (see 
FsVolumeList.checkDirs()). However, that means that for each BlockPoolSlice 
we need to call DiskChecker.checkDirs(), which recursively looks into the 
rbw, tmp, and finalized directories:

{code}
void checkDirs() throws DiskErrorException {
  DiskChecker.checkDirs(finalizedDir);
  DiskChecker.checkDir(tmpDir);
  DiskChecker.checkDir(rbwDir);
}
{code}

Unfortunately after HDFS-6482, the subdirectory structure is created with the 
following algorithm:

{code}
public static File idToBlockDir(File root, long blockId) {
  int d1 = (int) ((blockId >> 16) & 0xff);
  int d2 = (int) ((blockId >> 8) & 0xff);
  String path = DataStorage.BLOCK_SUBDIR_PREFIX + d1 + SEP +
      DataStorage.BLOCK_SUBDIR_PREFIX + d2;
  return new File(root, path);
}
{code}

This leaves each data volume with up to 64K directories (256 directories x 256 
subdirectories). A side effect is that if the dentries haven't been cached by 
the OS, the DN needs to recursively scan up to 64K directories x the number of 
configured data volumes (x the number of files), impacting IO for other 
operations while DiskChecker.checkDirs(finalizedDir) is running.
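
To make the arithmetic concrete, here is a minimal standalone sketch 
(illustrative only, not the Hadoop code; the "subdir" prefix mirrors 
DataStorage.BLOCK_SUBDIR_PREFIX):

{code}
// d1 and d2 each range over 0..255, so a fully populated volume ends up
// with 256 x 256 = 65,536 leaf directories under finalizedDir.
public class BlockDirLayout {
  static String idToPath(long blockId) {
    int d1 = (int) ((blockId >> 16) & 0xff);
    int d2 = (int) ((blockId >> 8) & 0xff);
    return "subdir" + d1 + "/subdir" + d2;
  }

  public static void main(String[] args) {
    // Block IDs that differ only in the low 8 bits share a leaf directory.
    System.out.println(idToPath(0x12345600L)); // subdir52/subdir86
    System.out.println(idToPath(0x123456FFL)); // subdir52/subdir86
  }
}
{code}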

There are a few possibilities to address this problem:

1. Do not scan finalizedDir at all.
2. Limit the recursive scan to one level of subdirectories (256).
3. Remove a subdirectory immediately once it no longer has any block under it.










--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk #2151

2015-06-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2151/changes

Changes:

[stevel] HADOOP-12052 IPC client downgrades all exception types to IOE, breaks 
callers trying to use them. (Brahma Reddy Battula via stevel)

[cnauroth] HDFS-8554. TestDatanodeLayoutUpgrade fails on Windows. Contributed 
by Chris Nauroth.

[jianhe] YARN-2716. Refactor ZKRMStateStore retry code with Apache Curator. 
Contributed by Karthik Kambatla

[ozawa] MAPREDUCE-6388. Remove deprecation warnings from JobHistoryServer 
classes. Contributed by Ray Chiang.

[xgong] YARN-3778. Fix Yarn resourcemanger CLI usage. Contributed by Brahma 
Reddy Battula

[arp] HADOOP-12054. RPC client should not retry for InvalidToken exceptions. 
(Contributed by Varun Saxena)

[cnauroth] HDFS-8553. Document hdfs class path options. Contributed by Brahma 
Reddy Battula.

[cnauroth] YARN-3786. Document yarn class path options. Contributed by Brahma 
Reddy Battula.

[cnauroth] MAPREDUCE-6392. Document mapred class path options. Contributed by 
Brahma Reddy Battula.

[cmccabe] HADOOP-11347. RawLocalFileSystem#mkdir and create should honor umask 
(Varun Saxena via Colin P. McCabe)

[xyao] HDFS-8552. Fix hdfs CLI usage message for namenode and zkfc. Contributed 
by Brahma Reddy Battula

[cnauroth] HADOOP-12073. Azure FileSystem PageBlobInputStream does not return 
-1 on EOF. Contributed by Ivan Mitic.

[zjshen] YARN-3787. Allowed generic history service to load a number of 
applications whose started time is within the given range. Contributed by Xuan 
Gong.

--
[...truncated 6659 lines...]
Running org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.215 sec - in 
org.apache.hadoop.hdfs.web.TestOffsetUrlInputStream
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.243 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.72 sec - in 
org.apache.hadoop.hdfs.web.TestAuthFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.517 sec - 
in org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Running org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.367 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.624 sec - in 
org.apache.hadoop.hdfs.web.TestJsonUtil
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.014 sec - 
in org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Running org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Tests run: 64, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 28.627 sec - 
in org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.967 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Running org.apache.hadoop.hdfs.web.TestURLConnectionFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.193 sec - in 
org.apache.hadoop.hdfs.web.TestURLConnectionFactory
Running org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.117 sec - in 
org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Running org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.646 sec - in 
org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.929 sec - in 
org.apache.hadoop.hdfs.web.resources.TestParam
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.289 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.375 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsContentLength
Running org.apache.hadoop.hdfs.web.TestByteRangeInputStream
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.407 sec - in 
org.apache.hadoop.hdfs.web.TestByteRangeInputStream
Running org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.893 sec - 
in org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.701 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Running 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #212

2015-06-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/212/changes

Changes:

[stevel] HADOOP-12052 IPC client downgrades all exception types to IOE, breaks 
callers trying to use them. (Brahma Reddy Battula via stevel)

[cnauroth] HDFS-8554. TestDatanodeLayoutUpgrade fails on Windows. Contributed 
by Chris Nauroth.

[jianhe] YARN-2716. Refactor ZKRMStateStore retry code with Apache Curator. 
Contributed by Karthik Kambatla

[ozawa] MAPREDUCE-6388. Remove deprecation warnings from JobHistoryServer 
classes. Contributed by Ray Chiang.

[xgong] YARN-3778. Fix Yarn resourcemanger CLI usage. Contributed by Brahma 
Reddy Battula

[arp] HADOOP-12054. RPC client should not retry for InvalidToken exceptions. 
(Contributed by Varun Saxena)

[cnauroth] HDFS-8553. Document hdfs class path options. Contributed by Brahma 
Reddy Battula.

[cnauroth] YARN-3786. Document yarn class path options. Contributed by Brahma 
Reddy Battula.

[cnauroth] MAPREDUCE-6392. Document mapred class path options. Contributed by 
Brahma Reddy Battula.

[cmccabe] HADOOP-11347. RawLocalFileSystem#mkdir and create should honor umask 
(Varun Saxena via Colin P. McCabe)

[xyao] HDFS-8552. Fix hdfs CLI usage message for namenode and zkfc. Contributed 
by Brahma Reddy Battula

[cnauroth] HADOOP-12073. Azure FileSystem PageBlobInputStream does not return 
-1 on EOF. Contributed by Ivan Mitic.

[zjshen] YARN-3787. Allowed generic history service to load a number of 
applications whose started time is within the given range. Contributed by Xuan 
Gong.

--
[...truncated 7191 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.941 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.783 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.668 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.546 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.475 sec - 
in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.184 sec - 
in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDisableConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.3 sec - in 
org.apache.hadoop.hdfs.TestDisableConnCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.986 sec - in 
org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.575 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.424 sec - in 
org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.492 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestBlockReaderFactory
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.939 sec - in 
org.apache.hadoop.hdfs.TestBlockReaderFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 

Hadoop-Hdfs-trunk-Java8 - Build # 212 - Failure

2015-06-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/212/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7384 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [ 48.349 s]
[INFO] Apache Hadoop HDFS  FAILURE [  02:47 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.150 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:48 h
[INFO] Finished at: 2015-06-09T14:23:27+00:00
[INFO] Final Memory: 52M/175M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #211
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 807149 bytes
Compression is 0.0%
Took 25 sec
Recording test results
Updating YARN-3787
Updating HDFS-8552
Updating HADOOP-12073
Updating MAPREDUCE-6392
Updating YARN-3786
Updating HDFS-8553
Updating YARN-3778
Updating HADOOP-11347
Updating HDFS-8554
Updating YARN-2716
Updating HADOOP-12054
Updating MAPREDUCE-6388
Updating HADOOP-12052
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
3 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithEmptyClusterIdOption

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithEmptyClusterIdOption(TestClusterId.java:292)


REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithNoClusterIdOption

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithNoClusterIdOption(TestClusterId.java:265)


REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithInvalidClusterIdOption

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithInvalidClusterIdOption(TestClusterId.java:239)




Hadoop-Hdfs-trunk - Build # 2151 - Failure

2015-06-09 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2151/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6852 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [ 46.520 s]
[INFO] Apache Hadoop HDFS  FAILURE [  02:38 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.055 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:39 h
[INFO] Finished at: 2015-06-09T14:13:55+00:00
[INFO] Final Memory: 62M/699M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2150
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 362763 bytes
Compression is 0.0%
Took 10 sec
Recording test results
Updating YARN-3787
Updating HDFS-8552
Updating HADOOP-12073
Updating MAPREDUCE-6392
Updating YARN-3786
Updating HDFS-8553
Updating YARN-3778
Updating HADOOP-11347
Updating HDFS-8554
Updating YARN-2716
Updating HADOOP-12054
Updating MAPREDUCE-6388
Updating HADOOP-12052
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
3 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithEmptyClusterIdOption

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithEmptyClusterIdOption(TestClusterId.java:292)


REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithNoClusterIdOption

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithNoClusterIdOption(TestClusterId.java:265)


REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithInvalidClusterIdOption

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithInvalidClusterIdOption(TestClusterId.java:239)




[DISCUSS] Using maven-jarjar-plugin for avoiding classpath conflicts

2015-06-09 Thread Tsuyoshi Ozawa
Hi,

Recently, I've been tackling dependency problems with Guava, Jetty,
and Jersey. Essentially, it's similar to DLL hell.

I've seen that Google Guice uses jarjar-maven-plugin to avoid
classpath conflicts between user-side and library-side dependencies.

http://sonatype.github.io/jarjar-maven-plugin/

It looks good to me, but it can break backward compatibility of the
classpath. Could we use this plugin for Guava, Jetty, Jersey, and so
on? I believe it would reduce the effort of keeping dependencies
compatible once it's introduced. What do you think?

Thanks,
- Tsuyoshi


Re: [DISCUSS] Using maven-jarjar-plugin for avoiding classpath conflicts

2015-06-09 Thread Tsuyoshi Ozawa
Hi Andrew,

I hadn't noticed that HADOOP-11656 covers renaming and repackaging
libraries. I'll check it.

Thanks,
- Tsuyoshi

On Tue, Jun 9, 2015 at 2:05 PM, Andrew Wang andrew.w...@cloudera.com wrote:
 Hi Tsuyoshi,

 I think Sean is already working on something similar at HADOOP-11656 with
 shading the hadoop client. Have you reviewed his proposal?

 Best,
 Andrew

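
For context, HADOOP-11656's shading approach relocates dependency packages at 
build time. A minimal maven-shade-plugin relocation sketch (the shaded package 
name is illustrative):

{code}
<!-- Illustrative: rewrites Guava's packages inside the shaded jar so that
     user code can bring its own Guava version without conflicts. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hadoop.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}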



[jira] [Reopened] (HDFS-196) File length not reported correctly after application crash

2015-06-09 Thread Kevin Beyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Beyer reopened HDFS-196:
--

The HDFS file length reported by ls may be less than the number of bytes found 
when reading. I created the mismatched file with kill -9 during a copy, so that 
the client didn't shut down its connection to the namenode properly. The 
misreported length persisted after restarting HDFS.

{quote}
$ hdfs dfs -copyFromLocal junk17 /tmp/.
2015-06-09 13:09:25,742 WARN  [main] util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
^Z
[1]+  Stopped hdfs dfs -copyFromLocal junk17 /tmp/.
$ kill -9 %1

[1]+  Stopped hdfs dfs -copyFromLocal junk17 /tmp/.
$ fg
-bash: fg: job has terminated
[1]+  Killed: 9   hdfs dfs -copyFromLocal junk17 /tmp/.
$ hdfs dfs -ls /tmp
2015-06-09 13:09:45,730 WARN  [main] util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
Found 3 items
drwxrwx---   - jane supergroup  0 2015-05-28 14:26 /tmp/hadoop-yarn
drwx-wx-wx   - jane supergroup  0 2015-05-28 14:26 /tmp/hive
-rw-r--r--   1 jane supergroup 1073741824 2015-06-09 13:09 /tmp/junk17._COPYING_
$ hdfs dfs -ls /tmp
2015-06-09 13:09:55,345 WARN  [main] util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
Found 3 items
drwxrwx---   - jane supergroup  0 2015-05-28 14:26 /tmp/hadoop-yarn
drwx-wx-wx   - jane supergroup  0 2015-05-28 14:26 /tmp/hive
-rw-r--r--   1 jane supergroup 1073741824 2015-06-09 13:09 /tmp/junk17._COPYING_
$ hdfs dfs -cat /tmp/junk17._COPYING_ | wc -c
 1207959752
$ hdfs dfs -ls /tmp
2015-06-09 13:11:21,389 WARN  [main] util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
Found 3 items
drwxrwx---   - jane supergroup  0 2015-05-28 14:26 /tmp/hadoop-yarn
drwx-wx-wx   - jane supergroup  0 2015-05-28 14:26 /tmp/hive
-rw-r--r--   1 jane supergroup 1073741824 2015-06-09 13:09 /tmp/junk17._COPYING_
$ hdfs dfs -cp /tmp/junk17._COPYING_ /tmp/junk18
2015-06-09 13:13:38,963 WARN  [main] util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
$ hdfs dfs -ls /tmp
2015-06-09 13:13:45,575 WARN  [main] util.NativeCodeLoader 
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for 
your platform... using builtin-java classes where applicable
Found 4 items
drwxrwx---   - jane supergroup  0 2015-05-28 14:26 /tmp/hadoop-yarn
drwx-wx-wx   - jane supergroup  0 2015-05-28 14:26 /tmp/hive
-rw-r--r--   1 jane supergroup 1073741824 2015-06-09 13:09 /tmp/junk17._COPYING_
-rw-r--r--   1 jane supergroup 1207959552 2015-06-09 13:13 /tmp/junk18
{quote}

{quote}
$ hdfs version
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 
e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
{quote}

 File length not reported correctly after application crash
 --

 Key: HDFS-196
 URL: https://issues.apache.org/jira/browse/HDFS-196
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Doug Judd

 Our application (Hypertable) creates a transaction log in HDFS.  This log is 
 written with the following pattern:
{code}
out_stream.write(header, 0, 7);
out_stream.sync();
out_stream.write(data, 0, amount);
out_stream.sync();
[...]
{code}
 However, if the application crashes and then comes back up again, the 
 following statement
{code}
length = mFilesystem.getFileStatus(new Path(fileName)).getLen();
{code}
 returns the wrong length.  Apparently this is because this method fetches the 
 length information from the NameNode, which is stale.  Ideally, a call to 
 getFileStatus() would return the accurate file length by fetching the size of 
 the last block from the primary datanode.
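
 A minimal sketch of that write pattern against the current FileSystem API 
 (illustrative only: hflush() stands in for the deprecated sync(), and the 
 path and payload are made up):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TxLogWriteSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path log = new Path("/tmp/txlog");  // hypothetical path
    try (FSDataOutputStream out = fs.create(log)) {
      byte[] header = new byte[7];
      out.write(header, 0, header.length);
      out.hflush();  // make the bytes visible to new readers
      byte[] data = "payload".getBytes("UTF-8");
      out.write(data, 0, data.length);
      out.hflush();
    }
    // As the issue describes, the NameNode's answer can still be stale here:
    System.out.println(fs.getFileStatus(log).getLen());
  }
}
{code}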



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8568) TestClusterId is failing

2015-06-09 Thread Rakesh R (JIRA)
Rakesh R created HDFS-8568:
--

 Summary: TestClusterId is failing
 Key: HDFS-8568
 URL: https://issues.apache.org/jira/browse/HDFS-8568
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R


It fails with the following exception:

{code}
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.namenode.TestClusterId.testFormatWithEmptyClusterIdOption(TestClusterId.java:292)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8569) TestDeadDatanode#testDeadDatanode is failing

2015-06-09 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-8569:
--

 Summary: TestDeadDatanode#testDeadDatanode is failing
 Key: HDFS-8569
 URL: https://issues.apache.org/jira/browse/HDFS-8569
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula


 *Error Message* 

org/apache/hadoop/util/IntrusiveCollection$IntrusiveIterator

 *Stacktrace* 

java.lang.NoClassDefFoundError: 
org/apache/hadoop/util/IntrusiveCollection$IntrusiveIterator
at 
org.apache.hadoop.util.IntrusiveCollection.iterator(IntrusiveCollection.java:213)
at 
org.apache.hadoop.util.IntrusiveCollection.clear(IntrusiveCollection.java:368)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.clearPendingCachingCommands(DatanodeManager.java:1581)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1235)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.close(FSNamesystem.java:1557)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.stopCommonServices(NameNode.java:704)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:868)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1752)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1721)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1714)
at 
org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode.cleanup(TestDeadDatanode.java:59)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)