Build failed in Jenkins: Hadoop-Hdfs-trunk #2291

2015-09-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2291/

Changes:

[aajisaka] HDFS-8974. Convert docs in xdoc format to markdown. Contributed by 
Masatake Iwasaki.

--
[...truncated 6625 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.123 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeMetricsLogger
Running org.apache.hadoop.hdfs.server.namenode.TestDeduplicationMap
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.094 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestDeduplicationMap
Running org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.479 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestAddBlock
Running org.apache.hadoop.hdfs.server.namenode.TestAllowFormat
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.161 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestAllowFormat
Running org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.595 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestHostsFiles
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.233 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestAuditLogAtDebug
Running org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.397 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Running org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionManager
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.359 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionManager
Running org.apache.hadoop.hdfs.server.namenode.TestCommitBlockSynchronization
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.714 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestCommitBlockSynchronization
Running org.apache.hadoop.hdfs.server.namenode.TestFsLimits
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.351 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFsLimits
Running org.apache.hadoop.hdfs.server.namenode.TestEditLog
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.42 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestEditLog
Running org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.439 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
Running org.apache.hadoop.hdfs.server.namenode.TestLeaseManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.686 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestLeaseManager
Running org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.978 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestCheckPointForSecurityTokens
Running org.apache.hadoop.hdfs.server.namenode.TestFSImageWithAcl
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.184 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFSImageWithAcl
Running org.apache.hadoop.hdfs.server.namenode.TestAuditLogs
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.955 sec - 
in org.apache.hadoop.hdfs.server.namenode.TestAuditLogs
Running org.apache.hadoop.hdfs.server.namenode.TestBackupNode
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.846 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestBackupNode
Running org.apache.hadoop.hdfs.server.namenode.TestSecureNameNode
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.072 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestSecureNameNode
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.14 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.353 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.161 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots
Running 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileWithSnapshotFeature
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.614 sec - in 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestFileWithSnapshotFeature
Running org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.753 sec - in 

Hadoop-Hdfs-trunk - Build # 2291 - Still Failing

2015-09-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2291/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6818 lines...]
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:04 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:07 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.077 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:11 h
[INFO] Finished at: 2015-09-10T10:25:28+00:00
[INFO] Final Memory: 72M/1123M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter8344789739543358185.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire699104625016937962tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_49578401082502287262tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2288
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4568975 bytes
Compression is 0.0%
Took 3.2 sec
Recording test results
Updating HDFS-8974
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-1
  target length: 50
  current item: 46
  done: false
] expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! 
Current writer state:[Circular Writer:
 directory: /test-1
 target length: 50
 current item: 46
 done: false
] expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)




Re: Even after HDFS-2856 JSVC References are require..?

2015-09-10 Thread Steve Loughran
SASL authenticates the DN on Hadoop 2.6+, but it requires the clients to be 
using the 2.6+ JARs; you can't use it on the 2.2-2.5 artifacts. 

> On 9 Sep 2015, at 18:45, Allen Wittenauer  wrote:
> 
> 
> FWIW, I still use and prefer jsvc, esp with the sudo trick in place.
> 
> On Sep 9, 2015, at 9:35 AM, Chris Nauroth  wrote:
> 
>> AFAIK, the majority of existing deployments still use jsvc to run a
>> secured DataNode.  It would be a backwards-incompatible change to remove
>> support for this deployment model.  For that reason, I would be -1 for
>> removing jsvc support, at least in the 2.x line.
>> 
>> 
>> It's something that could be considered for 3.x if we think the clean-up
>> benefit outweighs the incompatibility cost.  Before we do that, I'd prefer
>> to hear if end users are having success with the SASL deployment model.
>> Brahma, are you asking because you run clusters with the SASL approach?
>> If so, has it been working well?
>> 
>> --Chris Nauroth
>> 
>> 
>> 
>> 
>> On 9/9/15, 9:25 AM, "Haohui Mai"  wrote:
>> 
>>> JSVC is no longer required. It causes a lot of headaches in
>>> deployments. It's definitely a good target for clean ups.
>>> 
>>> ~Haohui
>>> 
>>> On Wed, Sep 9, 2015 at 5:24 AM, Brahma Reddy Battula
>>>  wrote:
 Hi All,
 
 AFAIK JSVC was added to secure the block tokens(..?).
 
 Since block tokens are secure now (SASL is used to secure the
 DataTransferProtocol, which transfers file block content between HDFS
 clients and DataNodes), then can we remove jsvc now (script files)..?
 
 
 
 Thanks & Regards
 
 Brahma Reddy Battula
 
 
 
>>> 
>> 
> 
> 
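
For anyone following along, a minimal hdfs-site.xml sketch of the SASL-based
(jsvc-free) DataNode setup discussed above, assuming 2.6+ JARs on both servers
and clients; the ports and the protection level here are illustrative, not
taken from this thread:

  <property>
    <!-- non-privileged port, so no jsvc/root is needed -->
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:10019</value>
  </property>
  <property>
    <!-- illustrative non-privileged HTTPS port -->
    <name>dfs.datanode.https.address</name>
    <value>0.0.0.0:10022</value>
  </property>
  <property>
    <!-- serve DN web traffic over HTTPS only -->
    <name>dfs.http.policy</name>
    <value>HTTPS_ONLY</value>
  </property>
  <property>
    <!-- SASL QOP for DataTransferProtocol: authentication, integrity or privacy -->
    <name>dfs.data.transfer.protection</name>
    <value>authentication</value>
  </property>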



[jira] [Created] (HDFS-9043) Doc updation for commands in HDFS Federation

2015-09-10 Thread J.Andreina (JIRA)
J.Andreina created HDFS-9043:


 Summary: Doc updation for commands in HDFS Federation
 Key: HDFS-9043
 URL: https://issues.apache.org/jira/browse/HDFS-9043
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor


1. Command is wrong
{noformat}
 $HADOOP_PREFIX/bin/hdfs dfsadmin -refreshNameNode <datanode_host_name>:<datanode_rpc_port>
{noformat}
Correct command is: *hdfs dfsadmin -refreshNameNodes* (with a trailing 's')

2. Command is wrong
{noformat}
 $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script $HADOOP_PREFIX/bin/hdfs start balancer
{noformat}
Correct command is: *hdfs balancer -policy*

3. Reference link to balancer for further details is wrong
{noformat}
Note that Balancer only balances the data and does not balance the namespace. 
For the complete command usage, see balancer.
{noformat}

4. URL is not proper
{noformat}
Similar to the Namenode status web page, when using federation a Cluster Web
Console is available to monitor the federated cluster at
http://<any_nn_host:port>/dfsclusterhealth.jsp.
{noformat}
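
Putting the corrections above together, the documented commands would
presumably read something like this (host and port are placeholders):
{noformat}
 $HADOOP_PREFIX/bin/hdfs dfsadmin -refreshNameNodes <datanode_host_name>:<datanode_rpc_port>
 $HADOOP_PREFIX/bin/hdfs balancer -policy datanode
{noformat}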




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9042) Update document for the Storage policy name

2015-09-10 Thread J.Andreina (JIRA)
J.Andreina created HDFS-9042:


 Summary: Update document for the Storage policy name
 Key: HDFS-9042
 URL: https://issues.apache.org/jira/browse/HDFS-9042
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: J.Andreina
Assignee: J.Andreina
Priority: Minor


Storage policy name :

Incorrect : "Lasy_Persist" 
Correct   : "Lazy_Persist" 
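
For reference, a usage sketch with the corrected policy name (the path here is
illustrative):
{noformat}
 hdfs storagepolicies -setStoragePolicy -path /lazy-data -policy Lazy_Persist
{noformat}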



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #352

2015-09-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/352/

Changes:

[aajisaka] HDFS-8974. Convert docs in xdoc format to markdown. Contributed by 
Masatake Iwasaki.

--
[...truncated 7817 lines...]
at org.junit.Assert.assertNull(Assert.java:646)
at org.junit.Assert.assertNull(Assert.java:656)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.checkMxBeanIsNull(TestRollingUpgrade.java:293)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.testRollback(TestRollingUpgrade.java:322)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.119 sec - in 
org.apache.hadoop.hdfs.TestReservedRawPaths
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.582 sec - in 
org.apache.hadoop.hdfs.TestListFilesInDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.828 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.978 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestGetConf
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.974 sec - in 
org.apache.hadoop.hdfs.tools.TestGetConf
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.621 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.841 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.359 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.633 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.46 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.409 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.477 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.43 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.896 sec - in 
org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0

Hadoop-Hdfs-trunk-Java8 - Build # 352 - Still Failing

2015-09-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/352/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8010 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:18 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:47 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.113 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:51 h
[INFO] Finished at: 2015-09-10T12:04:27+00:00
[INFO] Final Memory: 55M/600M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5369428 bytes
Compression is 0.0%
Took 4.7 sec
Recording test results
Updating HDFS-8974
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
4 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpointWithMultipleNN

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1848)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1835)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1828)
at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.shutdown(MiniQJMHACluster.java:160)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpoint(TestRollingUpgrade.java:601)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpointWithMultipleNN(TestRollingUpgrade.java:565)


REGRESSION:  
org.apache.hadoop.hdfs.TestRollingUpgrade.testDFSAdminRollingUpgradeCommands

Error Message:
expected null, but 

[jira] [Created] (HDFS-9044) If favored nodes are not available, then should fall back to regular BlockPlacementPolicyWithNodeGroup

2015-09-10 Thread J.Andreina (JIRA)
J.Andreina created HDFS-9044:


 Summary: If favored nodes are not available, then should fall back 
to regular BlockPlacementPolicyWithNodeGroup
 Key: HDFS-9044
 URL: https://issues.apache.org/jira/browse/HDFS-9044
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: J.Andreina
Assignee: J.Andreina


Current behavior:

While choosing targets for replicas, if favored nodes are not available, it
chooses a random node from the favored node's group instead of falling back
to the regular BlockPlacementPolicyWithNodeGroup.
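
For context, a hedged sketch of the client side of this code path, using the
favored-nodes create overload of DistributedFileSystem; the host names, ports,
and parameters below are illustrative only:
{code}
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;

Configuration conf = new Configuration();
DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
// Ask the NN to place replicas on these nodes if it can; this report is
// about what the NN does when it can't.
InetSocketAddress[] favored = {
    new InetSocketAddress("dn1.example.com", 50010),
    new InetSocketAddress("dn2.example.com", 50010)
};
FSDataOutputStream out = dfs.create(new Path("/data/f1"),
    FsPermission.getFileDefault(), true /* overwrite */, 4096,
    (short) 3, dfs.getDefaultBlockSize(), null /* progress */, favored);
out.close();
{code}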



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9045) DatanodeHttpServer is not setting Endpoint based on configured policy and not loading ssl configuration.

2015-09-10 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created HDFS-9045:
--

 Summary: DatanodeHttpServer is not setting Endpoint based on 
configured policy and not loading ssl configuration.
 Key: HDFS-9045
 URL: https://issues.apache.org/jira/browse/HDFS-9045
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bibin A Chundatt
Assignee: Surendra Singh Lilhore
Priority: Critical


The DN always starts in HTTP mode.
{code}
HttpServer2.Builder builder = new HttpServer2.Builder()
.setName("datanode")
.setConf(confForInfoServer)
.setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
.hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
.addEndpoint(URI.create("http://localhost:0"))
.setFindPort(true);
{code}
It should be based on the configured policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9046) Any Error during BPOfferService run can leads to Missing DN.

2015-09-10 Thread nijel (JIRA)
nijel created HDFS-9046:
---

 Summary: Any Error during BPOfferService run can leads to Missing 
DN.
 Key: HDFS-9046
 URL: https://issues.apache.org/jira/browse/HDFS-9046
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: nijel
Assignee: nijel


The cluster is in HA mode and each DN has only one block pool.

The issue is that after a failover, one DN is missing from the current active NN.
Upon analysis I found one exception in BPOfferService.run():

{noformat}
2015-08-21 09:02:11,190 | WARN  | DataNode: 
[[[DISK]file:/srv/BigData/hadoop/data5/dn/ 
[DISK]file:/srv/BigData/hadoop/data4/dn/]]  heartbeating to 
160-149-0-114/160.149.0.114:25000 | Unexpected exception in block pool Block 
pool BP-284203724-160.149.0.114-1438774011693 (Datanode Uuid 
15ce1dd7-227f-4fd2-9682-091aa6bc2b89) service to 
160-149-0-114/160.149.0.114:25000 | BPServiceActor.java:830
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1357)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.execute(FsDatasetAsyncDiskService.java:172)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService.deleteAsync(FsDatasetAsyncDiskService.java:221)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(FsDatasetImpl.java:1887)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActive(BPOfferService.java:669)
at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.processCommandFromActor(BPOfferService.java:616)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.processCommand(BPServiceActor.java:856)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:671)
at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
at java.lang.Thread.run(Thread.java:745)
{noformat}
After this, that particular BPOfferService stays down for the rest of the
runtime, and that NN will not have the details of this DN.

Similar issues are discussed in the following JIRAs:
https://issues.apache.org/jira/browse/HDFS-2882
https://issues.apache.org/jira/browse/HDFS-7714

Can we retry in this case too, with a larger interval, instead of shutting down
this BPOfferService? (A sketch of such a retry loop is below.)
I think that since these exceptions can occur randomly in a DN, it is not good
to keep the DN running while some NN does not have its info!
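
A hypothetical sketch of that retry; names like runLoop(), shouldRun(), and
MAX_BACKOFF_MS are made up, and the point is backing off instead of exiting:
{code}
long backoffMs = 5000;
final long MAX_BACKOFF_MS = 300000;
while (shouldRun()) {
  try {
    runLoop();                  // the existing offerService() work
    backoffMs = 5000;           // reset after a healthy iteration
  } catch (Throwable t) {       // includes OutOfMemoryError
    LOG.warn("BPOfferService failed; retrying in " + backoffMs + " ms", t);
    try {
      Thread.sleep(backoffMs);
    } catch (InterruptedException ie) {
      Thread.currentThread().interrupt();
      break;
    }
    backoffMs = Math.min(backoffMs * 2, MAX_BACKOFF_MS);
  }
}
{code}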




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Even after HDFS-2856 JSVC References are require..?

2015-09-10 Thread Chris Nauroth
Yes, I have a paragraph in the docs describing how someone would go about
migrating a jsvc-based deployment to a SASL-based deployment.

http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/SecureMode.html#Secure_DataNode


It's a non-trivial operation that starts by making sure everyone is on 2.6
first.  This includes client deployments, which are notoriously more
difficult to control than server deployments.

--Chris Nauroth




On 9/10/15, 1:21 AM, "Steve Loughran"  wrote:

>SASL authenticates the DN on Hadoop 2.6+, but it requires the clients to
>be using the 2.6+ JARs; you can't use it on the 2.2-2.5 artifacts.
>
>> On 9 Sep 2015, at 18:45, Allen Wittenauer  wrote:
>> 
>> 
>> FWIW, I still use and prefer jsvc, esp with the sudo trick in place.
>> 
>> On Sep 9, 2015, at 9:35 AM, Chris Nauroth 
>>wrote:
>> 
>>> AFAIK, the majority of existing deployments still use jsvc to run a
>>> secured DataNode.  It would be a backwards-incompatible change to
>>>remove
>>> support for this deployment model.  For that reason, I would be -1 for
>>> removing jsvc support, at least in the 2.x line.
>>> 
>>> 
>>> It's something that could be considered for 3.x if we think the
>>>clean-up
>>> benefit outweighs the incompatibility cost.  Before we do that, I'd
>>>prefer
>>> to hear if end users are having success with the SASL deployment model.
>>> Brahma, are you asking because you run clusters with the SASL approach?
>>> If so, has it been working well?
>>> 
>>> --Chris Nauroth
>>> 
>>> 
>>> 
>>> 
>>> On 9/9/15, 9:25 AM, "Haohui Mai"  wrote:
>>> 
 JSVC is no longer required. It causes a lot of headaches in
 deployments. It's definitely a good target for clean ups.
 
 ~Haohui
 
 On Wed, Sep 9, 2015 at 5:24 AM, Brahma Reddy Battula
  wrote:
> Hi All,
> 
> AFAIK JSVC was added to secure the block tokens(..?).
> 
> Since block tokens are secure now (SASL is used to secure the
> DataTransferProtocol, which transfers file block content between HDFS
> clients and DataNodes), then can we remove jsvc now (script files)..?
> 
> 
> 
> Thanks & Regards
> 
> Brahma Reddy Battula
> 
> 
> 
 
>>> 
>> 
>> 
>
>



[jira] [Created] (HDFS-9048) DistCp documentation is out-of-dated

2015-09-10 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-9048:


 Summary: DistCp documentation is out-of-dated
 Key: HDFS-9048
 URL: https://issues.apache.org/jira/browse/HDFS-9048
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai


There are a couple of issues with the current distcp document:

* It recommends the hftp / hsftp filesystems to copy data between different
Hadoop versions. hftp / hsftp have been deprecated in favor of webhdfs.
* If users are copying between Hadoop 2.x clusters, they can use the hdfs
protocol directly for better performance.
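
For example (host names and ports illustrative), the doc could instead show:
{noformat}
# between 2.x clusters: the native hdfs protocol on both sides
hadoop distcp hdfs://nn1:8020/src hdfs://nn2:8020/dst
# across incompatible versions: webhdfs instead of the deprecated hftp
hadoop distcp webhdfs://nn1:50070/src hdfs://nn2:8020/dst
{noformat}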




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #353

2015-09-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/353/

Changes:

[kihwal] HDFS-6763. Initialize file system-wide quota once on transitioning to 
active. Contributed by Kihwal Lee

--
[...truncated 7829 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.202 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.266 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.203 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.237 sec - in 
org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.082 sec - in 
org.apache.hadoop.hdfs.util.TestExactSizeInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.619 sec - in 
org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.714 sec - in 
org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.087 sec - 
in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.836 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.008 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.032 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.487 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 112.26 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.649 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.843 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.066 sec - in 
org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: 

Hadoop-Hdfs-trunk-Java8 - Build # 353 - Still Failing

2015-09-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/353/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8022 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:18 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:55 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.123 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:58 h
[INFO] Finished at: 2015-09-10T17:46:50+00:00
[INFO] Final Memory: 55M/659M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5371285 bytes
Compression is 0.0%
Took 3.8 sec
Recording test results
Updating HDFS-6763
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
4 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure$SlowWriter.checkReplication(TestReplaceDatanodeOnFailure.java:235)
at 
org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure(TestReplaceDatanodeOnFailure.java:154)


REGRESSION:  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlocksAreNotUnderreplicatedInSingleRack

Error Message:

BlockCollection$$EnhancerByMockitoWithCGLIB$$62cb02d5 cannot be returned by 
isRunning()
isRunning() should return boolean

Stack Trace:
org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
BlockCollection$$EnhancerByMockitoWithCGLIB$$62cb02d5 cannot be returned by 
isRunning()
isRunning() should return boolean
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:443)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSingleRackClusterIsSufficientlyReplicated(TestBlockManager.java:376)
at 

2.6.1 follow up activities [Was Re: [VOTE] Release Apache Hadoop 2.6.1 RC0]

2015-09-10 Thread Vinod Kumar Vavilapalli
Forking thread for these follow up activities.

One more thing to do - Updating CHANGES.txt entries to reflect the patch-move 
up to 2.6.1

Thanks
+Vinod


On Sep 9, 2015, at 6:00 PM, Vinod Kumar Vavilapalli wrote:

- Note that branch-2.6 which will be the base for 2.6.2 doesn't have these
fixes yet. Once 2.6.1 goes through, I plan to rebase branch-2.6 based off
2.6.1.
- Patches that got into 2.6.1 all the way from 2.8 are NOT in 2.7.2 yet,
this will be done as a followup.



Re: [VOTE] Release Apache Hadoop 2.6.1 RC0

2015-09-10 Thread Allen Wittenauer

On Sep 9, 2015, at 6:00 PM, Vinod Kumar Vavilapalli  wrote:

> - Note that branch-2.6 which will be the base for 2.6.2 doesn't have these
> fixes yet. Once 2.6.1 goes through, I plan to rebase branch-2.6 based off
> 2.6.1.

Isn’t there a risk that there are things in branch-2.6 that aren’t in 
2.6.1 then?



Build failed in Jenkins: Hadoop-Hdfs-trunk #2292

2015-09-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2292/

Changes:

[kihwal] HDFS-6763. Initialize file system-wide quota once on transitioning to 
active. Contributed by Kihwal Lee

--
[...truncated 8451 lines...]
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
at org.apache.hadoop.cli.TestAclCLI.tearDown(TestAclCLI.java:49)

Running org.apache.hadoop.cli.TestDeleteCLI
Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 6.947 sec <<< 
FAILURE! - in org.apache.hadoop.cli.TestDeleteCLI
testAll(org.apache.hadoop.cli.TestDeleteCLI)  Time elapsed: 6.734 sec  <<< 
ERROR!
java.lang.NoSuchMethodError: 
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
at org.apache.hadoop.cli.TestDeleteCLI.execute(TestDeleteCLI.java:84)

testAll(org.apache.hadoop.cli.TestDeleteCLI)  Time elapsed: 6.735 sec  <<< 
FAILURE!
java.lang.AssertionError: One of the tests failed. See the Detailed results to 
identify the command that failed
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
at org.apache.hadoop.cli.TestDeleteCLI.tearDown(TestDeleteCLI.java:66)

Running org.apache.hadoop.cli.TestXAttrCLI
Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 7.027 sec <<< 
FAILURE! - in org.apache.hadoop.cli.TestXAttrCLI
testAll(org.apache.hadoop.cli.TestXAttrCLI)  Time elapsed: 6.811 sec  <<< ERROR!
java.lang.NoSuchMethodError: 
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
at org.apache.hadoop.cli.TestXAttrCLI.execute(TestXAttrCLI.java:90)

testAll(org.apache.hadoop.cli.TestXAttrCLI)  Time elapsed: 6.812 sec  <<< 
FAILURE!
java.lang.AssertionError: One of the tests failed. See the Detailed results to 
identify the command that failed
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
at org.apache.hadoop.cli.TestXAttrCLI.tearDown(TestXAttrCLI.java:75)

Running org.apache.hadoop.cli.TestCacheAdminCLI
Tests run: 2, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 7.393 sec <<< 
FAILURE! - in org.apache.hadoop.cli.TestCacheAdminCLI
testAll(org.apache.hadoop.cli.TestCacheAdminCLI)  Time elapsed: 7.185 sec  <<< 
ERROR!
java.lang.NoSuchMethodError: 
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
at 
org.apache.hadoop.cli.TestCacheAdminCLI.execute(TestCacheAdminCLI.java:134)

testAll(org.apache.hadoop.cli.TestCacheAdminCLI)  Time elapsed: 7.186 sec  <<< 
FAILURE!
java.lang.AssertionError: One of the tests failed. See the Detailed results to 
identify the command that failed
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
at 
org.apache.hadoop.cli.TestCacheAdminCLI.tearDown(TestCacheAdminCLI.java:84)

Running org.apache.hadoop.TestRefreshCallQueue
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.262 sec - in 
org.apache.hadoop.TestRefreshCallQueue
Running org.apache.hadoop.fs.TestXAttr
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.091 sec - in 
org.apache.hadoop.fs.TestXAttr
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.397 sec - 
in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.886 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.446 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.353 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 

Hadoop-Hdfs-trunk - Build # 2292 - Still Failing

2015-09-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2292/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8644 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:14 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:11 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.087 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:15 h
[INFO] Finished at: 2015-09-10T18:04:25+00:00
[INFO] Final Memory: 56M/728M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2288
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4568665 bytes
Compression is 0.0%
Took 5.3 sec
Recording test results
Updating HDFS-6763
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
24 tests failed.
REGRESSION:  org.apache.hadoop.cli.TestAclCLI.testAll

Error Message:
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;

Stack Trace:
java.lang.NoSuchMethodError: 
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
at org.apache.hadoop.cli.TestAclCLI.execute(TestAclCLI.java:76)


REGRESSION:  org.apache.hadoop.cli.TestAclCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that 
failed

Stack Trace:
java.lang.AssertionError: One of the tests failed. See the Detailed results to 
identify the command that failed
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
at org.apache.hadoop.cli.TestAclCLI.tearDown(TestAclCLI.java:49)


REGRESSION:  org.apache.hadoop.cli.TestCacheAdminCLI.testAll

Error Message:
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;

Stack Trace:
java.lang.NoSuchMethodError: 
org.apache.hadoop.cli.util.CLICommand.getExecutor(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/hadoop/cli/util/CommandExecutor;
  

[jira] [Resolved] (HDFS-9045) DatanodeHttpServer is not setting Endpoint based on configured policy and not loading ssl configuration.

2015-09-10 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai resolved HDFS-9045.
--
Resolution: Invalid

This is invalid, as Netty serves as a reverse proxy for the Jetty server.
The Jetty server is supposed to listen on localhost only.

> DatanodeHttpServer is not setting Endpoint based on configured policy and not 
> loading ssl configuration.
> 
>
> Key: HDFS-9045
> URL: https://issues.apache.org/jira/browse/HDFS-9045
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Surendra Singh Lilhore
>Priority: Critical
>
> The DN always starts in HTTP mode.
> {code}
> HttpServer2.Builder builder = new HttpServer2.Builder()
> .setName("datanode")
> .setConf(confForInfoServer)
> .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
> .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
> .addEndpoint(URI.create("http://localhost:0"))
> .setFindPort(true);
> {code}
> It should be based on the configured policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.6.1 RC0

2015-09-10 Thread Vinod Kumar Vavilapalli
When I meant rebase, I meant “start with 2.6.1 + add patches that are already 
in branch-2.6”.

The other way around, “apply all 2.6.1 patches on today’s 2.6”, is very hard for
me. I just came out of a very long process of 153 cherry-pick+merge+test cycles
and cannot do that again. (Illustratively, the first approach is sketched below.)
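
With a made-up tag name and SHAs, that rebase would be roughly:

  git checkout -b branch-2.6 release-2.6.1
  git cherry-pick <sha-of-each-branch-2.6-only-change>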

Thanks
+Vinod


> On Sep 10, 2015, at 10:55 AM, Allen Wittenauer  wrote:
> 
> 
> On Sep 9, 2015, at 6:00 PM, Vinod Kumar Vavilapalli  
> wrote:
> 
>> - Note that branch-2.6 which will be the base for 2.6.2 doesn't have these
>> fixes yet. Once 2.6.1 goes through, I plan to rebase branch-2.6 based off
>> 2.6.1.
> 
>   Isn’t there a risk that there are things in branch-2.6 that aren’t in 
> 2.6.1 then?
> 
> 



[jira] [Created] (HDFS-9049) Make Datanode Netty reverse proxy port to be configurable

2015-09-10 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-9049:
---

 Summary: Make Datanode Netty reverse proxy port to be configurable
 Key: HDFS-9049
 URL: https://issues.apache.org/jira/browse/HDFS-9049
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Vinayakumar B
Assignee: Vinayakumar B


In DatanodeHttpServer.java, Netty is used as a reverse proxy, but it starts on
a random port bound to localhost. This port could be made configurable for
better deployments.
{code}
 HttpServer2.Builder builder = new HttpServer2.Builder()
.setName("datanode")
.setConf(confForInfoServer)
.setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
.hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
.addEndpoint(URI.create("http://localhost:0"))
.setFindPort(true);
{code}
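
A hedged sketch of what the change could look like; the config key name
dfs.datanode.http.internal-proxy.port is hypothetical here, not something this
JIRA has settled on:
{code}
// Read a fixed proxy port from configuration; 0 keeps today's
// ephemeral-port behavior. The key name below is a placeholder.
int proxyPort = conf.getInt("dfs.datanode.http.internal-proxy.port", 0);
HttpServer2.Builder builder = new HttpServer2.Builder()
    .setName("datanode")
    .setConf(confForInfoServer)
    .setACL(new AccessControlList(conf.get(DFS_ADMIN, " ")))
    .hostName(getHostnameForSpnegoPrincipal(confForInfoServer))
    .addEndpoint(URI.create("http://localhost:" + proxyPort))
    // only hunt for a free port when no fixed port was configured
    .setFindPort(proxyPort == 0);
{code}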



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk #2294

2015-09-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2294/

Changes:

[vinodkv] Updating all CHANGES.txt files to move entires from future releases 
into 2.6.1

--
[...truncated 7270 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.77 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.791 sec - in 
org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.27 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Running org.apache.hadoop.hdfs.TestRemoteBlockReader2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.542 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader2
Running org.apache.hadoop.hdfs.TestSetrepDecreasing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.802 sec - in 
org.apache.hadoop.hdfs.TestSetrepDecreasing
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.266 sec - in 
org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.528 sec - in 
org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.228 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.194 sec - in 
org.apache.hadoop.hdfs.TestLocalDFS
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.513 sec - in 
org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.162 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 16, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 125.033 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.108 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.167 sec - in 
org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.529 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.463 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.084 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.285 sec - in 
org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.649 sec - in 
org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.934 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.752 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.503 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.295 sec - in 
org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.646 sec - in 
org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.989 sec - in 
org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.859 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 9.501 sec - in 
org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running 

Hadoop-Hdfs-trunk - Build # 2294 - Still Failing

2015-09-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2294/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7463 lines...]
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [03:16 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  02:43 h]
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.059 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:46 h
[INFO] Finished at: 2015-09-11T01:53:26+00:00
[INFO] Final Memory: 58M/668M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2288
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4568655 bytes
Compression is 0.0%
Took 3.5 sec
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
3 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure$SlowWriter.checkReplication(TestReplaceDatanodeOnFailure.java:235)
at 
org.apache.hadoop.hdfs.TestReplaceDatanodeOnFailure.testReplaceDatanodeOnFailure(TestReplaceDatanodeOnFailure.java:154)
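
For context: the assertion compares an expected replica count (3) against the
number actually reported (2) after a datanode in the write pipeline was
replaced. A sketch of one way to check per-block replica counts from a test,
with illustrative names (not the test's own code):

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.LocatedBlock;
    import static org.junit.Assert.assertEquals;

    class ReplicationCheck {
      // Assert every block of the file reports `expected` replica locations;
      // the failure above is this kind of check seeing 2 instead of 3.
      static void checkReplication(DistributedFileSystem fs, Path f,
          int expected) throws IOException {
        long len = fs.getFileStatus(f).getLen();
        for (LocatedBlock lb : fs.getClient()
            .getLocatedBlocks(f.toString(), 0L, len).getLocatedBlocks()) {
          assertEquals(expected, lb.getLocations().length);
        }
      }
    }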


REGRESSION:  org.apache.hadoop.hdfs.web.TestWebHDFS.testLargeDirectory

Error Message:
Read timed out

Stack Trace:
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:687)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:633)
at 
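
The "Read timed out" above is raised by the JDK HTTP client while waiting for
the NameNode's response headers. A self-contained sketch of the failing
pattern against a placeholder WebHDFS endpoint (hypothetical host and path,
not the test's code):

    import java.net.HttpURLConnection;
    import java.net.URL;

    class WebHdfsTimeoutSketch {
      public static void main(String[] args) throws Exception {
        // A LISTSTATUS on a very large directory can keep the NameNode busy
        // past the client's read timeout.
        URL url = new URL(
            "http://namenode.example.com:50070/webhdfs/v1/big?op=LISTSTATUS");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(60000);
        conn.setReadTimeout(60000); // SocketTimeoutException fires from this
        // parseHTTPHeader() in the stack above blocks inside this call:
        System.out.println(conn.getResponseCode());
      }
    }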

Re: [VOTE] Release Apache Hadoop 2.6.1 RC0

2015-09-10 Thread Ted Yu
I pointed the master branch of hbase to 2.6.1 RC0, ran the unit test suite,
and the results are good.

Cheers

On Thu, Sep 10, 2015 at 5:16 PM, Sangjin Lee  wrote:

> I verified the signatures for both source and the binary tarballs. I
> started up a pseudo-distributed cluster, and tested simple apps such as
> sleep and terasort.
>
> I do see one issue with the RM UI where the sorting by id is broken. The
> table is not rendered in the expected id-descending order, and when I click
> the sort control, nothing happens. Sorting by other columns works fine.
>
> Is anyone else able to reproduce the issue? I checked 2.6.0, and it works
> fine on 2.6.0.
>
> On Wed, Sep 9, 2015 at 6:00 PM, Vinod Kumar Vavilapalli <
> vino...@apache.org>
> wrote:
>
> > Hi all,
> >
> > After a nearly month-long toil [1], with loads of help from Sangjin Lee
> > and Akira Ajisaka, and 153 commits later, I've created a release
> > candidate RC0 for hadoop-2.6.1.
> >
> > The RC is available at:
> > http://people.apache.org/~vinodkv/hadoop-2.6.1-RC0/
> >
> > The RC tag in git is: release-2.6.1-RC0
> >
> > The maven artifacts are available via repository.apache.org at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1020
> >
> > Some notes from our release process
> >  - Sangjin and I moved out a bunch of items pending from 2.6.1 [2] -
> > non-committed but desired patches. 2.6.1 is already big as is, and is late
> > by any standard; we can definitely include them in the next release.
> >  - The 2.6.1 wiki page [3] captures some (but not all) of the context of
> > the patches that we pushed in.
> >  - Given the number of fixes pushed [4] in, we had to make a bunch of
> > changes to our original plan - we added a few improvements that helped us
> > backport patches more easily (or in many cases made backports possible),
> > and we dropped a few that didn't make sense (HDFS-7831, HDFS-7926,
> > HDFS-7676, HDFS-7611, HDFS-7843, HDFS-8850).
> >  - I ran all the unit tests, which (surprisingly?) passed. (Except for
> > one, which pointed out a missing fix, HDFS-7552.)
> >
> > As discussed before [5]
> >  - This release is the first point release after 2.6.0
> >  - I’d like to use this as a starting release for 2.6.2 in a few weeks
> > and then follow up with more of these.
> >
> > Please try the release and vote; the vote will run for the usual 5 days.
> >
> > Thanks,
> > Vinod
> >
> > [1] Hadoop 2.6.1 Release process thread:
> > http://markmail.org/thread/wkbgkxkhntx5tlux
> > [2] 2.6.1 Pending tickets:
> > https://issues.apache.org/jira/issues/?filter=12331711
> > [3] 2.6.1 Wiki page:
> > https://wiki.apache.org/hadoop/Release-2.6.1-Working-Notes
> > [4] List of 2.6.1 patches pushed:
> >
> https://issues.apache.org/jira/issues/?jql=fixVersion%20%3D%202.6.1%20and%20labels%20%3D%20%222.6.1-candidate%22
> > [5] Planning Hadoop 2.6.1 release:
> > http://markmail.org/thread/sbykjn5xgnksh6wg
> >
> > PS:
> >  - Note that branch-2.6 which will be the base for 2.6.2 doesn't have
> > these fixes yet. Once 2.6.1 goes through, I plan to rebase branch-2.6
> > based off 2.6.1.
> >  - Patches that got into 2.6.1 all the way from 2.8 are NOT in 2.7.2 yet;
> > this will be done as a followup.
> >
> >
>


Re: [VOTE] Release Apache Hadoop 2.6.1 RC0

2015-09-10 Thread Sangjin Lee
I verified the signatures for both source and the binary tarballs. I
started up a pseudo-distributed cluster, and tested simple apps such as
sleep and terasort.

I do see one issue with the RM UI where the sorting by id is broken. The
table is not rendered in the expected id-descending order, and when I click
the sort control, nothing happens. Sorting by other columns works fine.

Is anyone else able to reproduce the issue? I checked 2.6.0, and it works
fine on 2.6.0.

On Wed, Sep 9, 2015 at 6:00 PM, Vinod Kumar Vavilapalli 
wrote:

> Hi all,
>
> After a nearly month-long toil [1], with loads of help from Sangjin Lee
> and Akira Ajisaka, and 153 commits later, I've created a release candidate
> RC0 for hadoop-2.6.1.
>
> The RC is available at:
> http://people.apache.org/~vinodkv/hadoop-2.6.1-RC0/
>
> The RC tag in git is: release-2.6.1-RC0
>
> The maven artifacts are available via repository.apache.org at
> https://repository.apache.org/content/repositories/orgapachehadoop-1020
>
> Some notes from our release process
>  - Sangjin and I moved out a bunch of items pending from 2.6.1 [2] -
> non-committed but desired patches. 2.6.1 is already big as is, and is late
> by any standard; we can definitely include them in the next release.
>  - The 2.6.1 wiki page [3] captures some (but not all) of the context of
> the patches that we pushed in.
>  - Given the number of fixes pushed [4] in, we had to make a bunch of
> changes to our original plan - we added a few improvements that helped us
> backport patches more easily (or in many cases made backports possible), and we
> dropped a few that didn't make sense (HDFS-7831, HDFS-7926, HDFS-7676,
> HDFS-7611, HDFS-7843, HDFS-8850).
>  - I ran all the unit tests, which (surprisingly?) passed. (Except for one,
> which pointed out a missing fix, HDFS-7552.)
>
> As discussed before [5]
>  - This release is the first point release after 2.6.0
>  - I’d like to use this as a starting release for 2.6.2 in a few weeks
> and then follow up with more of these.
>
> Please try the release and vote; the vote will run for the usual 5 days.
>
> Thanks,
> Vinod
>
> [1] Hadoop 2.6.1 Release process thread:
> http://markmail.org/thread/wkbgkxkhntx5tlux
> [2] 2.6.1 Pending tickets:
> https://issues.apache.org/jira/issues/?filter=12331711
> [3] 2.6.1 Wiki page:
> https://wiki.apache.org/hadoop/Release-2.6.1-Working-Notes
> [4] List of 2.6.1 patches pushed:
> https://issues.apache.org/jira/issues/?jql=fixVersion%20%3D%202.6.1%20and%20labels%20%3D%20%222.6.1-candidate%22
> [5] Planning Hadoop 2.6.1 release:
> http://markmail.org/thread/sbykjn5xgnksh6wg
>
> PS:
>  - Note that branch-2.6 which will be the base for 2.6.2 doesn't have
> these fixes yet. Once 2.6.1 goes through, I plan to rebase branch-2.6 based
> off 2.6.1.
>  - Patches that got into 2.6.1 all the way from 2.8 are NOT in 2.7.2 yet;
> this will be done as a followup.
>
>


Hadoop-Hdfs-trunk-Java8 - Build # 355 - Still Failing

2015-09-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/355/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7633 lines...]
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [06:52 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  02:23 h]
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.060 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:30 h
[INFO] Finished at: 2015-09-11T01:33:38+00:00
[INFO] Final Memory: 75M/944M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter1326242586385330598.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire3767332761354031429tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_208839337214404426tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 5370156 bytes
Compression is 0.0%
Took 6.6 sec
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
2 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.testBlocksAreNotUnderreplicatedInSingleRack

Error Message:

BlockCollection$$EnhancerByMockitoWithCGLIB$$54fea779 cannot be returned by 
isRunning()
isRunning() should return boolean

Stack Trace:
org.mockito.exceptions.misusing.WrongTypeOfReturnValue: 
BlockCollection$$EnhancerByMockitoWithCGLIB$$54fea779 cannot be returned by 
isRunning()
isRunning() should return boolean
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.addBlockOnNodes(TestBlockManager.java:443)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockManager.doTestSingleRackClusterIsSufficientlyReplicated(TestBlockManager.java:376)
at 
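
Mockito's WrongTypeOfReturnValue typically means a stubbing was still in
flight when another mock was invoked, so the pending return value got
attached to the wrong method -- here, a BlockCollection to boolean
isRunning(). A stand-in reproduction with hypothetical interfaces (not the
actual TestBlockManager code):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    class MockitoMisuseSketch {
      interface Namesystem { boolean isRunning(); }       // stand-ins, not
      interface Blocks { Object getBlockCollection(); }   // real HDFS types

      public static void main(String[] args) {
        Namesystem ns = mock(Namesystem.class);
        Blocks blocks = mock(Blocks.class);

        // Broken: the thenReturn argument invokes another mock while the
        // stubbing of isRunning() is unfinished, so Mockito tries to make
        // isRunning() return the other mock's value:
        //   when(ns.isRunning()).thenReturn(blocks.getBlockCollection() != null);

        // Safe: finish each stubbing with a plain value before touching
        // any other mock.
        when(ns.isRunning()).thenReturn(true);
        when(blocks.getBlockCollection()).thenReturn(new Object());
      }
    }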

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #355

2015-09-10 Thread Apache Jenkins Server
See 

Changes:

[vinodkv] Updating all CHANGES.txt files to move entries from future releases 
into 2.6.1

--
[...truncated 7440 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.627 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeUUID
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataStorage
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.496 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.268 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestBPOfferService
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.578 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBPOfferService
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 97.654 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.897 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.563 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestStartSecureDataNode
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.104 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestStartSecureDataNode
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeExit
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.932 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeExit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataXceiverLazyPersistHint
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.026 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataXceiverLazyPersistHint
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeStartupOptions
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.526 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDatanodeStartupOptions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestBpServiceActorScheduler
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.421 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBpServiceActorScheduler
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.687 sec - 
in org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataDirs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.749 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataDirs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.466 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 

Re: 2.6.1 follow up activities [Was Re: [VOTE] Release Apache Hadoop 2.6.1 RC0]

2015-09-10 Thread Vinod Kumar Vavilapalli
I’ve finished this one too. Trunk, branch-2, branch-2.7 and branch-2.6 reflect 
the backports. I haven’t touched the point branches like 2.7.1.

Thanks
+Vinod


On Sep 10, 2015, at 10:48 AM, Vinod Kumar Vavilapalli wrote:

Updating CHANGES.txt entries to reflect the patch-move up to 2.6.1



Re: 2.6.1 follow up activities [Was Re: [VOTE] Release Apache Hadoop 2.6.1 RC0]

2015-09-10 Thread Vinod Kumar Vavilapalli
I have just finished this.

Thanks
+Vinod

On Sep 10, 2015, at 10:48 AM, Vinod Kumar Vavilapalli wrote:

- Patches that got into 2.6.1 all the way from 2.8 are NOT in 2.7.2 yet;
this will be done as a followup.



[jira] [Created] (HDFS-9050) updatePipeline RPC call should only take new GS as input

2015-09-10 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-9050:
---

 Summary: updatePipeline RPC call should only take new GS as input
 Key: HDFS-9050
 URL: https://issues.apache.org/jira/browse/HDFS-9050
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.1
Reporter: Zhe Zhang
Assignee: Zhe Zhang


The only usage of the call is in {{DataStreamer#updatePipeline}}, where 
{{newBlock}} differs from the current {{block}} only in GS.

Basically, the RPC call is not supposed to update the {{poolID}}, {{ID}}, and 
{{numBytes}} of the block on the NN.
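
In other words, the NN can derive the new block from the one it already has
plus the new generation stamp. A hedged sketch of that derivation
(illustrative only, not the eventual patch):

    import org.apache.hadoop.hdfs.protocol.ExtendedBlock;

    class NewGsSketch {
      // Same pool id, block id and length as the current block; only the
      // generation stamp advances, so newGS is the only real input needed.
      static ExtendedBlock withNewGS(ExtendedBlock b, long newGS) {
        return new ExtendedBlock(b.getBlockPoolId(), b.getBlockId(),
            b.getNumBytes(), newGS);
      }
    }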



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9051) webhdfs should support recursive list

2015-09-10 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-9051:
--

 Summary: webhdfs should support recursive list
 Key: HDFS-9051
 URL: https://issues.apache.org/jira/browse/HDFS-9051
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Reporter: Allen Wittenauer


There currently doesn't appear to be a way to recursively list a directory via 
webhdfs without making an individual liststatus call per dir.
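
What a client has to do today, sketched against the generic FileSystem API
over a placeholder webhdfs URI -- each directory level costs one LISTSTATUS
round trip:

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RecursiveList {
      static void list(FileSystem fs, Path dir) throws IOException {
        for (FileStatus st : fs.listStatus(dir)) { // one call per directory
          System.out.println(st.getPath());
          if (st.isDirectory()) {
            list(fs, st.getPath());                // recurse: another call
          }
        }
      }

      public static void main(String[] args) throws IOException {
        // "webhdfs://namenode.example.com:50070" is a placeholder endpoint.
        FileSystem fs = FileSystem.get(
            URI.create("webhdfs://namenode.example.com:50070"),
            new Configuration());
        list(fs, new Path(args.length > 0 ? args[0] : "/"));
      }
    }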



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9052) deleteSnapshot runs into AssertionError

2015-09-10 Thread Alex Ivanov (JIRA)
Alex Ivanov created HDFS-9052:
-

 Summary: deleteSnapshot runs into AssertionError
 Key: HDFS-9052
 URL: https://issues.apache.org/jira/browse/HDFS-9052
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Alex Ivanov


CDH 5.0.5 upgraded from CDH 5.0.0 (Hadoop 2.3)

Upon deleting a snapshot, we run into the following assertion error. The 
scenario is as follows:
1. We have a program that deletes snapshots in reverse chronological order.
2. The program deletes a couple of hundred snapshots successfully but runs into 
the following exception:
java.lang.AssertionError: Element already exists: 
element=useraction.log.crypto, DELETED=[useraction.log.crypto]
3. There seems to be an issue with that snapshot which causes a file that 
normally gets overwritten in every snapshot to be added to the SnapshotDiff 
delete queue twice (see the stand-in sketch after this list).
4. Once deleteSnapshot is run on the problematic snapshot and the Namenode is 
restarted, it cannot start again until the transaction is removed from the 
EditLog.
5. Sometimes the bad snapshot can be deleted, but the prior snapshot seems to 
"inherit" the same issue.
6. The error below is from the Namenode starting up, when the DELETE_SNAPSHOT 
transaction is replayed from the EditLog.
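
A stand-in illustration of the invariant that trips here -- this is not
HDFS's Diff class, just a minimal model of its deleted-list, which rejects
recording the same child as deleted twice:

    import java.util.TreeSet;

    class MiniDiff {
      private final TreeSet<String> deleted = new TreeSet<String>();

      void delete(String name) {
        // Mirrors the Diff.insert assertion: a second delete of
        // "useraction.log.crypto" fails exactly like the error below.
        if (!deleted.add(name)) {
          throw new AssertionError("Element already exists: element=" + name
              + ", DELETED=" + deleted);
        }
      }
    }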

2015-09-01 22:59:59,140 INFO  [IPC Server handler 0 on 8022] BlockStateChange 
(BlockManager.java:logAddStoredBlock(2342)) - BLOCK* addStoredBlock: blockMap 
updated: 10.52.209.77:1004 is added to 
blk_1080833995_7093259{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-16de62e5-f6e2-4ea7-aad9-f8567bded7d7:NORMAL|FINALIZED]]}
 size 0
2015-09-01 22:59:59,140 INFO  [IPC Server handler 0 on 8022] BlockStateChange 
(BlockManager.java:logAddStoredBlock(2342)) - BLOCK* addStoredBlock: blockMap 
updated: 10.52.209.77:1004 is added to 
blk_1080833996_7093260{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[[DISK]DS-1def2b07-d87f-49dd-b14f-ef230342088d:NORMAL|FINALIZED]]}
 size 0
2015-09-01 22:59:59,141 ERROR [IPC Server handler 0 on 8022] 
namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(232)) - 
Encountered exception on operation DeleteSnapshotOp 
[snapshotRoot=/data/tenants/pdx-svt.baseline84/wddata, 
snapshotName=s2015022614_maintainer_soft_del, 
RpcClientId=7942c957-a7cf-44c1-880d-6eea690e1b19, RpcCallId=1]
2015-09-01 22:59:59,141 ERROR [IPC Server handler 0 on 8022] 
namenode.FSEditLogLoader (FSEditLogLoader.java:loadEditRecords(232)) - 
Encountered exception on operation DeleteSnapshotOp 
[snapshotRoot=/data/tenants/pdx-svt.baseline84/wddata, 
snapshotName=s2015022614_maintainer_soft_del, 
RpcClientId=7942c957-a7cf-44c1-880d-6eea690e1b19, RpcCallId=1]
java.lang.AssertionError: Element already exists: 
element=useraction.log.crypto, DELETED=[useraction.log.crypto]
at org.apache.hadoop.hdfs.util.Diff.insert(Diff.java:193)
at org.apache.hadoop.hdfs.util.Diff.delete(Diff.java:239)
at org.apache.hadoop.hdfs.util.Diff.combinePosterior(Diff.java:462)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$2.initChildren(DirectoryWithSnapshotFeature.java:293)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature$DirectoryDiff$2.iterator(DirectoryWithSnapshotFeature.java:303)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDeletedINode(DirectoryWithSnapshotFeature.java:531)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:823)
at 
org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:714)
at 
org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtreeRecursively(INodeDirectory.java:684)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature.cleanDirectory(DirectoryWithSnapshotFeature.java:830)
at 
org.apache.hadoop.hdfs.server.namenode.INodeDirectory.cleanSubtree(INodeDirectory.java:714)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.INodeDirectorySnapshottable.removeSnapshot(INodeDirectorySnapshottable.java:341)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.SnapshotManager.deleteSnapshot(SnapshotManager.java:238)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:667)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:224)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:133)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:802)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:783)



--
This message was sent by Atlassian JIRA

Build failed in Jenkins: Hadoop-Hdfs-trunk #2293

2015-09-10 Thread Apache Jenkins Server
See 

Changes:

[jlowe] MAPREDUCE-6474. ShuffleHandler can possibly exhaust nodemanager file 
descriptors. Contributed by Kuhu Shukla

[wangda] YARN-4106. NodeLabels for NM in distributed mode is not updated even 
after clusterNodelabel addition in RM. (Bibin A Chundatt via wangda)

--
[...truncated 7243 lines...]
Running org.apache.hadoop.hdfs.TestKeyProviderCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.694 sec - in 
org.apache.hadoop.hdfs.TestKeyProviderCache
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.584 sec - 
in org.apache.hadoop.hdfs.TestBlockReaderLocal
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.653 sec - in 
org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.352 sec - in 
org.apache.hadoop.hdfs.TestGetBlocks
Running org.apache.hadoop.hdfs.TestClientBlockVerification
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.827 sec - in 
org.apache.hadoop.hdfs.TestClientBlockVerification
Running org.apache.hadoop.hdfs.TestModTime
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.116 sec - in 
org.apache.hadoop.hdfs.TestModTime
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.138 sec - 
in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.666 sec - in 
org.apache.hadoop.hdfs.TestWriteRead
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.295 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.924 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestDFSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.453 sec - in 
org.apache.hadoop.hdfs.TestDFSOutputStream
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.084 sec - in 
org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.302 sec - in 
org.apache.hadoop.hdfs.TestSeekBug
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.414 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.064 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Running org.apache.hadoop.hdfs.security.TestDelegationToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.9 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationToken
Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.548 sec - in 
org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.952 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Running org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.083 sec - in 
org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Running org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.093 sec - 
in org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.943 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
blockLengthHintIsPropagated(org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint)
  Time elapsed: 2.818 sec  <<< ERROR!
java.lang.NoClassDefFoundError: 
org/apache/hadoop/security/authentication/server/AuthenticationFilter
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at 

Hadoop-Hdfs-trunk - Build # 2293 - Still Failing

2015-09-10 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2293/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7436 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [03:27 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  02:41 h]
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.108 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:44 h
[INFO] Finished at: 2015-09-10T21:20:47+00:00
[INFO] Final Memory: 71M/817M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter3799858672795988080.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire3054192571228147648tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_3006729246193393403922tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2288
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4568697 bytes
Compression is 0.0%
Took 3 sec
Recording test results
Updating MAPREDUCE-6474
Updating YARN-4106
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
7 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestFileStatus.testGetFileStatusOnDir

Error Message:
org/apache/hadoop/ipc/ProtobufHelper

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/ipc/ProtobufHelper
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:543)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at