Re: hadoop-hdfs-client splitoff is going to break code

2015-10-20 Thread Steve Loughran

> On 19 Oct 2015, at 22:01, Colin P. McCabe  wrote:
> 
> Thanks for being proactive here, Steve.

no, just building downstream things. Caught a failure of Spark to build against 
trunk too, but that's a one-liner to import the non-deprecated Auth Exception.

>  I think this is a good example of
> why this change should have been done in a branch rather than having been
> done directly in trunk.

Given the size of the change, I'm now convinced that yes, the hadoop-client 
split should have been in a branch. What it offers there is the ability to 
choose when to merge in. As it is, any Hadoop 2.8 release will have this 
feature. It's going to be visible, and that's going to add more testing. We 
should expect this to cause things to surface in the release process. We also 
need to consider what's going to be the policy if 2.8.0 turns out to break 
something: what are we prepared to roll back?



Hadoop-Hdfs-trunk-Java8 - Build # 516 - Still Failing

2015-10-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/516/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7634 lines...]
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:29 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:56 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.075 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:00 h
[INFO] Finished at: 2015-10-20T06:15:28+00:00
[INFO] Final Memory: 55M/722M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter4467853859780448322.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire864167644954754504tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_4391408018836039317013tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-12493
Updating HDFS-9250
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
6 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancer

Error Message:
File /tmp.txt could only be replicated to 0 nodes instead of minReplication 
(=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this 
operation.
 at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1734)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:298)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2448)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:730)
 at 

Build failed in Jenkins: Hadoop-Hdfs-trunk #2454

2015-10-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2454/

Changes:

[Arun Suresh] YARN-4270. Limit application resource reservation on nodes for

[yliu] HDFS-9208. Disabling atime may fail clients like distCp. (Kihwal Lee via

--
[...truncated 6736 lines...]
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.365 sec - 
in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.107 sec - 
in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.775 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 159.223 sec - 
in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.439 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 105.129 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.251 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.488 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.4 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.995 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.164 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.97 sec - in 
org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 207.121 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.436 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.882 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.629 sec - in 
org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.956 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.301 sec - in 
org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.785 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.811 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.139 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.556 sec - 
in org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.489 sec - in 
org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tools.TestHdfsConfigFields
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.702 sec - in 
org.apache.hadoop.tools.TestHdfsConfigFields
Running org.apache.hadoop.tools.TestTools
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.659 sec - in 
org.apache.hadoop.tools.TestTools
Running org.apache.hadoop.tools.TestJMXGet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.518 sec - in 
org.apache.hadoop.tools.TestJMXGet
Running org.apache.hadoop.cli.TestErasureCodingCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.315 sec 

Hadoop-Hdfs-trunk - Build # 2454 - Still Failing

2015-10-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2454/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6929 lines...]

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:22 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:54 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.065 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:58 h
[INFO] Finished at: 2015-10-20T09:30:58+00:00
[INFO] Final Memory: 55M/713M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-4270
Updating HDFS-9208
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
5 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpointWithMultipleNN

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1895)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.shutdown(MiniQJMHACluster.java:160)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpoint(TestRollingUpgrade.java:601)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpointWithMultipleNN(TestRollingUpgrade.java:565)


FAILED:  
org.apache.hadoop.hdfs.TestRollingUpgrade.testDFSAdminRollingUpgradeCommands

Error Message:
expected null, but 

[jira] [Created] (HDFS-9268) JVM crashes when attempting to update a file in fuse file system using vim

2015-10-20 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9268:
-

 Summary: JVM crashes when attempting to update a file in fuse file 
system using vim
 Key: HDFS-9268
 URL: https://issues.apache.org/jira/browse/HDFS-9268
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


The JVM crashes when users attempt to use vi to update a file on a fuse file 
system with insufficient permission. (I use CDH's hadoop-fuse-dfs wrapper script 
to reproduce the bug, but the same bug is reproducible in trunk.)

The root cause is a segfault in a fuse-dfs method: the gdb backtrace below shows 
the crash in a strcmp inside hdfsConnTree_RB_FIND, reached from fuseConnect 
during dfs_chown.

To reproduce it do as follows:
mkdir /mnt/fuse
chmod 777 /mnt/fuse
ulimit -c unlimited   # to enable coredump
hadoop-fuse-dfs -odebug hdfs://localhost:9000/fuse /mnt/fuse
touch /mnt/fuse/y
chmod 600 /mnt/fuse/y
vim /mnt/fuse/y
(in vim, :w to save the file)

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x003b82f27ad6, pid=26606, tid=140079005689600
#
# JRE version: Java(TM) SE Runtime Environment (7.0_79-b15) (build 1.7.0_79-b15)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# C  [libc.so.6+0x127ad6]  __tls_get_addr@@GLIBC_2.3+0x127ad6
#
# Core dump written. Default location: /home/weichiu/core or core.26606
#
# An error report file with more information is saved as:
# /home/weichiu/hs_err_pid26606.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
/usr/bin/hadoop-fuse-dfs: line 29: 26606 Aborted (core dumped) 
env CLASSPATH="${CLASSPATH}" ${HADOOP_HOME}/bin/fuse_dfs $@

===
The coredump shows the segfault comes from:
(gdb) bt
#0  0x003b82e328e5 in raise () from /lib64/libc.so.6
#1  0x003b82e340c5 in abort () from /lib64/libc.so.6
#2  0x7f66fc924d75 in os::abort(bool) () from 
/etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
#3  0x7f66fcaa76d7 in VMError::report_and_die() () from 
/etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
#4  0x7f66fc929c8f in JVM_handle_linux_signal () from 
/etc/alternatives/jre/jre/lib/amd64/server/libjvm.so
#5  <signal handler called>
#6  0x003b82f27ad6 in __strcmp_sse42 () from /lib64/libc.so.6
#7  0x004039a0 in hdfsConnTree_RB_FIND ()
#8  0x00403e8f in fuseConnect ()
#9  0x004046db in dfs_chown ()
#10 0x7f66fcf8f6d2 in ?? () from /lib64/libfuse.so.2
#11 0x7f66fcf940d1 in ?? () from /lib64/libfuse.so.2
#12 0x7f66fcf910ef in ?? () from /lib64/libfuse.so.2
#13 0x003b83207851 in start_thread () from /lib64/libpthread.so.0
#14 0x003b82ee894d in clone () from /lib64/libc.so.6





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9269) Need to update the documentation and wrapper for hdfs-dfs

2015-10-20 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9269:
-

 Summary: Need to update the documentation and wrapper for hdfs-dfs
 Key: HDFS-9269
 URL: https://issues.apache.org/jira/browse/HDFS-9269
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


To reproduce the bug in HDFS-9268, I followed the wiki and the docs and read the 
wrapper script of hdfs-fuse, but found them badly outdated. (The wrapper was 
last updated four years ago, and the Hadoop project layout has changed 
dramatically since then.) I am creating this JIRA to track the status of the 
update.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9270) TestShortCircuitLocalRead should not leave socket after unit test

2015-10-20 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HDFS-9270:
--

 Summary: TestShortCircuitLocalRead should not leave socket after 
unit test
 Key: HDFS-9270
 URL: https://issues.apache.org/jira/browse/HDFS-9270
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.7.1
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor


Unix domain sockets created by TestShortCircuitLocalRead and 
TestTracingShortCircuitLocalRead are not removed before finishing the tests.
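
As an aside (not part of the original report): a minimal teardown sketch of the 
kind of cleanup presumably needed, assuming the tests keep their sockets under a 
TemporarySocketDirectory as other HDFS short-circuit tests do. The class name is 
illustrative only.
{code}
import java.io.IOException;
import org.apache.hadoop.net.unix.DomainSocket;
import org.apache.hadoop.net.unix.TemporarySocketDirectory;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class ShortCircuitSocketCleanupSketch {
  private static TemporarySocketDirectory sockDir;

  @BeforeClass
  public static void init() {
    // All domain sockets created by the test live under this temp directory.
    sockDir = new TemporarySocketDirectory();
    DomainSocket.disableBindPathValidation();
  }

  @AfterClass
  public static void shutdown() throws IOException {
    // Deletes the directory, removing any UNIX domain socket files left behind.
    sockDir.close();
  }
}
{code}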



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9272) Implement a unix-like cat utility

2015-10-20 Thread James Clampffer (JIRA)
James Clampffer created HDFS-9272:
-

 Summary: Implement a unix-like cat utility
 Key: HDFS-9272
 URL: https://issues.apache.org/jira/browse/HDFS-9272
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer
Priority: Minor


Implement the basic functionality of "cat" and have it build as a separate 
executable.

Two reasons for this:
We don't have any real integration tests at the moment, so something simple to 
verify that the library actually works against a real cluster is useful.

Eventually I'll make more utilities like stat, mkdir, etc. Once there are 
enough of them, it will be simple to make a C++ implementation of the hadoop fs 
command-line interface that doesn't take the latency hit of spinning up a JVM.
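
For comparison (not from the JIRA): the JVM-based equivalent such a native cat 
would replace, as a hedged sketch against the existing Java FileSystem API. The 
class name and the 4 KB buffer size are arbitrary.
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Usage: Cat hdfs://host:port/path/to/file
public class Cat {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(URI.create(args[0]), new Configuration());
    try (FSDataInputStream in = fs.open(new Path(args[0]))) {
      IOUtils.copyBytes(in, System.out, 4096, false);  // stream the file to stdout
    }
  }
}
{code}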



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Planning for Apache Hadoop 2.6.2

2015-10-20 Thread Sangjin Lee
Another friendly reminder that I'll be cutting the branch and creating the
RC soon. I'm targeting tomorrow. Thanks!

Sangjin

On Mon, Oct 12, 2015 at 11:40 AM, Sangjin Lee  wrote:

> Hi all,
>
> We are targeting next week to create the first RC for 2.6.2. Currently
> there are 14 issues that are committed to branch-2.6. If you have bug fixes
> that were made to branch-2 and/or branch-2.7 that are applicable to 2.6.x
> and would improve the quality of those releases, please take time to review
> them and commit them to branch-2.6.
>
> Also, if you have JIRAs that are targeted to 2.6.2 and are close to being
> done, this might be a good time to complete them.
>
> Do let me know if you have any questions.
>
> Thanks,
> Sangjin
>
> On Sat, Sep 26, 2015 at 9:19 AM, Sangjin Lee  wrote:
>
>> I have updated branch-2.6 and force pushed the branch. I checked the
>> branch out from scratch and tested building it.
>>
>> Branch-2.6 is back open. If you have a patch ready for 2.6.2, double
>> check if it applies cleanly before committing to branch-2.6.
>>
>> Thanks,
>> Sangjin
>>
>> On Fri, Sep 25, 2015 at 4:42 PM, Sangjin Lee  wrote:
>>
>>> Per Vinod's suggestion, in order to reduce the amount of movement I'll
>>> pick commits from branch-2.6 onto the tip of branch-2.6.1 rather than the
>>> other way around. This means I'll need to move branch-2.6 and force push
>>> that change.
>>>
>>> Could you please hold off on committing to branch-2.6 until I am done
>>> relocating the branch? I'll let you know when I'm done with that exercise.
>>> I expect I'll be done with this in 24 hours or so. Let me know if you have
>>> any concerns.
>>>
>>> Thanks,
>>> Sangjin
>>>
>>> On Fri, Sep 25, 2015 at 12:53 PM, Sangjin Lee  wrote:
>>>
 Thanks folks. I'll get started on the items Vinod mentioned soon.

 If you have something you'd like to push for inclusion in 2.6.2, please
 mark the target version as 2.6.2.

 I'd like to ask one more thing on top of that. It would be AWESOME if
 you can check if it can be applied cleanly on top of 2.6.1, and if not,
 provide an updated patch suitable for 2.6.1. This will help speed up the
 2.6.2 release process tremendously. If the person who's recommending the
 JIRA for 2.6.2 inclusion can do it, that would be great. Help from the
 original contributor might be helpful as well. Thanks for your cooperation!

 Regards,
 Sangjin

 On Thu, Sep 24, 2015 at 9:39 PM, Akira AJISAKA <
 ajisa...@oss.nttdata.co.jp> wrote:

> Thanks Vinod and Sangjin for releasing 2.6.1 and starting discussion
> for 2.6.2!
>
> +1. If there's anything I can help you with, please tell me.
>
> Thanks,
> Akira
>
>
> On 9/25/15 13:23, Vinayakumar B wrote:
>
>> Thanks Vinod and Sangjin for making 2.6.1 release possible.
>>
>> Apologies for not getting time to verify and vote for the release.
>>
>> I will also be available to help for 2.6.2 if anything required.
>>
>> Thanks,
>> Vinay
>>
>> On Fri, Sep 25, 2015 at 12:16 AM, Vinod Vavilapalli <
>> vino...@hortonworks.com> wrote:
>>
>>>
>>
>> +1. Please take it over, I’ll standby for any help needed.
>>>
>>> Thanks
>>> +Vinod
>>>
>>>
>>> On Sep 24, 2015, at 11:34 AM, Sangjin Lee <sj...@apache.org> wrote:
>>>
>>> I'd like to volunteer as the release manager for 2.6.2 unless there
>>> is an
>>> objection.
>>>
>>>
>>>
>>
>

>>>
>>
>


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #518

2015-10-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/518/

Changes:

[lei] HDFS-9251. Refactor TestWriteToReplica and TestFsDatasetImpl to avoid

--
[...truncated 7298 lines...]
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.972 sec - in 
org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.845 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.779 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.434 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.087 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.884 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.662 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.686 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.048 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.247 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.091 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.738 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 15.737 sec - 
in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.462 sec - in 
org.apache.hadoop.fs.TestUrlStreamHandlerFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.463 sec - in 
org.apache.hadoop.fs.permission.TestStickyBit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.704 sec - in 

[jira] [Created] (HDFS-9271) Implement basic NN operations

2015-10-20 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-9271:


 Summary: Implement basic NN operations
 Key: HDFS-9271
 URL: https://issues.apache.org/jira/browse/HDFS-9271
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen


Expose via C and C++ API:
* mkdirs
* rename
* delete
* stat
* chmod
* chown
* getListing
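
(Not from the JIRA, but as a reference for the intended semantics: a hedged Java 
sketch of the FileSystem calls the C/C++ bindings would mirror. Paths, owner, 
and group names are placeholders; setOwner normally requires superuser 
privileges.)
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class NnOpsSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/tmp/demo");
    fs.mkdirs(dir);                                                  // mkdirs
    fs.rename(dir, new Path("/tmp/demo2"));                          // rename
    FileStatus st = fs.getFileStatus(new Path("/tmp/demo2"));        // stat
    fs.setPermission(st.getPath(), new FsPermission((short) 0755));  // chmod
    fs.setOwner(st.getPath(), "hdfs", "hadoop");                     // chown
    for (FileStatus s : fs.listStatus(new Path("/tmp"))) {           // getListing
      System.out.println(s.getPath());
    }
    fs.delete(new Path("/tmp/demo2"), true);                         // delete (recursive)
  }
}
{code}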




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Planning for Apache Hadoop 2.6.2

2015-10-20 Thread Sangjin Lee
If you have backported bugfixes to 2.7.2, please take a moment to consider
whether they are also relevant (and important) for 2.6.x, and if so backport
them to branch-2.6. Thanks!

On Tue, Oct 20, 2015 at 9:33 AM, Sangjin Lee  wrote:

> Another friendly reminder that I'll be cutting the branch and creating the
> RC soon. I'm targeting tomorrow. Thanks!
>
> Sangjin
>
> On Mon, Oct 12, 2015 at 11:40 AM, Sangjin Lee  wrote:
>
>> Hi all,
>>
>> We are targeting next week to create the first RC for 2.6.2. Currently
>> there are 14 issues that are committed to branch-2.6. If you have bug fixes
>> that were made to branch-2 and/or branch-2.7 that are applicable to 2.6.x
>> and would improve the quality of those releases, please take time to review
>> them and commit them to branch-2.6.
>>
>> Also, if you have JIRAs that are targeted to 2.6.2 and are close to being
>> done, this might be a good time to complete them.
>>
>> Do let me know if you have any questions.
>>
>> Thanks,
>> Sangjin
>>
>> On Sat, Sep 26, 2015 at 9:19 AM, Sangjin Lee  wrote:
>>
>>> I have updated branch-2.6 and force pushed the branch. I checked the
>>> branch out from scratch and tested building it.
>>>
>>> Branch-2.6 is back open. If you have a patch ready for 2.6.2, double
>>> check if it applies cleanly before committing to branch-2.6.
>>>
>>> Thanks,
>>> Sangjin
>>>
>>> On Fri, Sep 25, 2015 at 4:42 PM, Sangjin Lee  wrote:
>>>
 Per Vinod's suggestion, in order to reduce the amount of movement I'll
 pick commits from branch-2.6 onto the tip of branch-2.6.1 rather than the
 other way around. This means I'll need to move branch-2.6 and force push
 that change.

 Could you please hold off on committing to branch-2.6 until I am done
 relocating the branch? I'll let you know when I'm done with that exercise.
 I expect I'll be done with this in 24 hours or so. Let me know if you have
 any concerns.

 Thanks,
 Sangjin

 On Fri, Sep 25, 2015 at 12:53 PM, Sangjin Lee  wrote:

> Thanks folks. I'll get started on the items Vinod mentioned soon.
>
> If you have something you'd like to push for inclusion in 2.6.2,
> please mark the target version as 2.6.2.
>
> I'd like to ask one more thing on top of that. It would be AWESOME if
> you can check if it can be applied cleanly on top of 2.6.1, and if not,
> provide an updated patch suitable for 2.6.1. This will help speed up the
> 2.6.2 release process tremendously. If the person who's recommending the
> JIRA for 2.6.2 inclusion can do it, that would be great. Help from the
> original contributor might be helpful as well. Thanks for your 
> cooperation!
>
> Regards,
> Sangjin
>
> On Thu, Sep 24, 2015 at 9:39 PM, Akira AJISAKA <
> ajisa...@oss.nttdata.co.jp> wrote:
>
>> Thanks Vinod and Sangjin for releasing 2.6.1 and starting discussion
>> for 2.6.2!
>>
>> +1. If there's anything I can help you with, please tell me.
>>
>> Thanks,
>> Akira
>>
>>
>> On 9/25/15 13:23, Vinayakumar B wrote:
>>
>>> Thanks Vinod and Sangjin for making 2.6.1 release possible.
>>>
>>> Apologies for not getting time to verify and vote for the release.
>>>
>>> I will also be available to help for 2.6.2 if anything required.
>>>
>>> Thanks,
>>> Vinay
>>>
>>> On Fri, Sep 25, 2015 at 12:16 AM, Vinod Vavilapalli <
>>> vino...@hortonworks.com> wrote:
>>>

>>>
>>> +1. Please take it over, I’ll standby for any help needed.

 Thanks
 +Vinod


 On Sep 24, 2015, at 11:34 AM, Sangjin Lee <sj...@apache.org> wrote:

 I'd like to volunteer as the release manager for 2.6.2 unless there
 is an
 objection.



>>>
>>
>

>>>
>>
>


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #519

2015-10-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/519/

Changes:

[cmccabe] HDFS-9270. TestShortCircuitLocalRead should not leave socket after 
unit

[kihwal] HADOOP-12418. TestRPC.testRPCInterruptedSimple fails intermittently.

[wang] HDFS-3059. ssl-server.xml causes NullPointer. Contributed by Xiao Chen.

--
[...truncated 6584 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.958 sec - in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.043 sec - in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.442 sec - in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.568 sec - in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.642 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.909 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestSimulatedFSDataset
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestBPOfferService
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.591 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBPOfferService
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.web.dtp.TestDtpHttp2
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.987 sec - in 
org.apache.hadoop.hdfs.server.datanode.web.dtp.TestDtpHttp2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestParameterParser
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.548 sec - in 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestParameterParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestDataNodeUGIProvider
Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 6.344 sec <<< 
FAILURE! - in 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestDataNodeUGIProvider
testUGICacheSecure(org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestDataNodeUGIProvider)
  Time elapsed: 0.65 sec  <<< ERROR!
java.lang.NoSuchMethodError: 
org.apache.hadoop.security.token.Token.buildCacheKey()Ljava/lang/String;
at 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.buildTokenCacheKey(DataNodeUGIProvider.java:108)
at 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugi(DataNodeUGIProvider.java:71)
at 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestDataNodeUGIProvider.testUGICacheSecure(TestDataNodeUGIProvider.java:100)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.76 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeMXBean
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.677 sec - 
in 

Hadoop-Hdfs-trunk-Java8 - Build # 519 - Still Failing

2015-10-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/519/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6777 lines...]

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:37 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:55 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.099 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:59 h
[INFO] Finished at: 2015-10-20T23:43:40+00:00
[INFO] Final Memory: 67M/895M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter8650908893523533157.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire2672042118210135190tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_1658277798554867594101tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-9270
Updating HDFS-3059
Updating HADOOP-12418
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
6 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestDataNodeUGIProvider.testUGICacheSecure

Error Message:
org.apache.hadoop.security.token.Token.buildCacheKey()Ljava/lang/String;

Stack Trace:
java.lang.NoSuchMethodError: 
org.apache.hadoop.security.token.Token.buildCacheKey()Ljava/lang/String;
at 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.buildTokenCacheKey(DataNodeUGIProvider.java:108)
at 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugi(DataNodeUGIProvider.java:71)
at 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.TestDataNodeUGIProvider.testUGICacheSecure(TestDataNodeUGIProvider.java:100)


FAILED:  

[jira] [Created] (HDFS-9273) ACLs on root directory may be lost after NN restart

2015-10-20 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-9273:
---

 Summary: ACLs on root directory may be lost after NN restart
 Key: HDFS-9273
 URL: https://issues.apache.org/jira/browse/HDFS-9273
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Xiao Chen
Assignee: Xiao Chen


After restarting namenode, the ACLs on the root directory ("/") may be lost if 
it's rolled over to fsimage.
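
(A hedged repro sketch, not from the report itself; it assumes a MiniDFSCluster 
harness and the AclTestHelpers.aclEntry helper used elsewhere in HDFS tests.)
{code}
import static org.apache.hadoop.fs.permission.AclEntryScope.ACCESS;
import static org.apache.hadoop.fs.permission.AclEntryType.*;
import static org.apache.hadoop.fs.permission.FsAction.*;
import static org.apache.hadoop.hdfs.server.namenode.AclTestHelpers.aclEntry;

import java.util.Arrays;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public class RootAclRestartSketch {
  static void repro(MiniDFSCluster cluster) throws Exception {
    DistributedFileSystem fs = cluster.getFileSystem();
    Path root = new Path("/");
    fs.setAcl(root, Arrays.asList(
        aclEntry(ACCESS, USER, ALL),
        aclEntry(ACCESS, USER, "foo", ALL),
        aclEntry(ACCESS, GROUP, READ_EXECUTE),
        aclEntry(ACCESS, OTHER, NONE)));
    // Roll the namespace into an fsimage, then restart the NameNode.
    fs.setSafeMode(SafeModeAction.SAFEMODE_ENTER);
    fs.saveNamespace();
    fs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
    cluster.restartNameNode(true);
    // Expect the four entries set above; per this report they may be missing.
    System.out.println(fs.getAclStatus(root));
  }
}
{code}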



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk #2456

2015-10-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2456/

Changes:

[cmccabe] HDFS-9270. TestShortCircuitLocalRead should not leave socket after 
unit

[kihwal] HADOOP-12418. TestRPC.testRPCInterruptedSimple fails intermittently.

[wang] HDFS-3059. ssl-server.xml causes NullPointer. Contributed by Xiao Chen.

--
[...truncated 6689 lines...]
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.997 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.23 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.25 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Running org.apache.hadoop.hdfs.tools.TestGetConf
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.553 sec - in 
org.apache.hadoop.hdfs.tools.TestGetConf
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.773 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Running org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.693 sec - in 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.249 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Running org.apache.hadoop.hdfs.tools.TestGetGroups
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.527 sec - in 
org.apache.hadoop.hdfs.tools.TestGetGroups
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.483 sec - in 
org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.956 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.053 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.202 sec - in 
org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.069 sec - 
in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.313 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.244 sec - in 
org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.611 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.964 sec - in 
org.apache.hadoop.hdfs.TestDatanodeConfig
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.14 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.575 sec - in 
org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.839 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.179 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 147.492 sec - 
in org.apache.hadoop.hdfs.TestDFSClientRetries
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.114 sec - 
in org.apache.hadoop.hdfs.TestBlockReaderLocal
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.078 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, 

[jira] [Created] (HDFS-9274) Default value of dfs.datanode.directoryscan.throttle.limit.ms.per.sec should be consistent

2015-10-20 Thread Yi Liu (JIRA)
Yi Liu created HDFS-9274:


 Summary: Default value of 
dfs.datanode.directoryscan.throttle.limit.ms.per.sec should be consistent
 Key: HDFS-9274
 URL: https://issues.apache.org/jira/browse/HDFS-9274
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Trivial


Always see following error log while running:
{noformat}
ERROR datanode.DirectoryScanner (DirectoryScanner.java:(430)) - 
dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 
ms/sec. Assuming default value of 1000
{noformat}

{code}
<property>
  <name>dfs.datanode.directoryscan.throttle.limit.ms.per.sec</name>
  <value>0</value>
...
{code}
The default value should be 1000, consistent with 
DFS_DATANODE_DIRECTORYSCAN_THROTTLE_LIMIT_MS_PER_SEC_DEFAULT.
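
A sketch of the corrected hdfs-default.xml entry, assuming the fix is simply to 
make the shipped default match the code constant's value of 1000:
{code}
<property>
  <name>dfs.datanode.directoryscan.throttle.limit.ms.per.sec</name>
  <value>1000</value>
</property>
{code}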



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk - Build # 2456 - Still Failing

2015-10-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2456/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6882 lines...]
main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:03 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:17 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.062 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:21 h
[INFO] Finished at: 2015-10-21T01:42:52+00:00
[INFO] Final Memory: 64M/715M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-9270
Updating HDFS-3059
Updating HADOOP-12418
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverThreeDataBlocks1

Error Message:
Failed to recover striped block: -9223372036854775790

Stack Trace:
java.lang.AssertionError: Failed to recover striped block: -9223372036854775790
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.sortTargetsByReplicas(TestRecoverStripedFile.java:345)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.assertFileBlocksRecovery(TestRecoverStripedFile.java:290)
at 
org.apache.hadoop.hdfs.TestRecoverStripedFile.testRecoverThreeDataBlocks1(TestRecoverStripedFile.java:143)




Hadoop-Hdfs-trunk-Java8 - Build # 520 - Still Failing

2015-10-20 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/520/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7448 lines...]
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:34 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:10 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.059 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:14 h
[INFO] Finished at: 2015-10-21T04:49:46+00:00
[INFO] Final Memory: 55M/506M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter7325267174830451455.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire4339962506354653085tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_4928843842037098622256tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-3985
Updating MAPREDUCE-6495
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
All tests passed