Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-21 Thread Wangda Tan
+1 (binding)

- Build & deploy single-node Hadoop from source code
- Add/Remove node labels to queues/nodes
- Run distributed shell commanding using default/specified node labels

Thanks,
Wangda


On Mon, Dec 21, 2015 at 9:58 AM, Masatake Iwasaki <
iwasak...@oss.nttdata.co.jp> wrote:

> +1(non-binding)
>
> - verified mds and signature of source and binary tarball
> - started 3 node cluster and ran example jobs such as wordcount and
> terasort
> - built from source tarball with -Pnative on CentOS 7 and OpenJDK 7
> - built site documentation and skimmed the contents
>
> Thanks,
> Masatake Iwasaki
>
>
>
> On 12/17/15 11:49, Vinod Kumar Vavilapalli wrote:
>
>> Hi all,
>>
>> I've created a release candidate RC1 for Apache Hadoop 2.7.2.
>>
>> As discussed before, this is the next maintenance release to follow up
>> 2.7.1.
>>
>> The RC is available for validation at:
>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ <
>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/>
>>
>> The RC tag in git is: release-2.7.2-RC1
>>
>> The maven artifacts are available via repository.apache.org <
>> http://repository.apache.org/> at
>> https://repository.apache.org/content/repositories/orgapachehadoop-1026/
>>
>> The release-notes are inside the tar-balls at location
>> hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I
>> hosted this at
>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html for
>> quick perusal.
>>
>> As you may have noted,
>>   - The RC0 related voting thread got halted due to some critical issues.
>> It took a while again for getting all those blockers out of the way. See
>> the previous voting thread [3] for details.
>>   - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by
>> quite a bit. This release's related discussion threads are linked below:
>> [1] and [2].
>>
>> Please try the release and vote; the vote will run for the usual 5 days.
>>
>> Thanks,
>> Vinod
>>
>> [1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes <
>> http://markmail.org/message/oozq3gvd4nhzsaes>
>> [2]: Planning Apache Hadoop 2.7.2
>> http://markmail.org/message/iktqss2qdeykgpqk <
>> http://markmail.org/message/iktqss2qdeykgpqk>
>> [3]: [VOTE] Release Apache Hadoop 2.7.2 RC0:
>> http://markmail.org/message/5txhvr2qdiqglrwc
>>
>>
>>
>


[jira] [Created] (HDFS-9587) Modify tests that initialize MiniDFSNNTopology with hard-coded ports to use ServerSocketUtil#getPorts

2015-12-21 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-9587:
---

 Summary: Modify tests that initialize MiniDFSNNTopology with 
hard-coded ports to use ServerSocketUtil#getPorts
 Key: HDFS-9587
 URL: https://issues.apache.org/jira/browse/HDFS-9587
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Xiao Chen
Assignee: Xiao Chen


In tests with HA we have to use {{setHttpPort}} with non-ephemeral ports when 
creating {{MiniDFSNNTopology}}, but a hard-coded port may cause failures if it 
is already in use in the environment.
We should use {{ServerSocketUtil#getPorts}} to reduce the probability of such 
failures. This jira proposes updating all {{MiniDFSNNTopology#setHttpPort}} 
usages. (Currently there's only {{ServerSocketUtil#getPort}}, but HDFS-9444 is 
adding the ability to get multiple free ports.)
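
A minimal sketch of the intended usage (assuming the {{getPorts(int)}} helper 
that HDFS-9444 is adding; its final signature may differ):
{code}
import org.apache.hadoop.hdfs.MiniDFSNNTopology;
import org.apache.hadoop.net.ServerSocketUtil;

// Instead of hard-coding setHttpPort(10001)/setHttpPort(10002), ask for
// free ports up front and pass them to the topology.
int[] ports = ServerSocketUtil.getPorts(2);  // assumed HDFS-9444 helper
MiniDFSNNTopology topology = new MiniDFSNNTopology()
    .addNameservice(new MiniDFSNNTopology.NSConf("ns1")
        .addNN(new MiniDFSNNTopology.NNConf("nn1").setHttpPort(ports[0]))
        .addNN(new MiniDFSNNTopology.NNConf("nn2").setHttpPort(ports[1])));
{code}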



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9588) DiskBalancer : Add submitDiskbalancer RPC

2015-12-21 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-9588:
--

 Summary: DiskBalancer : Add submitDiskbalancer RPC
 Key: HDFS-9588
 URL: https://issues.apache.org/jira/browse/HDFS-9588
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: HDFS-1312
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: HDFS-1312


Add a DataNode RPC that allows a client to submit a diskbalancer plan to the 
data node.
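
A rough sketch of what the new method could look like (parameter names here 
are illustrative, not the final HDFS-9588 signature):
{code}
import java.io.IOException;

// Method to be added to the DataNode client-facing protocol.
public interface ClientDatanodeProtocol {
  /**
   * Submit a disk-balancer plan to this DataNode for later execution.
   *
   * @param planID      id/hash of the plan, used to query or cancel it later
   * @param planVersion version of the plan format
   * @param plan        the serialized plan (e.g. JSON)
   */
  void submitDiskBalancerPlan(String planID, long planVersion, String plan)
      throws IOException;
}
{code}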



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk - Build # 2656 - Still Failing

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2656/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 110 lines...]
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned 
status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Archiving artifacts
Recording test results
ERROR: Publisher 'Publish JUnit test result report' failed: Test reports were 
found but none of them are new. Did tests run? 
For example, 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports/TEST-org.apache.hadoop.fs.TestUrlStreamHandlerFactory.xml
 is 20 hr old

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #2656

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2656/

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H9 (Mapreduce Falcon Hadoop Pig Zookeeper Tez Hdfs) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned 
status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #718

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/718/

Changes:

[aajisaka] Move HDFS-8914 from 2.8.0 to 2.7.3 in CHANGES.txt.

--
[...truncated 6798 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.045 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.405 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.481 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.674 sec - in 
org.apache.hadoop.hdfs.TestWriteRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.902 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.89 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.306 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.887 sec - 
in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.556 sec - in 
org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLeaseRecoveryStriped
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.819 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecoveryStriped
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestBlockReaderFactory
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.239 sec - in 
org.apache.hadoop.hdfs.TestBlockReaderFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.8 sec - in 
org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.767 sec - in 
org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.169 sec - in 
org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.qjournal.server.TestJournal
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.804 sec - in 
org.apache.hadoop.hdfs.qjournal.server.TestJournal
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.615 sec - in 
org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.945 sec - in 

Hadoop-Hdfs-trunk-Java8 - Build # 718 - Still Failing

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/718/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6991 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:54 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:52 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.217 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:56 h
[INFO] Finished at: 2015-12-21T20:56:55+00:00
[INFO] Final Memory: 56M/513M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testDataNodeTimeSpend

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testDataNodeTimeSpend(TestDataNodeMetrics.java:289)




Jenkins build is back to normal : Hadoop-Hdfs-trunk #2657

2015-12-21 Thread Apache Jenkins Server
See 



[jira] [Created] (HDFS-9590) NPE in Storage#unlock

2015-12-21 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-9590:
---

 Summary: NPE in Storage#unlock
 Key: HDFS-9590
 URL: https://issues.apache.org/jira/browse/HDFS-9590
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiao Chen
Assignee: Xiao Chen


The code looks susceptible to race conditions in multi-threaded runs.
{code}
public void unlock() throws IOException {
  if (this.lock == null)
return;
  this.lock.release();
  lock.channel().close();
  lock = null;
}
{code}
This is called in a handful of places, and I don't see any protection. Shall we 
add some synchronization mechanism? Not sure if I missed any design assumptions 
here.
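
A minimal sketch of one possible fix (an illustration, not a committed patch): 
synchronize {{unlock()}} and clear the field before releasing, so concurrent 
callers cannot race between the null check and the release.
{code}
public synchronized void unlock() throws IOException {
  if (this.lock == null)
    return;
  FileLock fl = this.lock;
  this.lock = null;      // publish "unlocked" before releasing
  fl.release();
  fl.channel().close();
}
{code}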




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9591) FSImage.loadEdits threw NullPointerException

2015-12-21 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9591:
-

 Summary: FSImage.loadEdits threw NullPointerException
 Key: HDFS-9591
 URL: https://issues.apache.org/jira/browse/HDFS-9591
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs, ha, namenode
Affects Versions: 3.0.0
 Environment: Jenkins
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


https://builds.apache.org/job/PreCommit-HDFS-Build/13963/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestFailureToReadEdits/testCheckpointStartingMidEditsFile_0_/

{noformat}
Error Message

Expected non-empty 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-3/current/fsimage_005

Stacktrace

java.lang.AssertionError: Expected non-empty 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/name-0-3/current/fsimage_005
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.namenode.FSImageTestUtil.assertNNHasCheckpoints(FSImageTestUtil.java:470)
at 
org.apache.hadoop.hdfs.server.namenode.ha.HATestUtil.waitForCheckpoint(HATestUtil.java:235)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits.testCheckpointStartingMidEditsFile(TestFailureToReadEdits.java:240)

{noformat}

{noformat}
Exception in thread "Edit log tailer" 
org.apache.hadoop.util.ExitUtil$ExitException: java.lang.NullPointerException
at com.google.common.base.Joiner.join(Joiner.java:226)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:818)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:812)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:257)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:371)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:324)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:341)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:444)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:337)

at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:385)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:324)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:341)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:444)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:337)

{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9589) Block files which have been hardlinked should be duplicated before the DataNode appends to them

2015-12-21 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-9589:
--

 Summary: Block files which have been hardlinked should be 
duplicated before the DataNode appends to them
 Key: HDFS-9589
 URL: https://issues.apache.org/jira/browse/HDFS-9589
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Block files which have been hardlinked should be duplicated before the 
DataNode appends to them.  The patch for HDFS-8860 accidentally removed this 
code.
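
An illustrative sketch of the missing copy-on-append step (names are 
hypothetical, not the original HDFS-8860 code): detach a block file that has 
more than one hard link by copying it and swapping the copy into place before 
the append proceeds.
{code}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

static void breakHardlinkBeforeAppend(File blockFile) throws IOException {
  Path path = blockFile.toPath();
  // "unix:nlink" is the POSIX hard-link count attribute.
  int nlink = (Integer) Files.getAttribute(path, "unix:nlink");
  if (nlink > 1) {
    Path tmp = path.resolveSibling(path.getFileName() + ".detach");
    Files.copy(path, tmp, StandardCopyOption.COPY_ATTRIBUTES);
    Files.move(tmp, path, StandardCopyOption.REPLACE_EXISTING);
  }
}
{code}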



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #719

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/719/

Changes:

[wangda] YARN-4454. NM to nodelabel mapping going wrong after RM restart. (Bibin

[wangda] YARN-4290. Add -showDetails option to YARN Nodes CLI to print all nodes

--
[...truncated 7511 lines...]
at java.util.TimerThread.mainLoop(Timer.java:552)
at java.util.TimerThread.run(Timer.java:505)
"nioEventLoopGroup-2-15"  prio=10 tid=358 runnable
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:621)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:309)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:744)
"pool-2-thread-2"  prio=5 tid=49 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
"AsyncAppender-Dispatcher-Thread-91" daemon prio=5 tid=169 in Object.wait()
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at org.apache.log4j.AsyncAppender$Dispatcher.run(AsyncAppender.java:548)
at java.lang.Thread.run(Thread.java:744)
"IPC Parameter Sending Thread #1" daemon prio=5 tid=122 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
"nioEventLoopGroup-2-28"  prio=10 tid=371 runnable
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:621)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:309)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:744)
"nioEventLoopGroup-2-3"  prio=10 tid=346 runnable
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:621)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:309)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703)
at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at 

Hadoop-Hdfs-trunk-Java8 - Build # 719 - Still Failing

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/719/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7704 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [06:38 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:41 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.198 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:47 h
[INFO] Finished at: 2015-12-21T23:23:16+00:00
[INFO] Final Memory: 72M/1236M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: 
java.lang.RuntimeException: java.io.IOException: Stream Closed -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
4 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage

Error Message:
Cannot obtain block length for 
LocatedBlock{BP-2078839909-67.195.81.148-1450737639000:blk_7162739548153522810_1020;
 getBlockSize()=1024; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:35067,DS-5fc29a41-8d89-4a00-aaed-04073251324d,DISK]]}

Stack Trace:
java.io.IOException: Cannot obtain block length for 
LocatedBlock{BP-2078839909-67.195.81.148-1450737639000:blk_7162739548153522810_1020;
 getBlockSize()=1024; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:35067,DS-5fc29a41-8d89-4a00-aaed-04073251324d,DISK]]}
at 
org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:399)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:343)
at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:276)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:265)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1046)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1011)
at 

Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-21 Thread Xuan Gong
+1 (binding)

Thanks

Xuan Gong

> On Dec 21, 2015, at 4:25 PM, Chen He  wrote:
> 
> +1 (non-binding)
> 
> deploy to a 2-node cluster
> run loadgen and wc without problem
> 
> 
> Kindly let me ask a silly question: why do we separate 2.6.x from 2.7.x?
> 
> On Mon, Dec 21, 2015 at 11:51 AM, Wangda Tan  wrote:
> 
>> +1 (binding)
>> 
>> - Build & deploy single-node Hadoop from source code
>> - Add/Remove node labels to queues/nodes
>> - Run distributed shell commanding using default/specified node labels
>> 
>> Thanks,
>> Wangda
>> 
>> 
>> On Mon, Dec 21, 2015 at 9:58 AM, Masatake Iwasaki <
>> iwasak...@oss.nttdata.co.jp> wrote:
>> 
>>> +1(non-binding)
>>> 
>>> - verified mds and signature of source and binary tarball
>>> - started 3 node cluster and ran example jobs such as wordcount and
>>> terasort
>>> - built from source tarball with -Pnative on CentOS 7 and OpenJDK 7
>>> - built site documentation and skimmed the contents
>>> 
>>> Thanks,
>>> Masatake Iwasaki
>>> 
>>> 
>>> 
>>> On 12/17/15 11:49, Vinod Kumar Vavilapalli wrote:
>>> 
 Hi all,
 
 I've created a release candidate RC1 for Apache Hadoop 2.7.2.
 
 As discussed before, this is the next maintenance release to follow up
 2.7.1.
 
 The RC is available for validation at:
 http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ <
 http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/>
 
 The RC tag in git is: release-2.7.2-RC1
 
 The maven artifacts are available via repository.apache.org <
 http://repository.apache.org/> at
 
>> https://repository.apache.org/content/repositories/orgapachehadoop-1026/
 
 The release-notes are inside the tar-balls at location
 hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I
 hosted this at
 http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html
>> for
 quick perusal.
 
 As you may have noted,
  - The RC0 related voting thread got halted due to some critical
>> issues.
 It took a while again for getting all those blockers out of the way. See
 the previous voting thread [3] for details.
  - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by
 quite a bit. This release's related discussion threads are linked below:
 [1] and [2].
 
 Please try the release and vote; the vote will run for the usual 5 days.
 
 Thanks,
 Vinod
 
 [1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes <
 http://markmail.org/message/oozq3gvd4nhzsaes>
 [2]: Planning Apache Hadoop 2.7.2
 http://markmail.org/message/iktqss2qdeykgpqk <
 http://markmail.org/message/iktqss2qdeykgpqk>
 [3]: [VOTE] Release Apache Hadoop 2.7.2 RC0:
 http://markmail.org/message/5txhvr2qdiqglrwc
 
 
 
>>> 
>> 



Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-21 Thread Masatake Iwasaki

+1(non-binding)

- verified mds and signature of source and binary tarball
- started 3 node cluster and ran example jobs such as wordcount and terasort
- built from source tarball with -Pnative on CentOS 7 and OpenJDK 7
- built site documentation and skimmed the contents

Thanks,
Masatake Iwasaki


On 12/17/15 11:49, Vinod Kumar Vavilapalli wrote:

Hi all,

I've created a release candidate RC1 for Apache Hadoop 2.7.2.

As discussed before, this is the next maintenance release to follow up 2.7.1.

The RC is available for validation at: 
http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ 


The RC tag in git is: release-2.7.2-RC1

The maven artifacts are available via repository.apache.org at
https://repository.apache.org/content/repositories/orgapachehadoop-1026/ 


The release-notes are inside the tar-balls at location 
hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
this at http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html 
for quick perusal.

As you may have noted,
  - The RC0 related voting thread got halted due to some critical issues. It 
took a while again for getting all those blockers out of the way. See the 
previous voting thread [3] for details.
  - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by quite a 
bit. This release's related discussion threads are linked below: [1] and [2].

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

[1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes 

[2]: Planning Apache Hadoop 2.7.2 http://markmail.org/message/iktqss2qdeykgpqk 

[3]: [VOTE] Release Apache Hadoop 2.7.2 RC0: 
http://markmail.org/message/5txhvr2qdiqglrwc






Hadoop-Hdfs-trunk - Build # 2652 - Still Failing

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2652/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 110 lines...]
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned 
status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Archiving artifacts
Recording test results
ERROR: Publisher 'Publish JUnit test result report' failed: Test reports were 
found but none of them are new. Did tests run? 
For example, 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports/TEST-org.apache.hadoop.fs.TestUrlStreamHandlerFactory.xml
 is 16 hr old

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #2652

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2652/

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H9 (Mapreduce Falcon Hadoop Pig Zookeeper Tez Hdfs) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned 
status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at 

Re: Highlighting communication on releases

2015-12-21 Thread Junping Du
Thanks, Karthik, for bringing up the topic. Compared with small enhancements to 
email distribution for release/branch news, I think a more important thing is 
to make our community (contributors & committers) more aware of ongoing 
releases/branches.

I can see two improvements that could be helpful here:
  - Besides the email threads, we can have a central place (like the wiki pages 
HowToContribute/HowToCommit) where the release manager keeps the commit 
instructions/details updated for each releasing/unreleased branch.
   Following is a quick example:
   /***
   prior to 2.6: closed to commit
   ...
   2.6.4: commit to trunk, branch-2 and branch-2.6
   2.7.2: commit to trunk, branch-2, branch-2.7 and branch-2.7.2 (assuming the
RC hasn't gone out yet, which is not true today)
   2.8: commit to trunk, branch-2 and branch-2.8
   2.9: commit to trunk and branch-2
   ...
   ***/
   Both contributors and committers can keep an eye on it.

  - Add a tool in post-commit to check whether the fix version matches the
commits landing on the related branches. If not, send notifications to the
patch contributor and committer (and maybe the release manager as well) to
correct it ASAP; a rough sketch follows below.
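
   A hypothetical illustration of such a checker (the branch list and JIRA id
are examples only, not a real tool):

   import java.io.IOException;

   // Check that a JIRA id appears on every branch its fix version implies
   // (per the table above), and nag the contributor/committer if not.
   public class FixVersionChecker {
     static boolean branchHasCommit(String branch, String jiraId)
         throws IOException, InterruptedException {
       Process p = new ProcessBuilder(
           "git", "log", "--oneline", branch, "--grep=" + jiraId)
           .redirectErrorStream(true).start();
       boolean found = p.getInputStream().read() != -1; // any output => found
       p.waitFor();
       return found;
     }

     public static void main(String[] args) throws Exception {
       String jiraId = "HDFS-9586"; // example id
       String[] expected = {"origin/trunk", "origin/branch-2",
           "origin/branch-2.7", "origin/branch-2.7.2"}; // e.g. fix version 2.7.2
       for (String branch : expected) {
         if (!branchHasCommit(branch, jiraId)) {
           System.err.println(jiraId + " is missing on " + branch
               + "; notifying contributor/committer.");
         }
       }
     }
   }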

Thoughts?

Thanks,

Junping


From: Karthik Kambatla 
Sent: Monday, December 21, 2015 4:59 AM
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: Highlighting communication on releases

Hi folks

In the last few months, we have been shipping multiple releases -
maintenance and minor - elevating the quality and purpose of our releases.

With the increase in releases and related communication, I wonder if we
need to highlight release-related communication in some way. Otherwise, it
is easy to miss the details in the plethora of emails on dev lists. For
instance, it appears committers might have missed the detail that
branch-2.8 was cut.

A couple of options I see:

   1. Tag release-related emails with [RELEASE] or some other appropriate
   tag.
   2. Separate mailing list for release-specific information. Maybe this
   list could be used for other project-level topics as well. common-dev@
   has been overloaded for both common-specific and project-level discussions.
   Also, folks wouldn't have to email all -dev@ lists.

Would like to hear others' thoughts on the matter.

Karthik


[jira] [Created] (HDFS-9586) listCorruptFileBlocks should not output files whose replicas are all decommissioning

2015-12-21 Thread Phil Yang (JIRA)
Phil Yang created HDFS-9586:
---

 Summary: listCorruptFileBlocks should not output files whose 
replicas are all decommissioning
 Key: HDFS-9586
 URL: https://issues.apache.org/jira/browse/HDFS-9586
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Phil Yang
Assignee: Phil Yang


As HDFS-7933 said, we should count decommissioning and decommissioned nodes 
separately and regard decommissioning nodes as special live nodes whose blocks 
are not corrupt or missing.

So in listCorruptFileBlocks, which is used by fsck and the HDFS NameNode web 
UI, we should collect a file as corrupt only if its liveReplicas and 
decommissioning counts are both 0.
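
A hedged sketch of the proposed condition (method names are approximate, based 
on the NumberReplicas counters from HDFS-7933; the surrounding variables are 
placeholders for wherever listCorruptFileBlocks collects its results):
{code}
// num comes from BlockManager#countNodes for the block in question.
NumberReplicas num = blockManager.countNodes(storedBlock);
boolean reallyCorrupt =
    num.liveReplicas() == 0 && num.decommissioning() == 0;
if (reallyCorrupt) {
  corruptFiles.add(src);  // only now report the file via fsck / web UI
}
{code}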



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk - Build # 2651 - Still Failing

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2651/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 110 lines...]
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned 
status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Archiving artifacts
Recording test results
ERROR: Publisher 'Publish JUnit test result report' failed: Test reports were 
found but none of them are new. Did tests run? 
For example, 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports/TEST-org.apache.hadoop.fs.TestUrlStreamHandlerFactory.xml
 is 15 hr old

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #2651

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2651/

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H9 (Mapreduce Falcon Hadoop Pig Zookeeper Tez Hdfs) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned 
status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #720

2015-12-21 Thread Apache Jenkins Server
See 

Changes:

[cnauroth] YARN-3458. CPU resource monitoring in Windows. Contributed by Inigo

--
[...truncated 7381 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.657 sec - in 
org.apache.hadoop.fs.TestResolveHdfsSymlink
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.891 sec - 
in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.329 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.659 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.342 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.41 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.301 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.156 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.445 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.997 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.448 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.12 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.149 sec - 
in org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.987 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.343 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.318 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 

Hadoop-Hdfs-trunk-Java8 - Build # 720 - Still Failing

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/720/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7574 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [06:57 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:17 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.066 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:24 h
[INFO] Finished at: 2015-12-22T06:15:32+00:00
[INFO] Final Memory: 56M/610M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
6 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testVolumeIteratorWithCaching

Error Message:
test timed out after 6 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 6 milliseconds
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:378)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1496)
at org.apache.hadoop.ipc.Client.call(Client.java:1424)
at org.apache.hadoop.ipc.Client.call(Client.java:1385)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy19.getBlockLocations(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getBlockLocations(ClientNamenodeProtocolTranslatorPB.java:253)
at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:255)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy20.getBlockLocations(Unknown Source)
at 

RE: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-21 Thread Rohith Sharma K S
+1(non-binding)
-Built from source and deployed in 3 node cluster
-Tested with combination of RMRestart/RM HA/RM work preserving restart/NM work 
preserving restart modes.
-Ran MR and distributed shell sample applications.
-Verified signature and md5 of binary

Thanks & Regards
Rohith Sharma K S


-Original Message-
From: Tsuyoshi Ozawa [mailto:oz...@apache.org] 
Sent: 22 December 2015 09:04
To: common-...@hadoop.apache.org
Cc: yarn-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; Vinod Kumar Vavilapalli
Subject: Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

+1

- downloaded src and bin tar balls and verified signatures.
- built Tez and Spark with 2.7.2 artifacts and JDK7.
- ran tests of Tez with 2.7.2 artifacts, it passed.

FYI: YARN-4348, reported by Jian, is one of the critical issues of the 2.7.2 
release. It's better to release 2.7.3 as soon as possible after this release.

Thanks,
- Tsuyoshi

On Tue, Dec 22, 2015 at 4:51 AM, Wangda Tan  wrote:
> +1 (binding)
>
> - Build & deploy single-node Hadoop from source code
> - Add/Remove node labels to queues/nodes
> - Run distributed shell commanding using default/specified node labels
>
> Thanks,
> Wangda
>
>
> On Mon, Dec 21, 2015 at 9:58 AM, Masatake Iwasaki < 
> iwasak...@oss.nttdata.co.jp> wrote:
>
>> +1(non-binding)
>>
>> - verified mds and signature of source and binary tarball
>> - started 3 node cluster and ran example jobs such as wordcount and 
>> terasort
>> - built from source tarball with -Pnative on CentOS 7 and OpenJDK 7
>> - built site documentation and skimmed the contents
>>
>> Thanks,
>> Masatake Iwasaki
>>
>>
>>
>> On 12/17/15 11:49, Vinod Kumar Vavilapalli wrote:
>>
>>> Hi all,
>>>
>>> I've created a release candidate RC1 for Apache Hadoop 2.7.2.
>>>
>>> As discussed before, this is the next maintenance release to follow 
>>> up 2.7.1.
>>>
>>> The RC is available for validation at:
>>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ < 
>>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/>
>>>
>>> The RC tag in git is: release-2.7.2-RC1
>>>
>>> The maven artifacts are available via repository.apache.org < 
>>> http://repository.apache.org/> at 
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1026/
>>>
>>> The release-notes are inside the tar-balls at location 
>>> hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. 
>>> I hosted this at 
>>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html 
>>> for quick perusal.
>>>
>>> As you may have noted,
>>>   - The RC0 related voting thread got halted due to some critical issues.
>>> It took a while again for getting all those blockers out of the way. 
>>> See the previous voting thread [3] for details.
>>>   - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip 
>>> by quite a bit. This release's related discussion threads are linked below:
>>> [1] and [2].
>>>
>>> Please try the release and vote; the vote will run for the usual 5 days.
>>>
>>> Thanks,
>>> Vinod
>>>
>>> [1]: 2.7.2 release plan: 
>>> http://markmail.org/message/oozq3gvd4nhzsaes < 
>>> http://markmail.org/message/oozq3gvd4nhzsaes>
>>> [2]: Planning Apache Hadoop 2.7.2
>>> http://markmail.org/message/iktqss2qdeykgpqk < 
>>> http://markmail.org/message/iktqss2qdeykgpqk>
>>> [3]: [VOTE] Release Apache Hadoop 2.7.2 RC0:
>>> http://markmail.org/message/5txhvr2qdiqglrwc
>>>
>>>
>>>
>>


Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-21 Thread Arpit Agarwal
Thanks for putting together this release Vinod.

+1 (binding)

 - Verified signatures
 - Started pseudo-cluster with binary distribution, verified git commit ID
 - Built sources, deployed pseudo-cluster and ran example map reduce jobs, 
DistributedShell, HDFS commands.


PS: Extra file hadoop-2.7.2-RC1-src.tar.gz.md5?





On 12/16/15, 6:49 PM, "Vinod Kumar Vavilapalli"  wrote:

>Hi all,
>
>I've created a release candidate RC1 for Apache Hadoop 2.7.2.
>
>As discussed before, this is the next maintenance release to follow up 2.7.1.
>
>The RC is available for validation at: 
>http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ 
>
>
>The RC tag in git is: release-2.7.2-RC1
>
>The maven artifacts are available via repository.apache.org 
> at 
>https://repository.apache.org/content/repositories/orgapachehadoop-1026/ 
>
>
>The release-notes are inside the tar-balls at location 
>hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
>this at http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html 
>for quick perusal.
>
>As you may have noted,
> - The RC0 related voting thread got halted due to some critical issues. It 
> took a while again for getting all those blockers out of the way. See the 
> previous voting thread [3] for details.
> - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by quite a 
> bit. This release's related discussion threads are linked below: [1] and [2].
>
>Please try the release and vote; the vote will run for the usual 5 days.
>
>Thanks,
>Vinod
>
>[1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes 
>
>[2]: Planning Apache Hadoop 2.7.2 http://markmail.org/message/iktqss2qdeykgpqk 
>
>[3]: [VOTE] Release Apache Hadoop 2.7.2 RC0: 
>http://markmail.org/message/5txhvr2qdiqglrwc
>


Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-21 Thread Tsuyoshi Ozawa
+1

- downloaded src and bin tar balls and verified signatures.
- built Tez and Spark with 2.7.2 artifacts and JDK7.
- ran tests of Tez with 2.7.2 artifacts, it passed.

FYI: YARN-4348, reported by Jian, is one of the critical issues of the 2.7.2
release. It's better to release 2.7.3 as soon as possible after this
release.

Thanks,
- Tsuyoshi

On Tue, Dec 22, 2015 at 4:51 AM, Wangda Tan  wrote:
> +1 (binding)
>
> - Build & deploy single-node Hadoop from source code
> - Add/Remove node labels to queues/nodes
> - Run distributed shell commanding using default/specified node labels
>
> Thanks,
> Wangda
>
>
> On Mon, Dec 21, 2015 at 9:58 AM, Masatake Iwasaki <
> iwasak...@oss.nttdata.co.jp> wrote:
>
>> +1(non-binding)
>>
>> - verified mds and signature of source and binary tarball
>> - started 3 node cluster and ran example jobs such as wordcount and
>> terasort
>> - built from source tarball with -Pnative on CentOS 7 and OpenJDK 7
>> - built site documentation and skimmed the contents
>>
>> Thanks,
>> Masatake Iwasaki
>>
>>
>>
>> On 12/17/15 11:49, Vinod Kumar Vavilapalli wrote:
>>
>>> Hi all,
>>>
>>> I've created a release candidate RC1 for Apache Hadoop 2.7.2.
>>>
>>> As discussed before, this is the next maintenance release to follow up
>>> 2.7.1.
>>>
>>> The RC is available for validation at:
>>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ <
>>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/>
>>>
>>> The RC tag in git is: release-2.7.2-RC1
>>>
>>> The maven artifacts are available via repository.apache.org <
>>> http://repository.apache.org/> at
>>> https://repository.apache.org/content/repositories/orgapachehadoop-1026/
>>>
>>> The release-notes are inside the tar-balls at location
>>> hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I
>>> hosted this at
>>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html for
>>> quick perusal.
>>>
>>> As you may have noted,
>>>   - The RC0 related voting thread got halted due to some critical issues.
>>> It took a while again for getting all those blockers out of the way. See
>>> the previous voting thread [3] for details.
>>>   - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by
>>> quite a bit. This release's related discussion threads are linked below:
>>> [1] and [2].
>>>
>>> Please try the release and vote; the vote will run for the usual 5 days.
>>>
>>> Thanks,
>>> Vinod
>>>
>>> [1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes <
>>> http://markmail.org/message/oozq3gvd4nhzsaes>
>>> [2]: Planning Apache Hadoop 2.7.2
>>> http://markmail.org/message/iktqss2qdeykgpqk <
>>> http://markmail.org/message/iktqss2qdeykgpqk>
>>> [3]: [VOTE] Release Apache Hadoop 2.7.2 RC0:
>>> http://markmail.org/message/5txhvr2qdiqglrwc
>>>
>>>
>>>
>>


Build failed in Jenkins: Hadoop-Hdfs-trunk #2658

2015-12-21 Thread Apache Jenkins Server
See 

Changes:

[cnauroth] YARN-3458. CPU resource monitoring in Windows. Contributed by Inigo

--
[...truncated 6198 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.333 sec - in 
org.apache.hadoop.hdfs.TestSeekBug
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.832 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.472 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Running org.apache.hadoop.hdfs.security.TestDelegationToken
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.263 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationToken
Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.892 sec - in 
org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.214 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Running org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.846 sec - in 
org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.359 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
Running org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.728 sec - 
in org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.442 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestErasureCodingPolicyWithSnapshot
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.825 sec - in 
org.apache.hadoop.hdfs.TestErasureCodingPolicyWithSnapshot
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.9 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.67 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestAclsEndToEnd
Tests run: 9, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 119.344 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestAclsEndToEnd
testGoodWithKeyAcls(org.apache.hadoop.hdfs.TestAclsEndToEnd)  Time elapsed: 
61.287 sec  <<< FAILURE!
java.lang.AssertionError: Exception during creation of key key1 by keyadmin
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.TestAclsEndToEnd.doFullAclTest(TestAclsEndToEnd.java:414)
at 
org.apache.hadoop.hdfs.TestAclsEndToEnd.testGoodWithKeyAcls(TestAclsEndToEnd.java:343)

Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.311 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.651 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.016 sec - in 
org.apache.hadoop.hdfs.TestDFSAddressConfig
Running org.apache.hadoop.hdfs.TestRecoverStripedFile
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 76.79 sec - in 
org.apache.hadoop.hdfs.TestRecoverStripedFile
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.962 sec - in 
org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestPread
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.517 sec - in 
org.apache.hadoop.hdfs.TestPread
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.046 sec - in 
org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.167 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestGetFileChecksum
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.98 sec - in 

Hadoop-Hdfs-trunk - Build # 2658 - Failure

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2658/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6391 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:54 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:14 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.075 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:18 h
[INFO] Finished at: 2015-12-22T05:08:56+00:00
[INFO] Final Memory: 56M/710M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestAclsEndToEnd.testGoodWithKeyAcls

Error Message:
Exception during creation of key key1 by keyadmin

Stack Trace:
java.lang.AssertionError: Exception during creation of key key1 by keyadmin
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.TestAclsEndToEnd.doFullAclTest(TestAclsEndToEnd.java:414)
at 
org.apache.hadoop.hdfs.TestAclsEndToEnd.testGoodWithKeyAcls(TestAclsEndToEnd.java:343)


FAILED:  org.apache.hadoop.hdfs.TestDataTransferKeepalive.testSlowReader

Error Message:
Could not obtain block: 
BP-1418118350-67.195.81.146-1450750105974:blk_1073741825_1001 file=/test

Stack Trace:
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: 
BP-1418118350-67.195.81.146-1450750105974:blk_1073741825_1001 file=/test
at 
org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:986)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:612)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:889)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:941)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:714)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at 

[jira] [Created] (HDFS-9584) NPE in distcp when ssl configuration file does not exist in class path.

2015-12-21 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-9584:


 Summary: NPE in distcp when ssl configuration file does not exist 
in class path.
 Key: HDFS-9584
 URL: https://issues.apache.org/jira/browse/HDFS-9584
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: distcp
Affects Versions: 2.7.1
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


{noformat}./hadoop distcp -mapredSslConf ssl-distcp.xml 
hftp://x.x.x.x:25003/history hdfs://x.x.x.X:25008/history{noformat}

If the {{ssl-distcp.xml}} file does not exist in the class path, distcp will 
throw a NullPointerException.

{code}
java.lang.NullPointerException
at org.apache.hadoop.tools.DistCp.setupSSLConfig(DistCp.java:266)
at org.apache.hadoop.tools.DistCp.createJob(DistCp.java:250)
at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:175)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:154)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:127)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:431)
{code}
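
The NPE above comes from dereferencing a null resource URL: 
{{Configuration#getResource}} returns null when the named file is not on the 
classpath. As an illustration only (the class and method names below are 
hypothetical, not the committed fix), a minimal fail-fast guard sketch could 
look like this:

{code}
import java.io.FileNotFoundException;
import java.net.URL;

import org.apache.hadoop.conf.Configuration;

// Hypothetical guard sketch, not the actual DistCp patch: report a clear
// error when the SSL configuration file is missing from the classpath
// instead of letting setupSSLConfig dereference a null URL.
public class SslConfGuard {
  public static URL requireSslConf(Configuration conf, String sslConfFileName)
      throws FileNotFoundException {
    // Configuration#getResource returns null if the name is not found on
    // the classpath; that null is what the stack trace above dereferences.
    URL sslConfUrl = conf.getResource(sslConfFileName);
    if (sslConfUrl == null) {
      throw new FileNotFoundException("SSL configuration file "
          + sslConfFileName + " not found on the classpath");
    }
    return sslConfUrl;
  }
}
{code}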




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-21 Thread Akira AJISAKA

+1 (binding)

- Downloaded source tarball
- Verified signatures and checksums
- Compiled and built a single node cluster
- Confirmed commons-collections is upgraded to 3.2.2
- Compiled Hive 1.2.1 and Tez 0.7.0/0.8.1-alpha using Hadoop 2.7.2 pom 
successfully

- Ran some Hive on Tez/MRv2 queries successfully

On 12/19/15 02:24, Jason Lowe wrote:

+1 (binding)
- Verified signatures and digests
- Spot checked CHANGES.txt files
- Successfully performed a native build from source
- Deployed to a single node cluster and ran sample jobs

We have been running with the fix for YARN-4354 on two of our clusters for some 
time with no issues, so I feel confident that prior blocker is now fixed.

Jason


From: Vinod Kumar Vavilapalli
To: Hadoop Common; hdfs-dev@hadoop.apache.org; yarn-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org
Cc: Vinod Kumar Vavilapalli
Sent: Wednesday, December 16, 2015 8:49 PM
Subject: [VOTE] Release Apache Hadoop 2.7.2 RC1

Hi all,

I've created a release candidate RC1 for Apache Hadoop 2.7.2.

As discussed before, this is the next maintenance release to follow up 2.7.1.

The RC is available for validation at: 
http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ 


The RC tag in git is: release-2.7.2-RC1

The maven artifacts are available via repository.apache.org 
 at 
https://repository.apache.org/content/repositories/orgapachehadoop-1026/ 


The release-notes are inside the tar-balls at location 
hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I hosted 
this at http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html 
for quick perusal.

As you may have noted,
  - The RC0 related voting thread got halted due to some critical issues. It 
took a while again for getting all those blockers out of the way. See the 
previous voting thread [3] for details.
  - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by quite a 
bit. This release's related discussion threads are linked below: [1] and [2].

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

[1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes 

[2]: Planning Apache Hadoop 2.7.2 http://markmail.org/message/iktqss2qdeykgpqk 

[3]: [VOTE] Release Apache Hadoop 2.7.2 RC0: 
http://markmail.org/message/5txhvr2qdiqglrwc








Hadoop-Hdfs-trunk - Build # 2650 - Still Failing

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2650/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 110 lines...]
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned 
status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Archiving artifacts
Recording test results
ERROR: Publisher 'Publish JUnit test result report' failed: Test reports were 
found but none of them are new. Did tests run? 
For example, 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports/TEST-org.apache.hadoop.fs.TestUrlStreamHandlerFactory.xml
 is 14 hr old

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #2650

2015-12-21 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H9 (Mapreduce Falcon Hadoop Pig Zookeeper Tez Hdfs) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned 
status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at 

[jira] [Created] (HDFS-9585) Erasure Coding: Wrong limit setting of target ByteBuffer

2015-12-21 Thread Kai Sasaki (JIRA)
Kai Sasaki created HDFS-9585:


 Summary: Erasure Coding: Wrong limit setting of target ByteBuffer
 Key: HDFS-9585
 URL: https://issues.apache.org/jira/browse/HDFS-9585
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Sasaki
Assignee: Kai Sasaki


In ErasureCodingWorker, the target ByteBuffer should be limited by the desired 
recovered size. But due to wrong indexing, the output ByteBuffer is not limited 
correctly. 
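
As an illustration of the bug class only (the buffer count, sizes, and names 
below are hypothetical, not taken from ErasureCodingWorker), each target 
buffer must be limited through the same index that selects it; limiting via a 
wrong index leaves some outputs with their full, stale limit:

{code}
import java.nio.ByteBuffer;

// Hypothetical sketch of correct per-target limiting; not the actual
// ErasureCodingWorker code.
public class LimitSketch {
  public static void main(String[] args) {
    int toRecoverLen = 128;                  // hypothetical recovered size
    ByteBuffer[] targets = new ByteBuffer[3];
    for (int i = 0; i < targets.length; i++) {
      targets[i] = ByteBuffer.allocate(1024);
      // Correct: limit the i-th buffer itself. The bug described above is
      // of the form targets[wrongIndex].limit(...), which leaves the other
      // buffers at their full 1024-byte limit.
      targets[i].limit(toRecoverLen);
    }
    System.out.println(targets[2].limit());  // prints 128
  }
}
{code}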



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk-Java8 - Build # 717 - Still Failing

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/717/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 10111 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:40 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:01 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.060 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:06 h
[INFO] Finished at: 2015-12-21T16:55:38+00:00
[INFO] Final Memory: 56M/676M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode

Error Message:
test timed out after 10 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 10 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.server.balancer.Dispatcher.waitForMoveCompletion(Dispatcher.java:1122)
at 
org.apache.hadoop.hdfs.server.balancer.Dispatcher.dispatchBlockMoves(Dispatcher.java:1096)
at 
org.apache.hadoop.hdfs.server.balancer.Dispatcher.dispatchAndCheckContinue(Dispatcher.java:1060)
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.runOneIteration(Balancer.java:612)
at 
org.apache.hadoop.hdfs.server.balancer.Balancer.run(Balancer.java:666)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:1008)


FAILED:  
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-0
  target length: 50
  current item: 37
  done: false
] expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! 
Current writer state:[Circular Writer:
 directory: /test-0
 

Hadoop-Hdfs-trunk - Build # 2654 - Still Failing

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2654/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 110 lines...]
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned 
status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Archiving artifacts
Recording test results
ERROR: Publisher 'Publish JUnit test result report' failed: Test reports were 
found but none of them are new. Did tests run? 
For example, 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports/TEST-org.apache.hadoop.fs.TestUrlStreamHandlerFactory.xml
 is 18 hr old

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #2654

2015-12-21 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H9 (Mapreduce Falcon Hadoop Pig Zookeeper Tez Hdfs) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned 
status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #717

2015-12-21 Thread Apache Jenkins Server
See 

Changes:

[aajisaka] HDFS-9505. HDFS Architecture documentation needs to be refreshed.

--
[...truncated 9918 lines...]
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.385 sec - in org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.229 sec - in org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.348 sec - in org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.385 sec - in org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.867 sec - in org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 44.188 sec - in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestErasureCodingPolicies
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.943 sec - in org.apache.hadoop.hdfs.TestErasureCodingPolicies
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.561 sec - in org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.06 sec - in org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.631 sec - in org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.17 sec - in org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.272 sec - in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 68.31 sec - in org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.994 sec - in org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFsShellPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.825 sec - in org.apache.hadoop.hdfs.TestFsShellPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.267 sec - in org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; support was removed in 8.0
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.282 sec - in org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Java HotSpot(TM) 64-Bit 

Hadoop-Hdfs-trunk - Build # 2653 - Still Failing

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2653/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 110 lines...]
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Archiving artifacts
Recording test results
ERROR: Publisher 'Publish JUnit test result report' failed: Test reports were found but none of them are new. Did tests run?
For example, /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports/TEST-org.apache.hadoop.fs.TestUrlStreamHandlerFactory.xml is 17 hr old

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
No tests ran.
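
Every failure above shares one root cause: the git client exits with status 4 because it cannot write .git/config.lock inside the slave's workspace, so the checkout, and with it the whole build, aborts before a single test runs. Below is a minimal manual-cleanup sketch for the slave; it assumes the lock file is stale (left behind by a killed git process rather than held by a live one) and reuses the workspace path from the publisher error above, both of which are assumptions rather than facts established by this log:

  # Hypothetical cleanup on slave H9; staleness of the lock is assumed.
  cd /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk
  fuser .git/config.lock || rm -f .git/config.lock  # fuser fails when no process holds the file, so rm runs only then
  git config remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git  # should now succeed

If the lock keeps reappearing, the likelier culprit is two jobs sharing one workspace, and the fix belongs in the Jenkins job configuration rather than on the filesystem.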

Build failed in Jenkins: Hadoop-Hdfs-trunk #2653

2015-12-21 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H9 (Mapreduce Falcon Hadoop Pig Zookeeper Tez Hdfs) in workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at 

Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-21 Thread Kihwal Lee
+1 (binding)
- Verified the signature/digest (a sketch of these checks follows below).
- I've built the dist tree with the native support from the source.
- Brought up a single node cluster and ran a set of basic tests.
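
A minimal sketch of the signature/digest verification mentioned in the first bullet, assuming the tarball plus its detached .asc signature and .mds digest file were downloaded from the staging URL and the Hadoop KEYS file is at hand; the file names below are illustrative, not the exact artifact names:

  # Illustrative names; substitute the artifacts actually downloaded.
  gpg --import KEYS                                          # import the release signing keys
  gpg --verify hadoop-2.7.2.tar.gz.asc hadoop-2.7.2.tar.gz   # expect a good signature
  openssl md5 hadoop-2.7.2.tar.gz                            # compare against the digest listed
  cat hadoop-2.7.2.tar.gz.mds                                # in the .mds file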

 


Hadoop-Hdfs-trunk - Build # 2655 - Still Failing

2015-12-21 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2655/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 110 lines...]
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Archiving artifacts
Recording test results
ERROR: Publisher 'Publish JUnit test result report' failed: Test reports were found but none of them are new. Did tests run?
For example, /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs-client/target/surefire-reports/TEST-org.apache.hadoop.fs.TestUrlStreamHandlerFactory.xml is 19 hr old

Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
No tests ran.
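
The publisher error above ("Test reports were found but none of them are new") is Jenkins observing that every surefire XML in the workspace predates the current run, which is consistent with the checkout failing before Maven ever started. One quick way to confirm that on the slave, assuming shell access and reusing the workspace path quoted in the error:

  # Surefire reports modified within the last hour; an empty listing means
  # this run produced no new results (path taken from the error above).
  find /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk \
       -name 'TEST-*.xml' -mmin -60 | head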

Build failed in Jenkins: Hadoop-Hdfs-trunk #2655

2015-12-21 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H9 (Mapreduce Falcon Hadoop Pig Zookeeper Tez Hdfs) in workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:531)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git config remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file .git/config.lock

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1640)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1616)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1612)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1254)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1266)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:972)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor354.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:608)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:583)
at hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:542)
at hudson.remoting.UserRequest.perform(UserRequest.java:121)
at hudson.remoting.UserRequest.perform(UserRequest.java:49)
at hudson.remoting.Request$2.run(Request.java:326)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to H9(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1413)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:221)
at hudson.remoting.Channel.call(Channel.java:778)
at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:250)
at com.sun.proxy.$Proxy131.setRemoteUrl(Unknown Source)
at org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:298)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:751)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/hadoop.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://git-wip-us.apache.org/repos/asf/hadoop.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:763)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1012)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1043)
at hudson.scm.SCM.checkout(SCM.java:484)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1274)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:609)
at 