[jira] [Resolved] (HDFS-8902) Uses ByteBuffer on heap or direct ByteBuffer according to used erasure coder in striping read (position and stateful)

2015-12-04 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng resolved HDFS-8902.
-
Resolution: Duplicate

> Uses ByteBuffer on heap or direct ByteBuffer according to used erasure coder 
> in striping read (position and stateful)
> -
>
> Key: HDFS-8902
> URL: https://issues.apache.org/jira/browse/HDFS-8902
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> We would choose an on-heap ByteBuffer or a direct ByteBuffer according to the 
> erasure coder used in striping reads (positional and stateful), for performance 
> reasons. A pure-Java coder favors on-heap buffers, while a native coder 
> prefers direct buffers, avoiding data copies.
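A minimal sketch of the buffer choice, assuming a hypothetical capability flag 
on the coder (this is not the actual HDFS API):

{code}
import java.nio.ByteBuffer;

// Hypothetical sketch: 'coderPrefersDirect' stands in for whatever capability
// flag the erasure coder exposes; not the actual HDFS API.
public final class StripingBufferChoice {
  static ByteBuffer allocate(boolean coderPrefersDirect, int size) {
    // A native coder operates on direct buffers and would otherwise force a
    // copy off the heap; a pure-Java coder is faster on heap arrays.
    return coderPrefersDirect
        ? ByteBuffer.allocateDirect(size)
        : ByteBuffer.allocate(size);
  }
}
{code}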



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8904) Uses ByteBuffer on heap or direct ByteBuffer according to used erasure coder in striping recovery on DataNode side

2015-12-04 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng resolved HDFS-8904.
-
Resolution: Duplicate

> Uses ByteBuffer on heap or direct ByteBuffer according to used erasure coder 
> in striping recovery on DataNode side
> --
>
> Key: HDFS-8904
> URL: https://issues.apache.org/jira/browse/HDFS-8904
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> We would choose an on-heap ByteBuffer or a direct ByteBuffer according to the 
> erasure coder used in striping recovery on the DataNode side, similar to the 
> work done on the client side, for performance reasons.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8903) Uses ByteBuffer on heap or direct ByteBuffer according to used erasure coder in striping write

2015-12-04 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng resolved HDFS-8903.
-
Resolution: Duplicate

> Uses ByteBuffer on heap or direct ByteBuffer according to used erasure coder 
> in striping write
> --
>
> Key: HDFS-8903
> URL: https://issues.apache.org/jira/browse/HDFS-8903
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>
> We would choose an on-heap ByteBuffer or a direct ByteBuffer according to the 
> erasure coder used in striping writes, for performance reasons. A pure-Java 
> coder favors on-heap buffers, while a native coder prefers direct buffers, 
> avoiding data copies.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9127) Re-replication for files with enough replicas in single rack

2015-12-04 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HDFS-9127.
-
Resolution: Invalid

This problem doesn't exist in the latest code.
Feel free to re-open if it is found again.

> Re-replication for files with enough replicas in single rack
> 
>
> Key: HDFS-9127
> URL: https://issues.apache.org/jira/browse/HDFS-9127
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>
> Found while debugging test cases in HDFS-8647
>  *Scenario:* 
> ===
> Start a cluster with a single rack with three DNs
> Write a file with RF=3
> Add two nodes in different racks
> As per the block placement policy ([Rack 
> Awareness|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/RackAwareness.html]),
>  at least one replica should be replicated to the newly added rack. But it is 
> not happening, for the following reason:
> {color:blue}
> When the cluster was a single rack, a block would be removed from 
> {{neededReplications}} once it had 3 replicas.
> Later, after adding the new rack, replication only happens for blocks that are 
> present in {{neededReplications}}.
> So for blocks which already have enough replicas, replication to the new rack 
> will not take place.
> {color}
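A simplified illustration of the described behavior (hypothetical names; this 
is not the actual BlockManager code):

{code}
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the reported behavior, not HDFS internals.
class ReplicationQueueSketch {
  private final Set<String> neededReplications = new HashSet<>();

  void onReplicaCountChanged(String blockId, int replicas, int rf) {
    if (replicas < rf) {
      neededReplications.add(blockId);    // under-replicated: queue for work
    } else {
      neededReplications.remove(blockId); // enough replicas: drop from queue
    }
  }

  void onRackAdded() {
    // Nothing re-enqueues already-satisfied blocks here, so a block with RF
    // replicas in a single rack is never reconsidered for cross-rack placement.
  }
}
{code}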



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Hadoop-Hdfs-trunk #2600

2015-12-04 Thread Apache Jenkins Server
See 



[jira] [Resolved] (HDFS-4488) Confusing WebHDFS exception when host doesn't resolve

2015-12-04 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HDFS-4488.
---
  Resolution: Cannot Reproduce
Target Version/s: 2.1.0-beta, 3.0.0  (was: 3.0.0, 2.1.0-beta)

{code}
$hadoop fs -ls webhdfs://unresolvable-host/
15/12/04 11:48:33 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
-ls: java.net.UnknownHostException: unresolvable-host
...
$echo $?
255
{code}
The message is already fixed.  Resolving as Cannot Reproduce.

> Confusing WebHDFS exception when host doesn't resolve
> -
>
> Key: HDFS-4488
> URL: https://issues.apache.org/jira/browse/HDFS-4488
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 0.23.0
>Reporter: Daryn Sharp
>
> {noformat}
> $ hadoop fs -ls webhdfs://unresolvable-host/
> ls: unresolvable-host
> $ echo $?
> 1
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-2593) Rename webhdfs HTTP param 'delegation' to 'delegationtoken'

2015-12-04 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HDFS-2593.
---
Resolution: Not A Problem

Resolving this stale issue as Not A Problem.

> Rename webhdfs HTTP param 'delegation' to 'delegationtoken'
> ---
>
> Key: HDFS-2593
> URL: https://issues.apache.org/jira/browse/HDFS-2593
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Affects Versions: 0.23.1, 1.0.0, 2.0.0-alpha
>Reporter: Alejandro Abdelnur
>
> To be consistent with other param names and to be clearer for users about 
> what it is.
> The webhdfs spec doc should be updated as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9508) Fix NPE in MiniKMS.start()

2015-12-04 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-9508.
---
Resolution: Invalid

This should be filed under Hadoop Common.

> Fix NPE in MiniKMS.start()
> --
>
> Key: HDFS-9508
> URL: https://issues.apache.org/jira/browse/HDFS-9508
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: test
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
>
> Sometimes the KMS resource file cannot be loaded. When this happens, an 
> InputStream variable is left null, which subsequently throws an NPE.
> This is a supportability JIRA that makes the error message more explicit and 
> explains why the NPE is thrown, ultimately helping us understand why the 
> resource files cannot be loaded.
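A minimal sketch of the kind of check described (illustrative names; not the 
actual MiniKMS code):

{code}
import java.io.IOException;
import java.io.InputStream;

// Illustrative only: fail with an explicit message instead of a bare NPE
// when a classpath resource is missing.
public class KmsResourceCheck {
  static InputStream openResource(String name) throws IOException {
    InputStream in =
        KmsResourceCheck.class.getClassLoader().getResourceAsStream(name);
    if (in == null) {
      throw new IOException("Could not load resource '" + name
          + "'; is it on the test classpath?");
    }
    return in;
  }
}
{code}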



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk #2601

2015-12-04 Thread Apache Jenkins Server
See 

Changes:

[mingma] HDFS-9430 Remove waitForLoadingFSImage since checkNNStartup has ensured

[lei] HDFS-9490. MiniDFSCluster should change block generation stamp via

[xyao] HDFS-8831. Trash Support for deletion in HDFS encryption zone.

--
[...truncated 6201 lines...]
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.099 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.285 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.107 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.694 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.348 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.389 sec - in 
org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.535 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.897 sec - in 
org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.54 sec - in 
org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 10.698 sec - in 
org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.176 sec - in 
org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.965 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.851 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestFileStatusWithECPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.809 sec - in 
org.apache.hadoop.hdfs.TestFileStatusWithECPolicy
Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.268 sec - in 
org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.703 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecovery2
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGBUS (0x7) at pc=0x7f7048fdb8a0, pid=20738, tid=140119899072256
#
# JRE version: Java(TM) SE Runtime Environment (7.0_55-b13) (build 1.7.0_55-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.55-b03 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# C  [libc.so.6+0x1518a0]  __nss_hosts_lookup+0x1a620
#
# Failed to write core dump. Core dumps have been disabled. To enable core 
dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# 

#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Aborted
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.357 sec - in 
org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.518 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.787 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.184 sec - in 

Hadoop-Hdfs-trunk-Java8 - Build # 665 - Failure

2015-12-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/665/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6350 lines...]
at hudson.FilePath.exists(FilePath.java:1435)
at hudson.tools.JDKInstaller.performInstallation(JDKInstaller.java:127)
at 
hudson.tools.InstallerTranslator.getToolHome(InstallerTranslator.java:68)
at 
hudson.tools.ToolLocationNodeProperty.getToolHome(ToolLocationNodeProperty.java:107)
at hudson.tools.ToolInstallation.translateFor(ToolInstallation.java:205)
at hudson.model.JDK.forNode(JDK.java:130)
at hudson.model.AbstractProject.getEnvironment(AbstractProject.java:355)
at hudson.model.Run.getEnvironment(Run.java:2228)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:932)
at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:215)
at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:74)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:776)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:723)
at hudson.model.Build$BuildExecution.post2(Build.java:183)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:670)
at hudson.model.Run.execute(Run.java:1763)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.remoting.ChannelClosedException: channel is already closed
at hudson.remoting.Channel.send(Channel.java:575)
at hudson.remoting.Request.call(Request.java:130)
at hudson.remoting.Channel.call(Channel.java:777)
at hudson.FilePath.act(FilePath.java:978)
... 21 more
Caused by: java.io.IOException: Unexpected termination of the channel
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
Caused by: java.io.EOFException
at 
java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2325)
at 
java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2794)
at 
java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:801)
at java.io.ObjectInputStream.(ObjectInputStream.java:299)
at 
hudson.remoting.ObjectInputStreamEx.(ObjectInputStreamEx.java:40)
at 
hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
ERROR: Publisher 'Publish JUnit test result report' failed: no workspace for 
Hadoop-Hdfs-trunk-Java8 #665
ERROR: Build step failed with exception
java.lang.NullPointerException
at 
hudson.plugins.violations.ViolationsPublisher.perform(ViolationsPublisher.java:74)
at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:45)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:776)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:723)
at hudson.model.Build$BuildExecution.post2(Build.java:183)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:670)
at hudson.model.Run.execute(Run.java:1763)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Build step 'Report Violations' marked build as failure
Updating HDFS-9430
Updating HDFS-9490
Updating HDFS-8831
ERROR: Publisher 'E-mail Notification' failed: no workspace for 
Hadoop-Hdfs-trunk-Java8 #665
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
ERROR: H2 is offline; cannot locate jdk-1.8.0
ERROR: H2 is offline; cannot locate jdk-1.8.0




###
## FAILED TESTS (if any) 
##
No tests ran.

[jira] [Created] (HDFS-9508) Fix NPE in MiniKMS.start()

2015-12-04 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9508:
-

 Summary: Fix NPE in MiniKMS.start()
 Key: HDFS-9508
 URL: https://issues.apache.org/jira/browse/HDFS-9508
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


Sometimes the KMS resource file cannot be loaded. When this happens, an 
InputStream variable is left null, which subsequently throws an NPE.

This is a supportability JIRA that makes the error message more explicit and 
explains why the NPE is thrown, ultimately helping us understand why the 
resource files cannot be loaded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk - Build # 2601 - Failure

2015-12-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2601/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6394 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [05:14 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:24 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.090 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:29 h
[INFO] Finished at: 2015-12-04T22:44:10+00:00
[INFO] Final Memory: 55M/688M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: 
org.apache.maven.surefire.booter.SurefireBooterForkException: Error occurred in 
starting fork, check output in log -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek.org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek

Error Message:
org/apache/hadoop/fs/LocalFileSystem

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/fs/LocalFileSystem
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at 
org.apache.hadoop.fs.contract.hdfs.HDFSContract.createCluster(HDFSContract.java:52)
at 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek.createCluster(TestHDFSContractSeek.java:36)


FAILED:  
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.org.apache.hadoop.hdfs.TestAppendSnapshotTruncate

Error Message:
org/apache/hadoop/util/IntrusiveCollection$IntrusiveIterator

Stack Trace:
java.lang.NoClassDefFoundError: 
org/apache/hadoop/util/IntrusiveCollection$IntrusiveIterator
at 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #666

2015-12-04 Thread Apache Jenkins Server
See 

Changes:

[cmccabe] HDFS-9267. TestDiskError should get stored replicas through

[yzhang] HDFS-9474. TestPipelinesFailover should not fail when printing debug

--
[...truncated 5626 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.082 sec - in 
org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.421 sec - in 
org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.146 sec - in 
org.apache.hadoop.hdfs.qjournal.server.TestJournalNode
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.qjournal.server.TestJournal
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.827 sec - in 
org.apache.hadoop.hdfs.qjournal.server.TestJournal
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestModTime
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.026 sec - in 
org.apache.hadoop.hdfs.TestModTime
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.496 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.523 sec - in 
org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.108 sec - in 
org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.security.TestDelegationToken
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.235 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationToken
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.203 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.117 sec - in 
org.apache.hadoop.hdfs.TestLocalDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.77 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.153 sec - in 
org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.412 sec - in 
org.apache.hadoop.hdfs.TestDFSInputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.476 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 10, Failures: 0, Errors: 0, 

Hadoop-Hdfs-trunk-Java8 - Build # 666 - Still Failing

2015-12-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/666/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5819 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [07:27 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:45 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.118 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:52 h
[INFO] Finished at: 2015-12-05T00:30:09+00:00
[INFO] Final Memory: 64M/1050M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: 
java.lang.RuntimeException: java.io.IOException: Stream Closed -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-9474
Updating HDFS-9267
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
All tests passed

[jira] [Created] (HDFS-9509) Add new metrics for measuring datanode storage statistics

2015-12-04 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-9509:
-

 Summary: Add new metrics for measuring datanode storage statistics
 Key: HDFS-9509
 URL: https://issues.apache.org/jira/browse/HDFS-9509
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Tsz Wo Nicholas Sze


We already have sendDataPacketBlockedOnNetworkNanos and 
sendDataPacketTransferNanos for the transferTo case.  We should add more 
metrics for the other cases.
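As a sketch of what such a metric could look like with the metrics2 library 
(the metric name below is made up for illustration; the existing datanode 
metrics follow this same pattern):

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Hypothetical sketch following the metrics2 pattern used by DataNodeMetrics.
@Metrics(name = "DataNodeStorageSketch", context = "dfs")
public class DataNodeStorageSketchMetrics {

  @Metric("Nanos spent in the non-transferTo send path") // hypothetical metric
  MutableRate sendDataPacketCopyNanos;

  static DataNodeStorageSketchMetrics create() {
    // Registering lets the metrics system instantiate the @Metric fields.
    return DefaultMetricsSystem.instance()
        .register(new DataNodeStorageSketchMetrics());
  }

  void addSendNanos(long nanos) {
    sendDataPacketCopyNanos.add(nanos); // record one timing sample
  }
}
{code}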



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9506) Move invalidate blocks to ReplicationManager

2015-12-04 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HDFS-9506.
-
Resolution: Duplicate

As we keep updating the patch for [HDFS-9442], this issue is addressed there. 
Closing this jira as _Duplicate_.

> Move invalidate blocks to ReplicationManager
> 
>
> Key: HDFS-9506
> URL: https://issues.apache.org/jira/browse/HDFS-9506
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: namenode
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>
> The [HDFS-9442] patch moves the basic replication mechanism from 
> {{BlockManager}} to the newly added {{ReplicationManager}}. After that we can 
> move more replication-related logic to {{ReplicationManager}} as well, e.g. 
> _invalidate blocks_ and _corrupt replicas_. The goal is, again, cleaner code 
> logic, well-organized source files, and easier lock-separation work in the 
> future.
> This jira is to track the effort of moving {{InvalidateBlocks}} to 
> {{ReplicationManager}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk - Build # 2602 - Still Failing

2015-12-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2602/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6422 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:55 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:42 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.137 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:46 h
[INFO] Finished at: 2015-12-05T03:08:12+00:00
[INFO] Final Memory: 56M/584M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
4 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpointWithMultipleNN

Error Message:
Test resulted in an unexpected exit

Stack Trace:
java.lang.AssertionError: Test resulted in an unexpected exit
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1895)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
at 
org.apache.hadoop.hdfs.qjournal.MiniQJMHACluster.shutdown(MiniQJMHACluster.java:161)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpoint(TestRollingUpgrade.java:602)
at 
org.apache.hadoop.hdfs.TestRollingUpgrade.testCheckpointWithMultipleNN(TestRollingUpgrade.java:566)


FAILED:  
org.apache.hadoop.hdfs.TestRollingUpgrade.testDFSAdminRollingUpgradeCommands

Error Message:
expected null, but 

Build failed in Jenkins: Hadoop-Hdfs-trunk #2602

2015-12-04 Thread Apache Jenkins Server
See 

Changes:

[cmccabe] HDFS-9267. TestDiskError should get stored replicas through

[yzhang] HDFS-9474. TestPipelinesFailover should not fail when printing debug

[arp] HDFS-9214. Support reconfiguring

--
[...truncated 6229 lines...]
Running org.apache.hadoop.hdfs.TestEncryptionZones
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.115 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZones
Running org.apache.hadoop.hdfs.TestSmallBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.195 sec - in 
org.apache.hadoop.hdfs.TestSmallBlock
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecovery2
Running org.apache.hadoop.hdfs.TestDFSMkdirs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.382 sec - in 
org.apache.hadoop.hdfs.TestDFSMkdirs
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.049 sec - in 
org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.093 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.994 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Running org.apache.hadoop.hdfs.TestWriteReadStripedFile
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.695 sec - 
in org.apache.hadoop.hdfs.TestWriteReadStripedFile
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.13 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.462 sec - in 
org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.12 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
Running org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.149 sec - in 
org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.705 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.974 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 67.839 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.152 sec - in 
org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.807 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.938 sec - in 
org.apache.hadoop.hdfs.TestFileAppend4
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.928 sec - in 
org.apache.hadoop.hdfs.TestParallelRead
Running org.apache.hadoop.hdfs.TestClose
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.568 sec - in 
org.apache.hadoop.hdfs.TestClose
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.327 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.536 sec - in 
org.apache.hadoop.hdfs.TestDFSAddressConfig
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.317 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.742 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.518 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.508 

[jira] [Created] (HDFS-9510) FsVolume should add the operation of creating file's time metrics

2015-12-04 Thread Lin Yiqun (JIRA)
Lin Yiqun created HDFS-9510:
---

 Summary: FsVolume should add the operation of creating file's time 
metrics
 Key: HDFS-9510
 URL: https://issues.apache.org/jira/browse/HDFS-9510
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: benchmarks, fs
Affects Versions: 2.7.1
Reporter: Lin Yiqun
Assignee: Lin Yiqun


A datanode may have more than one data directory, and each data directory 
corresponds to an FsVolume. Sometimes one of these directories creates files 
or dirs slowly because of hardware problems, and this can affect the whole 
node. So we need to monitor these slow-writing disks.
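A sketch of what timing per-volume file creation could look like (hypothetical 
names; a real change would hook into the FsVolume implementation):

{code}
import java.io.File;
import java.io.IOException;

// Hypothetical sketch: time file creation per data directory so that a
// slow-writing disk stands out in the metrics.
public class VolumeCreateTimer {
  static long timedCreateNanos(File volumeDir, String name) throws IOException {
    long start = System.nanoTime();
    File f = new File(volumeDir, name);
    if (!f.createNewFile()) {
      throw new IOException("Failed to create " + f);
    }
    return System.nanoTime() - start; // feed into a per-volume rate metric
  }
}
{code}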



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk - Build # 2603 - Still Failing

2015-12-04 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2603/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6726 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:52 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:53 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.109 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:57 h
[INFO] Finished at: 2015-12-05T07:35:57+00:00
[INFO] Final Memory: 56M/558M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
14 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting.testLeaseExpiration

Error Message:
org/apache/hadoop/util/IntrusiveCollection$IntrusiveIterator

Stack Trace:
java.lang.NoClassDefFoundError: 
org/apache/hadoop/util/IntrusiveCollection$IntrusiveIterator
at 
org.apache.hadoop.util.IntrusiveCollection.iterator(IntrusiveCollection.java:213)
at 
org.apache.hadoop.util.IntrusiveCollection.clear(IntrusiveCollection.java:368)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.clearPendingCachingCommands(DatanodeManager.java:1577)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1202)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.close(FSNamesystem.java:1548)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.stopCommonServices(NameNode.java:773)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:952)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1913)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting.testLeaseExpiration(TestBlockReportRateLimiting.java:214)


FAILED:  

Build failed in Jenkins: Hadoop-Hdfs-trunk #2603

2015-12-04 Thread Apache Jenkins Server
See 

Changes:

[arp] HDFS-9214. Add missing license header

[lei] HDFS-9491. Tests should get the number of pending async deletes via

--
[...truncated 6533 lines...]
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.635 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.219 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Running org.apache.hadoop.hdfs.TestWriteReadStripedFile
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 155.553 sec - 
in org.apache.hadoop.hdfs.TestWriteReadStripedFile
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.036 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.542 sec - in 
org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.322 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050
Running org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.216 sec - in 
org.apache.hadoop.hdfs.TestDFSShellGenericOptions
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.707 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.157 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.951 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStream
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.579 sec - in 
org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.18 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.136 sec - in 
org.apache.hadoop.hdfs.TestFileAppend4
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.748 sec - in 
org.apache.hadoop.hdfs.TestParallelRead
Running org.apache.hadoop.hdfs.TestClose
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.671 sec - in 
org.apache.hadoop.hdfs.TestClose
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.328 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.273 sec - in 
org.apache.hadoop.hdfs.TestDFSAddressConfig
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.586 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.489 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.534 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.892 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.992 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.796 sec - in 
org.apache.hadoop.hdfs.TestWriteRead
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.354 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.058 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.994 

[jira] [Created] (HDFS-9507) LeaseRenewer Logging Under-Reporting

2015-12-04 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-9507:
-

 Summary: LeaseRenewer Logging Under-Reporting
 Key: HDFS-9507
 URL: https://issues.apache.org/jira/browse/HDFS-9507
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.7.1
Reporter: BELUGA BEHR
Priority: Minor


Why is it that in LeaseRenewer#run() a failure to renew a lease on a file is 
reported with "warn" level logging, but in LeaseRenewer#renew() it is reported 
with a "debug" level message?

In LeaseRenewer#renew(), if the method renewLease() returns 'false', the 
problem is silently discarded (continue; no exception is thrown) and the next 
client in the list tries to renew.

{code:title=LeaseRenewer.java|borderStyle=solid}
private void run(final int id) throws InterruptedException {
  ...
  try {
renew();
lastRenewed = Time.monotonicNow();
  } catch (SocketTimeoutException ie) {
LOG.warn("Failed to renew lease for " + clientsString() + " for "
+ (elapsed/1000) + " seconds.  Aborting ...", ie);
synchronized (this) {
  while (!dfsclients.isEmpty()) {
DFSClient dfsClient = dfsclients.get(0);
dfsClient.closeAllFilesBeingWritten(true);
closeClient(dfsClient);
  }
  //Expire the current LeaseRenewer thread.
  emptyTime = 0;
}
break;
  } catch (IOException ie) {
LOG.warn("Failed to renew lease for " + clientsString() + " for "
  + (elapsed/1000) + " seconds.  Will retry shortly ...", ie);
  }
}
...
}


private void renew() throws IOException {
  ...
  if (!c.renewLease()) {
    LOG.debug("Did not renew lease for client {}", c);
    continue;
  }
  ...
}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)