[jira] [Resolved] (HDFS-2031) request HDFS test-patch to support coordinated change in COMMON jar, for post-patch build only

2015-05-01 Thread Allen Wittenauer (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-2031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HDFS-2031.

Resolution: Fixed

This has effectively been fixed.

> request HDFS test-patch to support coordinated change in COMMON jar, for 
> post-patch build only
> --
>
> Key: HDFS-2031
> URL: https://issues.apache.org/jira/browse/HDFS-2031
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: build
>Reporter: Matt Foley
>
> For dev testing, we need to test an HDFS patch that depends on a modified 
> COMMON jar.
> For casual testing, one can build in COMMON with "ant mvn-install", then 
> build in HDFS with "ant -Dresolvers=internal", and the modified COMMON jar 
> from the local maven cache (~/.m2/) will be used in the HDFS build.  This 
> works fine.
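> A minimal sketch of that casual workflow (assuming each command is run from 
> the corresponding source tree):
> {code}
> # in the COMMON source tree: install the modified jar into the local
> # maven cache (~/.m2/)
> ant mvn-install
> # in the HDFS source tree: resolve COMMON from the local maven cache
> ant -Dresolvers=internal
> {code}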
> However, running test-patch locally should build:
> * pre-patch: build unmodified HDFS with reference to generic Apache COMMON 
> jar (because the modified COMMON jar may be incompatible with the unmodified 
> HDFS)
> * post-patch:  build modified HDFS with reference to custom local COMMON jar
> Currently, each developer has their favorite way to hack build.xml to make 
> this work.  It would be nice if an ant build switch were available for this 
> use case.  It seems to me the easiest way to accommodate it would be to make 
> "-Dresolvers=internal" effective only for the post-patch build of 
> test-patch, and let the pre-patch build use the generic Apache jar.
> Of course the same thing applies to MAPREDUCE test-patch when dependent on 
> modified COMMON and/or HDFS jars.





[jira] [Created] (HDFS-8312) Trash does not descend into child directories to check for permissions

2015-05-01 Thread Eric Yang (JIRA)
Eric Yang created HDFS-8312:
---

 Summary: Trash does not descend into child directories to check 
for permissions
 Key: HDFS-8312
 URL: https://issues.apache.org/jira/browse/HDFS-8312
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS, security
Affects Versions: 2.6.0, 2.2.0
Reporter: Eric Yang


HDFS trash does not descend into child directories to check whether the user 
has permission to delete the files inside.  For example:

Run the following command to initialize directory structure as super user:
{code}
hadoop fs -mkdir /BSS/level1
hadoop fs -mkdir /BSS/level1/level2
hadoop fs -mkdir /BSS/level1/level2/level3
hadoop fs -put /tmp/appConfig.json /BSS/level1/level2/level3/testfile.txt
hadoop fs -chown user1:users /BSS/level1/level2/level3/testfile.txt
hadoop fs -chown -R user1:users /BSS/level1
hadoop fs -chmod -R 750 /BSS/level1
hadoop fs -chmod -R 640 /BSS/level1/level2/level3/testfile.txt
hadoop fs -chmod 775 /BSS
{code}

Change to a normal user called user2. 

When trash is enabled:
{code}
sudo su - user2
hadoop fs -rm -r /BSS/level1
15/05/01 16:51:20 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
Deletion interval = 3600 minutes, Emptier interval = 0 minutes.
Moved: 'hdfs://bdvs323.svl.ibm.com:9000/BSS/level1' to trash at: 
hdfs://bdvs323.svl.ibm.com:9000/user/user2/.Trash/Current
{code}

When trash is disabled:
{code}
/opt/ibm/biginsights/IHC/bin/hadoop fs -Dfs.trash.interval=0 -rm -r /BSS/level1
15/05/01 16:58:31 INFO fs.TrashPolicyDefault: Namenode trash configuration: 
Deletion interval = 0 minutes, Emptier interval = 0 minutes.
rm: Permission denied: user=user2, access=ALL, 
inode="/BSS/level1":user1:users:drwxr-x---
{code}

There is an inconsistency between trash behavior and delete behavior.  When trash 
is enabled, files owned by user1 are deleted by user2.  It looks like trash does 
not recursively validate whether the files in child directories can be removed.
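
A hedged sketch of the kind of check trash could perform before moving a tree 
(a hypothetical helper, not the actual Hadoop code; assumes the 
FileSystem#access API available since Hadoop 2.6):

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;

/**
 * Walks the tree under 'root' and throws AccessControlException if the
 * caller could not delete some entry, mirroring what a plain "rm -r"
 * enforces.  Removing a directory's children requires WRITE and EXECUTE
 * on that directory.
 */
static void checkRecursiveDelete(FileSystem fs, Path root) throws IOException {
  FileStatus status = fs.getFileStatus(root);
  if (status.isDirectory()) {
    fs.access(root, FsAction.WRITE_EXECUTE);
    for (FileStatus child : fs.listStatus(root)) {
      checkRecursiveDelete(fs, child.getPath());
    }
  }
}
{code}

With such a check, moving /BSS/level1 to trash as user2 would fail on the 
level2 directory (mode 750, owned by user1), matching the trash-disabled 
behavior.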





[jira] [Created] (HDFS-8311) DataStreamer.transfer() should timeout the socket InputStream.

2015-05-01 Thread Esteban Gutierrez (JIRA)
Esteban Gutierrez created HDFS-8311:
---

 Summary: DataStreamer.transfer() should timeout the socket 
InputStream.
 Key: HDFS-8311
 URL: https://issues.apache.org/jira/browse/HDFS-8311
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Esteban Gutierrez


While validating some HA failure modes we found that HDFS clients can take a 
long time to recover, or sometimes don't recover at all, since we don't set up 
the socket timeout on the InputStream:

{code}
private void transfer(...) {
  ...
  OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
  InputStream unbufIn = NetUtils.getInputStream(sock);
  ...
}
{code}

The InputStream should have its own timeout in the same way as the OutputStream.
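
A hedged sketch of the proposed change ({{readTimeout}} is a stand-in for 
whatever read-timeout value the client has configured; where it comes from is 
an assumption, not part of this report):

{code}
// Output side, unchanged:
OutputStream unbufOut = NetUtils.getOutputStream(sock, writeTimeout);
// Proposed: use the NetUtils.getInputStream(Socket, long) overload so a
// stalled datanode cannot block transfer() indefinitely on a read.
InputStream unbufIn = NetUtils.getInputStream(sock, readTimeout);
{code}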






[jira] [Created] (HDFS-8310) Fix TestCLI.testAll "help: help for find" on Windows

2015-05-01 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-8310:


 Summary: Fix TestCLI.testAll "help: help for find" on Windows
 Key: HDFS-8310
 URL: https://issues.apache.org/jira/browse/HDFS-8310
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Priority: Minor


The test uses <expected-output> with a single regex spanning multiple lines, 
which does not match on Windows as shown below (a guess at the cause, with a 
sketch, follows the log).

{code}
2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(155)) - 
---
2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(156)) - Test ID: [31]
2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(157)) -Test Description: [help: 
help for find]
2015-04-30 01:14:01,737 INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(158)) - 
2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(162)) -   Test Commands: [-help 
find]
2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(166)) - 
2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(173)) - 
2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(177)) -  Comparator: 
[RegexpAcrossOutputComparator]
2015-04-30 01:14:01,738 INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(179)) -  Comparision result:   [fail]
2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(181)) - Expected output:   
[-find <path> \.\.\. <expression> \.\.\. :
  Finds all files that match the specified expression and
  applies selected actions to them\. If no <path> is specified
  then defaults to the current working directory\. If no
  expression is specified then defaults to -print\.
  
  The following primary expressions are recognised:
-name pattern
-iname pattern
  Evaluates as true if the basename of the file matches the
  pattern using standard file system globbing\.
  If -iname is used then the match is case insensitive\.
  
-print
-print0
  Always evaluates to true. Causes the current pathname to be
  written to standard output followed by a newline. If the -print0
  expression is used then an ASCII NULL character is appended rather
  than a newline.
  
  The following operators are recognised:
expression -a expression
expression -and expression
expression expression
  Logical AND operator for joining two expressions\. Returns
  true if both child expressions return true\. Implied by the
  juxtaposition of two expressions and so does not need to be
  explicitly specified\. The second expression will not be
  applied if the first fails\.
]
2015-04-30 01:14:01,739 INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(183)) -   Actual output:   
[-find <path> ... <expression> ... :
  Finds all files that match the specified expression and
  applies selected actions to them. If no <path> is specified
  then defaults to the current working directory. If no
  expression is specified then defaults to -print.
  
  The following primary expressions are recognised:
-name pattern
-iname pattern
  Evaluates as true if the basename of the file matches the
  pattern using standard file system globbing.
  If -iname is used then the match is case insensitive.
  
-print
-print0
  Always evaluates to true. Causes the current pathname to be
  written to standard output followed by a newline. If the -print0
  expression is used then an ASCII NULL character is appended rather
  than a newline.
  
  The following operators are recognised:
expression -a expression
expression -and expression
expression expression
  Logical AND operator for joining two expressions. Returns
  true if both child expressions return true. Implied by the
  juxtaposition of two expressions and so does not need to be
  explicitly specified. The second expression will not be
  applied if the first fails.
]
{code} 
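
One plausible cause (an assumption; the log alone does not confirm it): on 
Windows the actual command output contains \r\n line endings, so a single 
regex that spans lines with \n never matches. A minimal sketch of a 
normalization that would make the comparison platform-neutral (illustrative 
names, not the actual TestCLI code):

{code}
import java.util.regex.Pattern;

// Strip carriage returns before matching so the same multi-line regex
// works against both Unix (\n) and Windows (\r\n) command output.
static boolean regexAcrossOutput(String expectedRegex, String actualOutput) {
  String normalized = actualOutput.replace("\r\n", "\n");
  return Pattern.compile(expectedRegex).matcher(normalized).find();
}
{code}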







[jira] [Created] (HDFS-8309) Skip unit test using DataNodeTestUtils#injectDataDirFailure() on Windows

2015-05-01 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-8309:


 Summary: Skip unit test using 
DataNodeTestUtils#injectDataDirFailure() on Windows
 Key: HDFS-8309
 URL: https://issues.apache.org/jira/browse/HDFS-8309
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: 2.7.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor


As [~cnauroth] noted in HDFS-7917 (quoted below), 
DataNodeTestUtils.injectDataDirFailure() won't work on Windows because the 
rename will fail due to open handles on the data node directory. This ticket 
is opened to skip these tests on Windows (a sketch of the skip follows the 
test list below). 

bq. Unfortunately, I just remembered that the rename isn't going to work on 
Windows. It typically doesn't allow you to rename a directory where there are 
open file handles anywhere in the sub-tree. We'd have to shutdown the DataNode 
before doing the rename and then start it up. By doing that, we'd be changing 
the meaning of the test from covering an online failure to covering a failure 
at DataNode startup, so I don't think we can make that change.

Below are the two test cases that need to be fixed:
# TestDataNodeVolumeFailure#testFailedVolumeBeingRemovedFromDataNode
# TestDataNodeHotSwapVolumes.testDirectlyReloadAfterCheckDiskError
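
A minimal sketch of the usual skip idiom in Hadoop tests (assuming the 
standard JUnit Assume guard; the exact form used in the eventual patch may 
differ):

{code}
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.util.Shell;
import org.junit.Test;

public class TestDataNodeVolumeFailure {
  @Test
  public void testFailedVolumeBeingRemovedFromDataNode() throws Exception {
    // injectDataDirFailure() renames the data dir, which fails on Windows
    // while the DataNode holds open handles under it, so skip there.
    assumeTrue(!Shell.WINDOWS);
    // ... original test body ...
  }
}
{code}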






[jira] [Resolved] (HDFS-7442) Optimization for decommission-in-progress check

2015-05-01 Thread Ming Ma (JIRA)

[ https://issues.apache.org/jira/browse/HDFS-7442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ming Ma resolved HDFS-7442.
---
Resolution: Duplicate

HDFS-7411 has addressed this issue.

> Optimization for decommission-in-progress check
> ---
>
> Key: HDFS-7442
> URL: https://issues.apache.org/jira/browse/HDFS-7442
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 2.6.0
>Reporter: Ming Ma
>
> 1. {{isReplicationInProgress}} currently rescans all blocks of a given node 
> each time the method is called; it becomes less efficient as more of the 
> node's blocks become fully replicated. Each scan holds the FS lock.
> 2. As discussed in HDFS-7374, if the node becomes dead during decommission, 
> it is useful if the dead node can be marked as decommissioned after all its 
> blocks are fully replicated. Currently there is no way to check the blocks of 
> dead decomm-in-progress nodes, given that the dead node is removed from the 
> blockmap.
> There are mitigations for these limitations. Set 
> dfs.namenode.decommission.nodes.per.interval to a small value to reduce the 
> duration of the lock. HDFS-7409 uses global FS state to tell if a dead node's 
> blocks are fully replicated.
> To address these scenarios, it would be useful to track the 
> decommission-in-progress blocks separately.





Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #171

2015-05-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/171/
Changes:

[wangda] YARN-3564. Fix 
TestContainerAllocation.testAMContainerAllocationWhenDNSUnavailable fails 
randomly. (Jian He via wangda)

[zjshen] YARN-3544. Got back AM logs link on the RM web UI for a completed app. 
Contributed by Xuan Gong.

[wheat9] HDFS-8200. Refactor FSDirStatAndListingOp. Contributed by Haohui Mai.

[Arun Suresh] HADOOP-11891. OsSecureRandom should lazily fill its reservoir 
(asuresh)

[aw] HADOOP-11866. increase readability and reliability of checkstyle, 
shellcheck, and whitespace reports (aw)

[wang] HDFS-8292. Move conditional in fmt_time from dfs-dust.js to status.html. 
Contributed by Charles Lamb.

[jing9] HDFS-8300. Fix unit test failures and findbugs warning caused by 
HDFS-8283. Contributed by Jing Zhao.

[vinodkv] YARN-2619. Added NodeManager support for disk io isolation through 
cgroups. Contributed by Varun Vasudev and Wei Yan.

--
[...truncated 4992 lines...]
[INFO] 
+ cd hadoop-hdfs-project
+ /home/jenkins/tools/maven/latest/bin/mvn clean verify checkstyle:checkstyle 
findbugs:findbugs -Drequire.test.libhadoop -Pdist -Pnative -Dtar -Pdocs -fae
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Apache Hadoop HDFS Client
[INFO] Apache Hadoop HDFS
[INFO] Apache Hadoop HttpFS
[INFO] Apache Hadoop HDFS BookKeeper Journal
[INFO] Apache Hadoop HDFS-NFS
[INFO] Apache Hadoop HDFS Project
[INFO] 
[INFO] Using the builder 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder
 with a thread count of 1
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS Client 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-client ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-client 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-hdfs-client ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
hadoop-hdfs-client ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 86 source files to 

[WARNING] : uses or overrides a deprecated API.
[WARNING] : Recompile with -Xlint:deprecation for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-hdfs-client ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-hdfs-client ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-hdfs-client 
---
[INFO] No tests to run.
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs-client ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ 
hadoop-hdfs-client ---
[WARNING] JAR will be empty - no content was marked

Hadoop-Hdfs-trunk-Java8 - Build # 171 - Still Failing

2015-05-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/171/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5185 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.12.1:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . FAILURE [ 35.866 s]
[INFO] Apache Hadoop HDFS  SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.115 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 37.051 s
[INFO] Finished at: 2015-05-01T11:40:48+00:00
[INFO] Final Memory: 50M/156M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.12.1:checkstyle 
(default-cli) on project hadoop-hdfs-client: An error has occurred in 
Checkstyle report generation. Failed during checkstyle execution: Unable to 
find configuration file at location: 
file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs-client/dev-support/checkstyle.xml:
 Could not find resource 
'file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs-client/dev-support/checkstyle.xml'.
 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #146
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 669798 bytes
Compression is 0.0%
Took 25 sec
Recording test results
Updating YARN-2619
Updating HDFS-8300
Updating HDFS-8292
Updating HADOOP-11891
Updating YARN-3544
Updating HADOOP-11866
Updating HDFS-8283
Updating YARN-3564
Updating HDFS-8200
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Hadoop-Hdfs-trunk - Build # 2112 - Still Failing

2015-05-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2112/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5185 lines...]
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.12.1:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . FAILURE [ 26.065 s]
[INFO] Apache Hadoop HDFS  SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.122 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 27.614 s
[INFO] Finished at: 2015-05-01T11:34:58+00:00
[INFO] Final Memory: 55M/723M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.12.1:checkstyle 
(default-cli) on project hadoop-hdfs-client: An error has occurred in 
Checkstyle report generation. Failed during checkstyle execution: Unable to 
find configuration file at location: 
file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs-client/dev-support/checkstyle.xml:
 Could not find resource 
'file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs-client/dev-support/checkstyle.xml'.
 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2088
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 315089 bytes
Compression is 0.0%
Took 13 sec
Recording test results
Updating YARN-2619
Updating HDFS-8300
Updating HDFS-8292
Updating HADOOP-11891
Updating YARN-3544
Updating HADOOP-11866
Updating HDFS-8283
Updating YARN-3564
Updating HDFS-8200
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk #2112

2015-05-01 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2112/
Changes:

[wangda] YARN-3564. Fix 
TestContainerAllocation.testAMContainerAllocationWhenDNSUnavailable fails 
randomly. (Jian He via wangda)

[zjshen] YARN-3544. Got back AM logs link on the RM web UI for a completed app. 
Contributed by Xuan Gong.

[wheat9] HDFS-8200. Refactor FSDirStatAndListingOp. Contributed by Haohui Mai.

[Arun Suresh] HADOOP-11891. OsSecureRandom should lazily fill its reservoir 
(asuresh)

[aw] HADOOP-11866. increase readability and reliability of checkstyle, 
shellcheck, and whitespace reports (aw)

[wang] HDFS-8292. Move conditional in fmt_time from dfs-dust.js to status.html. 
Contributed by Charles Lamb.

[jing9] HDFS-8300. Fix unit test failures and findbugs warning caused by 
HDFS-8283. Contributed by Jing Zhao.

[vinodkv] YARN-2619. Added NodeManager support for disk io isolation through 
cgroups. Contributed by Varun Vasudev and Wei Yan.

--
[...truncated 4992 lines...]
[INFO] 
[INFO] Total time: 04:12 min
[INFO] Finished at: 2015-05-01T11:34:28+00:00
[INFO] Final Memory: 227M/1546M
[INFO] 
+ cd hadoop-hdfs-project
+ /home/jenkins/tools/maven/latest/bin/mvn clean verify checkstyle:checkstyle 
findbugs:findbugs -Drequire.test.libhadoop -Pdist -Pnative -Dtar -Pdocs -fae 
-Dmaven.javadoc.skip=true
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Apache Hadoop HDFS Client
[INFO] Apache Hadoop HDFS
[INFO] Apache Hadoop HttpFS
[INFO] Apache Hadoop HDFS BookKeeper Journal
[INFO] Apache Hadoop HDFS-NFS
[INFO] Apache Hadoop HDFS Project
[INFO] 
[INFO] Using the builder 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder
 with a thread count of 1
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS Client 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-client ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-client 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-hdfs-client ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
hadoop-hdfs-client ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 86 source files to 

[WARNING] : uses or overrides a deprecated API.
[WARNING] : Recompile with -Xlint:deprecation for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-hdfs-client ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-hdfs-client ---
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-hdfs-client 
---
[INFO] No tests to run.
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs-client ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ 
hadoop-hdfs-client ---
[WARNING] JAR will be empty - no content was marked