[jira] [Created] (HDFS-12898) Ozone: TestSCMCli#testHelp and TestSCMCli#testListContainerCommand fail consistently

2017-12-06 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDFS-12898:
--

 Summary: Ozone: TestSCMCli#testHelp and 
TestSCMCli#testListContainerCommand fail consistently
 Key: HDFS-12898
 URL: https://issues.apache.org/jira/browse/HDFS-12898
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: HDFS-7240
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


The help message for SCMCLI commands was modified in HDFS-12588. The SCMCLI tests 
need to be updated accordingly.






[jira] [Created] (HDFS-12899) Ozone: SCM: BlockManagerImpl close is called twice during StorageContainerManager#stop

2017-12-06 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-12899:
--

 Summary: Ozone: SCM: BlockManagerImpl close is called twice during 
StorageContainerManager#stop
 Key: HDFS-12899
 URL: https://issues.apache.org/jira/browse/HDFS-12899
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nanda kumar
Assignee: Nanda kumar


As part of {{StorageContainerManager#stop}}, we call {{scmBlockManager#stop}}, 
which internally calls {{BlockManagerImpl#close}}, and then we explicitly call 
{{scmBlockManager#close}} again via {{IOUtils.cleanupWithLogger(LOG, 
scmBlockManager)}}. This causes {{RocksDBStore#close}} to be called twice, which 
in turn calls {{MBeans#unregister}} twice, resulting in the following exception 
trace (WARN) during the second call:
{noformat}
2017-12-06 22:30:06,316 [main] WARN  util.MBeans (MBeans.java:unregister(137)) 
- Error unregistering Hadoop:service=Ozone,name=RocksDbStore,dbName=block.db
javax.management.InstanceNotFoundException: 
Hadoop:service=Ozone,name=RocksDbStore,dbName=block.db
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:135)
at org.apache.hadoop.utils.RocksDBStore.close(RocksDBStore.java:368)
at 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.close(BlockManagerImpl.java:506)
at org.apache.hadoop.io.IOUtils.cleanupWithLogger(IOUtils.java:278)
at 
org.apache.hadoop.ozone.scm.StorageContainerManager.stop(StorageContainerManager.java:900)
stacktrace truncated--
2017-12-06 22:30:06,317 [main] WARN  util.MBeans (MBeans.java:unregister(137)) 
- Error unregistering 
Hadoop:service=Ozone,name=RocksDbStore,dbName=deletedBlock.db
javax.management.InstanceNotFoundException: 
Hadoop:service=Ozone,name=RocksDbStore,dbName=deletedBlock.db
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:135)
at org.apache.hadoop.utils.RocksDBStore.close(RocksDBStore.java:368)
at 
org.apache.hadoop.ozone.scm.block.DeletedBlockLogImpl.close(DeletedBlockLogImpl.java:326)
at 
org.apache.hadoop.ozone.scm.block.BlockManagerImpl.close(BlockManagerImpl.java:509)
at org.apache.hadoop.io.IOUtils.cleanupWithLogger(IOUtils.java:278)
at 
org.apache.hadoop.ozone.scm.StorageContainerManager.stop(StorageContainerManager.java:900)
stacktrace truncated--
{noformat}
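
One simple fix would be to drop the explicit {{IOUtils.cleanupWithLogger(LOG, scmBlockManager)}} call, since {{scmBlockManager#stop}} already closes it. Another option is to make the close path idempotent so a second invocation becomes a no-op. Below is a minimal sketch of the latter, using illustrative names rather than the actual {{BlockManagerImpl}} code:

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

/** Illustrative only: an idempotent close() so a second invocation is harmless. */
public class IdempotentCloseSketch implements Closeable {
  private final AtomicBoolean closed = new AtomicBoolean(false);
  private final Closeable store;  // e.g. the RocksDB-backed block store

  public IdempotentCloseSketch(Closeable store) {
    this.store = store;
  }

  @Override
  public void close() throws IOException {
    // Only the first caller performs the cleanup; later calls return immediately,
    // so a duplicate close() can no longer unregister the MBean twice.
    if (closed.compareAndSet(false, true)) {
      store.close();
    }
  }
}
{code}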






[jira] [Created] (HDFS-12900) Ozone: Add client Block cache for SCM

2017-12-06 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-12900:


 Summary: Ozone: Add client Block cache for SCM
 Key: HDFS-12900
 URL: https://issues.apache.org/jira/browse/HDFS-12900
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


SCM blocks are currently allocated via the SCM block client RPC, so each block 
allocation requires a separate RPC request. This can be optimized by adding a 
block cache on the client.

This cache can be used to pre-allocate multiple blocks per RPC. The same layer can 
also handle block frees by keeping freed blocks in the cache and using them for 
further allocations, as sketched below.
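
A rough sketch of what such a cache could look like on the client side, assuming a hypothetical bulk-allocation call that returns several blocks in a single RPC (all names below are illustrative, not from the actual patch):

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.IntFunction;

/** Illustrative only: pre-allocates blocks in batches and reuses freed ones. */
public class ScmBlockCacheSketch<B> {
  private final Deque<B> cache = new ArrayDeque<>();
  private final IntFunction<List<B>> bulkAllocator;  // hypothetical: one RPC returning N blocks
  private final int batchSize;

  public ScmBlockCacheSketch(IntFunction<List<B>> bulkAllocator, int batchSize) {
    this.bulkAllocator = bulkAllocator;
    this.batchSize = batchSize;
  }

  /** Serves from the cache; refills with a single bulk RPC only when empty. */
  public synchronized B allocate() {
    if (cache.isEmpty()) {
      cache.addAll(bulkAllocator.apply(batchSize));
    }
    return cache.pop();
  }

  /** Freed blocks go back into the cache and are handed out on later allocations. */
  public synchronized void free(B block) {
    cache.push(block);
  }
}
{code}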






[jira] [Created] (HDFS-12901) Ozone: SCM: Expose StorageContainerManager#getScmId through container protocol

2017-12-06 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-12901:
--

 Summary: Ozone: SCM: Expose StorageContainerManager#getScmId 
through container protocol
 Key: HDFS-12901
 URL: https://issues.apache.org/jira/browse/HDFS-12901
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nanda kumar
Assignee: Nanda kumar


This jira is to expose {{StorageContainerManager#getScmId}} through the container 
protocol; currently it is available only through SCM's block location protocol.
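
A minimal sketch of the kind of addition this implies; the interface and method shown here are hypothetical, and the real Ozone container protocol may expose this differently:

{code:java}
import java.io.IOException;

/** Illustrative only: exposing the SCM id on the container-side protocol. */
public interface ContainerLocationProtocolSketch {

  /**
   * Returns the id of this SCM, mirroring what
   * StorageContainerManager#getScmId already provides via the block location protocol.
   */
  String getScmId() throws IOException;
}
{code}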






[jira] [Created] (HDFS-12902) IOException when running MiniDfsCluster on Windows: Failed to save in any storage directories while saving namespace

2017-12-06 Thread Jeff Saremi (JIRA)
Jeff Saremi created HDFS-12902:
--

 Summary: IOException when running MiniDfsCluster on Windows: 
Failed to save in any storage directories while saving namespace
 Key: HDFS-12902
 URL: https://issues.apache.org/jira/browse/HDFS-12902
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.4
 Environment: Windows 10
Reporter: Jeff Saremi


Error when running MiniDFSCluster on Windows. I tried this first on 2.7.1, then 
noticed this ticket:
https://issues.apache.org/jira/browse/HDFS-11732
so I upgraded to 2.7.4, but the same issue still persists.

ClassPath/Environment:

{{java.exe -classpath 
"share\hadoop\hdfs\hadoop-hdfs-2.7.4-tests.jar;share\hadoop\hdfs\hadoop-hdfs-2.7.4.jar;share\hadoop\common\hadoop-common-2.7.4.jar;share\hadoop\common\hadoop-common-2.7.4-tests.jar;share\hadoop\common\lib\commons-logging-1.1.3.jar;share\hadoop\common\lib\guava-11.0.2.jar;share\hadoop\common\lib\commons-collections-3.2.2.jar;share\hadoop\common\lib\commons-configuration-1.6.jar;share\hadoop\common\lib\commons-cli-1.2.jar;share\hadoop\common\lib\log4j-1.2.17.jar;share\hadoop\common\lib\slf4j-log4j12-1.7.10.jar;share\hadoop\common\lib\slf4j-api-1.7.10.jar;share\hadoop\common\lib\commons-lang-2.6.jar;share\hadoop\common\lib\hadoop-auth-2.7.4.jar;share\hadoop\common\lib\servlet-api-2.5.jar;share\hadoop\common\lib\jettison-1.1.jar;share\hadoop\common\lib\protobuf-java-2.5.0.jar;d:\;"
 MiniDfsRunner "d:/temp/hdfs2"}}

Code for MiniDfsRunner.java:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// baseDir points at the local directory passed on the command line, e.g. "d:/temp/hdfs2"
Configuration conf = new Configuration();
conf.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, baseDir.getAbsolutePath());
MiniDFSCluster.Builder builder = new MiniDFSCluster.Builder(conf);
MiniDFSCluster hdfsCluster = builder.build();
{code}







[jira] [Created] (HDFS-12903) [READ] Fix closing streams in ImageWriter

2017-12-06 Thread Íñigo Goiri (JIRA)
Íñigo Goiri created HDFS-12903:
--

 Summary: [READ] Fix closing streams in ImageWriter
 Key: HDFS-12903
 URL: https://issues.apache.org/jira/browse/HDFS-12903
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri









[jira] [Created] (HDFS-12904) Add DataTransferThrottler to the Datanode transfers

2017-12-06 Thread Íñigo Goiri (JIRA)
Íñigo Goiri created HDFS-12904:
--

 Summary: Add DataTransferThrottler to the Datanode transfers
 Key: HDFS-12904
 URL: https://issues.apache.org/jira/browse/HDFS-12904
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode
Reporter: Íñigo Goiri
Priority: Minor


The {{DataXceiverServer}} already uses throttling for balancing transfers. The 
Datanode should also allow throttling the regular data transfers.
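
For illustration, {{DataTransferThrottler}} can cap a copy loop roughly as shown below. The loop itself is a sketch rather than actual {{DataXceiver}} code, and how the bandwidth limit for regular transfers would be configured is left open here:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.hdfs.util.DataTransferThrottler;

/** Illustrative only: limiting a data transfer with DataTransferThrottler. */
public class ThrottledTransferSketch {
  public static void copy(InputStream in, OutputStream out, long bytesPerSec)
      throws IOException {
    DataTransferThrottler throttler = new DataTransferThrottler(bytesPerSec);
    byte[] buf = new byte[64 * 1024];
    int n;
    while ((n = in.read(buf)) > 0) {
      out.write(buf, 0, n);
      // Blocks as needed so the transfer stays under bytesPerSec on average.
      throttler.throttle(n);
    }
  }
}
{code}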






[jira] [Created] (HDFS-12905) [READ] Handle decommissioning and under-maintenance Datanodes with Provided storage.

2017-12-06 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12905:
-

 Summary: [READ] Handle decommissioning and under-maintenance 
Datanodes with Provided storage.
 Key: HDFS-12905
 URL: https://issues.apache.org/jira/browse/HDFS-12905
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti









[jira] [Created] (HDFS-12906) hedged point read in DFSInputStream sends only 1 hedge read request

2017-12-06 Thread Tao Zhang (JIRA)
Tao Zhang created HDFS-12906:


 Summary: hedged point read in DFSInputStream sends only 1 hedge 
read request
 Key: HDFS-12906
 URL: https://issues.apache.org/jira/browse/HDFS-12906
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Reporter: Tao Zhang
Assignee: Tao Zhang


Hedged point reads are handled in DFSInputStream.hedgedFetchBlockByteRange(), which 
calls "getFirstToComplete()" to get the first returned result after sending out 
hedge read requests. But "getFirstToComplete()" uses "CompletionService.take()", 
which blocks indefinitely, so it ends up waiting for a result after sending only 
one hedge read request.

It could instead wait with a specific timeout (rather than blocking indefinitely) 
and start another hedge read request whenever the timeout expires, as sketched below.
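
A minimal sketch of that approach, using "CompletionService.poll()" with a timeout instead of the blocking "take()"; the helper names below are placeholders, not the actual DFSInputStream methods:

{code:java}
import java.nio.ByteBuffer;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

/** Illustrative only: wait with a timeout and issue another hedge request on each timeout. */
class HedgedReadSketch {
  static ByteBuffer firstToComplete(CompletionService<ByteBuffer> reads,
      int outstanding, int maxRequests, long hedgeTimeoutMs,
      Runnable startAnotherHedgeRequest)
      throws InterruptedException, ExecutionException {
    while (true) {
      // Unlike take(), poll() returns null after the timeout instead of blocking forever.
      Future<ByteBuffer> done = reads.poll(hedgeTimeoutMs, TimeUnit.MILLISECONDS);
      if (done != null) {
        return done.get();
      }
      if (outstanding < maxRequests) {
        startAnotherHedgeRequest.run();  // placeholder for sending one more hedge read
        outstanding++;
      }
    }
  }
}
{code}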






Re: [VOTE] Release Apache Hadoop 2.7.5 (RC0)

2017-12-06 Thread Erik Krogen
+1 (non-binding)

- Verified signatures, MD5, RMD160, SHA* for bin and src tarballs
- Built from source on macOS 10.12.6 and RHEL 6.6
- Ran local HDFS cluster, ran basic commands, verified read and write 
capability.
- Ran a 3000-node cluster via Dynamometer and did not see significant performance 
variation from 2.7.4 expectations 

@Brahma, I was able to find HDFS-12831, HADOOP-14881, and HADOOP-14827 in 
CHANGES.txt, but agree with you on the others listed. I was, however, able to 
find all of them in the linked releasenotes.html.

Thanks Konstantin!

Erik

On 12/4/17, 10:50 PM, "Brahma Reddy Battula"  
wrote:

+1  (non-binding), thanks Konstantin for driving this.


--Built from the source
--Installed 3 Node HA Cluster
--Ran basic shell commands
--Verified append/snapshot/truncate
--Ran sample jobs like pi,wordcount


Looks follow commits are missed in changes.txt.

MAPREDUCE-6975
HADOOP-14919
HDFS-12596
YARN-7084
HADOOP-14881
HADOOP-14827
HDFS-12832


--Brahma Reddy Battula

-Original Message-
From: Konstantin Shvachko [mailto:shv.had...@gmail.com] 
Sent: 02 December 2017 10:13
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: VOTE] Release Apache Hadoop 2.7.5 (RC0)

Hi everybody,

This is the next dot release of Apache Hadoop 2.7 line. The previous one
2.7.4 was release August 4, 2017.
Release 2.7.5 includes critical bug fixes and optimizations. See more 
details in Release Note:
http://home.apache.org/~shv/hadoop-2.7.5-RC0/releasenotes.html

The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC0/

Please give it a try and vote on this thread. The vote will run for 5 days 
ending 12/08/2017.

My up to date public key is available from:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

Thanks,
--Konstantin




[jira] [Created] (HDFS-12907) Allow read-only access to reserved raw for non-superusers

2017-12-06 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-12907:
--

 Summary: Allow read-only access to reserved raw for non-superusers
 Key: HDFS-12907
 URL: https://issues.apache.org/jira/browse/HDFS-12907
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Daryn Sharp


HDFS-6509 added a special /.reserved/raw path prefix to access the raw file 
contents of EZ files.  In the simplest sense, it doesn't return the FE info in 
the {{LocatedBlocks}}, so the DFS client doesn't try to decrypt the data.  This 
allows tools like distcp to copy the raw bytes.

Access to the raw hierarchy is restricted to superusers.  This seems like an 
overly broad restriction designed to prevent non-admins from munging the 
EZ-related xattrs.  I believe we should relax the restriction to allow non-admins 
to perform read-only operations.  Allowing non-superusers to easily read the 
raw bytes would be extremely useful for regular users, especially for enabling 
webhdfs client-side encryption.
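
For concreteness, the read-only case this would enable looks roughly like the sketch below, where a non-superuser streams the stored (still-encrypted) bytes through the raw prefix; the path is illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative only: a read-only pass over the raw bytes of an EZ file. */
public class RawReadSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Opening through /.reserved/raw returns the stored bytes without FE info,
    // so the client does not attempt decryption.
    try (FSDataInputStream in = fs.open(new Path("/.reserved/raw/ez/file"))) {
      byte[] buf = new byte[8192];
      while (in.read(buf) > 0) {
        // consume raw bytes, e.g. stream them to a backup or distcp-like target
      }
    }
  }
}
{code}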






Re: [VOTE] Release Apache Hadoop 2.7.5 (RC0)

2017-12-06 Thread Naganarasimha Garla
Thanks for the release Konstantin.

Verified the following:
- Downloaded the tar on Ubuntu and verified the signatures
- Deployed pseudo cluster
- Sanity checks
- Basic hdfs operations
- Spark PyWordcount & few MR jobs
- Accessed most of the web UI's

When accessing the docs (from the tar), I noticed:
- Release Notes, Common, HDFS, and MapReduce Changes show "file not found"
- The changes for all components were not available for 2.7.4 as well (
http://hadoop.apache.org/docs/r2.7.4/hadoop-project-dist/hadoop-common/CHANGES.txt
)

So I am not sure whether this was missed or is not required; everything else is
fine.

Regards,
+ Naga


On Tue, Dec 5, 2017 at 2:50 PM, Brahma Reddy Battula <
brahmareddy.batt...@huawei.com> wrote:

> +1  (non-binding), thanks Konstantin for driving this.
>
>
> --Built from the source
> --Installed 3 Node HA Cluster
> --Ran basic shell commands
> --Verified append/snapshot/truncate
> --Ran sample jobs like pi,wordcount
>
>
> Looks follow commits are missed in changes.txt.
>
> MAPREDUCE-6975
> HADOOP-14919
> HDFS-12596
> YARN-7084
> HADOOP-14881
> HADOOP-14827
> HDFS-12832
>
>
> --Brahma Reddy Battula
>
> -Original Message-
> From: Konstantin Shvachko [mailto:shv.had...@gmail.com]
> Sent: 02 December 2017 10:13
> To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
> mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> Subject: VOTE] Release Apache Hadoop 2.7.5 (RC0)
>
> Hi everybody,
>
> This is the next dot release of Apache Hadoop 2.7 line. The previous one
> 2.7.4 was release August 4, 2017.
> Release 2.7.5 includes critical bug fixes and optimizations. See more
> details in Release Note:
> http://home.apache.org/~shv/hadoop-2.7.5-RC0/releasenotes.html
>
> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC0/
>
> Please give it a try and vote on this thread. The vote will run for 5 days
> ending 12/08/2017.
>
> My up to date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Thanks,
> --Konstantin
>


Re: [VOTE] Merge Absolute resource configuration support in Capacity Scheduler (YARN-5881) to trunk

2017-12-06 Thread Subramaniam V K
+1.

Skimmed through the design doc and the uber patch; it seems reasonable.

This is a welcome addition, especially w.r.t. cloud deployments, so thanks to
everyone who worked on this.

On Mon, Dec 4, 2017 at 8:18 PM, Rohith Sharma K S  wrote:

> +1
>
> On Nov 30, 2017 7:26 AM, "Sunil G"  wrote:
>
> > Hi All,
> >
> >
> > Based on the discussion at [1], I'd like to start a vote to merge feature
> > branch
> >
> > YARN-5881 to trunk. Vote will run for 7 days, ending Wednesday Dec 6 at
> > 6:00PM PDT.
> >
> >
> > This branch adds support to configure queue capacity as absolute resource
> > in
> >
> > capacity scheduler. This will help admins who want fine control of
> > resources of queues.
> >
> >
> > Feature development is done at YARN-5881 [2], jenkins build is here
> > (YARN-7510 [3]).
> >
> > All required tasks for this feature are committed. This feature changes
> > RM’s Capacity Scheduler only,
> >
> > and we did extensive tests for the feature in the last couple of months
> > including performance tests.
> >
> >
> > Key points:
> >
> > - The feature is turned off by default, and have to configure absolute
> > resource to enable same.
> >
> > - Detailed documentation about how to use this feature is done as part of
> > [4].
> >
> > - No major performance degradation is observed with this branch work. SLS
> > and UT performance
> >
> > tests are done.
> >
> >
> > There were 11 subtasks completed for this feature.
> >
> >
> > Huge thanks to everyone who helped with reviews, commits, guidance, and
> >
> > technical discussion/design, including Wangda Tan, Vinod Vavilapalli,
> > Rohith Sharma K S, Eric Payne .
> >
> >
> > [1] :
> > http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201711.mbox/%
> > 3CCACYiTuhKhF1JCtR7ZFuZSEKQ4sBvN_n_tV5GHsbJ3YeyJP%2BP4Q%
> > 40mail.gmail.com%3E
> >
> > [2] : https://issues.apache.org/jira/browse/YARN-5881
> >
> > [3] : https://issues.apache.org/jira/browse/YARN-7510
> >
> > [4] : https://issues.apache.org/jira/browse/YARN-7533
> >
> >
> > Regards
> >
> > Sunil and Wangda
> >
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-12-06 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/614/

[Dec 4, 2017 6:40:11 PM] (xiao) HDFS-12396. Webhdfs file system should get 
delegation token from kms
[Dec 4, 2017 8:11:00 PM] (eyang) YARN-6669.  Implemented Kerberos security for 
YARN service framework. 
[Dec 4, 2017 9:14:55 PM] (rkanter) YARN-5594. Handle old RMDelegationToken 
format when recovering RM
[Dec 4, 2017 10:39:43 PM] (mackrorysd) HADOOP-15058. create-release site build 
outputs dummy shaded jars due to
[Dec 5, 2017 5:02:04 AM] (arp) HADOOP-14976. Set HADOOP_SHELL_EXECNAME 
explicitly in scripts.
[Dec 5, 2017 5:30:46 AM] (aajisaka) HADOOP-14985. Remove subversion related 
code from VersionInfoMojo.java.
[Dec 5, 2017 12:58:31 PM] (sunilg) YARN-7586. Application Placement should be 
done before ACL checks in
[Dec 5, 2017 2:11:07 PM] (sunilg) YARN-7092. Render application specific log 
under application tab in new
[Dec 5, 2017 2:23:46 PM] (brahma) HDFS-11751. DFSZKFailoverController daemon 
exits with wrong status code.
[Dec 5, 2017 3:05:41 PM] (stevel) HADOOP-15071 S3a troubleshooting docs to add 
a couple more failure
[Dec 5, 2017 5:20:07 PM] (sunilg) YARN-7438. Additional changes to make 
SchedulingPlacementSet agnostic to
[Dec 5, 2017 7:06:32 PM] (fabbri) HADOOP-14475 Metrics of S3A don't print out 
when enabled. Contributed by
[Dec 5, 2017 9:09:49 PM] (wangda) YARN-7381. Enable the configuration:
[Dec 6, 2017 2:40:33 AM] (aajisaka) HDFS-12889. Router UI is missing robots.txt 
file. Contributed by Bharat
[Dec 6, 2017 4:01:36 AM] (zhengkai.zk) HADOOP-15039. Move 
SemaphoredDelegatingExecutor to hadoop-common.
[Dec 6, 2017 4:21:52 AM] (wwei) YARN-7611. Node manager web UI should display 
container type in
[Dec 6, 2017 4:48:16 AM] (xiao) HDFS-12872. EC Checksum broken when 
BlockAccessToken is enabled.
[Dec 6, 2017 9:52:41 AM] (wwei) YARN-7610. Extend Distributed Shell to support 
launching job with




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose internal representation by returning Resource.resources At Resource.java:by returning Resource.resources At Resource.java:[line 234]

Failed junit tests :

   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
   hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer
   hadoop.hdfs.TestFileChecksum
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
   hadoop.hdfs.web.TestWebHdfsTimeouts
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190
   hadoop.fs.TestUnbuffer
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay
   hadoop.hdfs.TestErasureCodingPolicies
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
   hadoop.hdfs.server.namenode.TestDecommissioningStatus
   hadoop.hdfs.TestReconstructStripedFile
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140
   hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
   hadoop.yarn.client.api.impl.TestAMRMClientOnRMRestart
   hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator
   hadoop.mapreduce.v2.TestUberAM

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/614/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/614/artifact/out/diff-compile-javac-root.txt  [280K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/614/artifact/out/diff-checkstyle-root.txt  [17M]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/614/artifact/out/diff-patch-pylint.txt  [20K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/614/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/614/artifact/out/diff-patch-shelldocs.txt  [12K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/614/artifact/out/whitespace-eol.txt  [8.8M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/614/artifact/out/whitespace-tabs.txt  [288K]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/614/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-