Re: Two AMs in one YARN container?

2017-03-17 Thread Sergiy Matusevych
On Fri, Mar 17, 2017 at 4:15 PM, Subru Krishnan  wrote:


> Thanks Arun for the heads-up.
>
> Hi Sergiy,
>
> We do run a UAM pool under one process (AMRMProxyService in NM) as that's
> the mechanism we use to span a single job across multiple clusters that are
> under federation. This is achieved by using the doAs method in
> UserGroupInformation, exactly as Jason pointed out.
>
> The e2e *prototype* code (and docs/slides) is available in the Federation
> umbrella jira:
> https://issues.apache.org/jira/browse/YARN-2915
>
> I have created a utility class that's used throughout YARN Federation to
> create RMProxies per UGI - FederationProxyProviderUtil
> (hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/
> src/main/java/org/apache/hadoop/yarn/server/federation/failover/
> FederationProxyProviderUtil.java, as part of YARN-3673),
> which should provide a good starting point for you.
>
> You should also keep an eye on UAM pool JIRA which Botong is working on
> right now:
> https://issues.apache.org/jira/browse/YARN-5531



Hi YARN devs,

*Huge* thanks for your help! If I understand you correctly, this means I do
not need any changes to the YARN client API to run multiple AMs in one
process. Excellent news!

I will study the federation code and try that technique in REEF. I'll let
you know how it goes.

Again, thanks a lot Subru, Arun, and Jason -- you guys are awesome :)

Cheers,
Sergiy.



> On Thu, Mar 16, 2017 at 2:49 PM, Arun Suresh 
> wrote:
>
> > Hey Sergiy,
> >
> > I think they use a similar approach, IIUC, where an AM for an app running
> > on one cluster acts as an unmanaged AM on another cluster. I believe they
> > use a separate UGI for each sub-cluster and wrap it in a doAs before the
> > actual allocate call.
> >
> > Subru might be able to give more details.
> >
> > Cheers
> > -Arun
> >
> > On Thu, Mar 16, 2017 at 2:34 PM, Jason Lowe  >
> > wrote:
> >
> >> The doAs method in UserGroupInformation is what you want when dealing
> >> with multiple UGIs.  It determines what UGI instance the code within the
> >> doAs scope gets when that code tries to look up the current user.
> >> Each AM is designed to run in a separate JVM, so each has some
> >> main()-like entry point that does everything to set up the AM.
> >> Theoretically all you need to do is create two, separate UGIs then use
> each
> >> instance to perform a doAs wrapping the invocation of the corresponding
> >> AM's entry point.  After that, everything that AM does will get the UGI
> of
> >> the doAs invocation as the current user.  Since the AMs are running in
> >> separate doAs instances they will get separate UGIs for the current user
> >> and thus separate credentials.
> >> Jason
> >>
> >>
> >> On Thursday, March 16, 2017 4:03 PM, Sergiy Matusevych <
> >> sergiy.matusev...@gmail.com> wrote:
> >>
> >>
> >>  Hi Jason,
> >>
> >> Thanks a lot for your help again! Having two separate
> >> UserGroupInformation instances is exactly what I had in mind. What I do
> not
> >> understand, though, is how to make sure that our second call to
> >> registerApplicationMaster() will pick the right UserGroupInformation
> >> object. I would love to find a way that does not involve any changes to
> the
> >> YARN client, but if we have to patch it, of course, I agree that we
> need to
> >> have a generic yet minimally invasive solution.
> >> Thank you!
> >> Sergiy.
> >>
> >>
> >> On Thu, Mar 16, 2017 at 8:03 AM, Jason Lowe 
> wrote:
> >> >
> >> > I believe a cleaner way to solve this problem is to create two,
> >> _separate_ UserGroupInformation objects and wrap each AM instance in a
> UGI
> >> doAs so they aren't trying to share the same credentials.  This is one
> >> example of a token bleeding over and causing problems. I suspect trying
> to
> >> fix these one-by-one as they pop up is going to be frustrating compared
> to
> >> just ensuring the credentials remain separate as if they really were
> >> running in separate JVMs.
> >> >
> >> > Adding Daryn who knows a lot more about the UGI stuff so he can
> correct
> >> any misunderstandings on my part.
> >> >
> >> > Jason
> >> >
> >> >
> >> > On Wednesday, March 15, 2017 1:11 AM, Sergiy Matusevych <
> >> sergiy.matusev...@gmail.com> wrote:
> >> >
> >> >
> >> > Hi YARN developers,
> >> >
> >> > I have an interesting problem that I think is related to YARN Java
> >> client.
> >> > I am trying to launch *two* application masters in one container. To
> be
> >> > more specific, I am starting a Spark job on YARN, and launch an Apache
> >> REEF
> >> > Unmanaged AM from the Spark Driver.
> >> >
> >> > Technically, YARN Resource Manager should not care which process each
> AM
> >> > runs in. However, there is a problem with the YARN Java client
> >> > implementation: there is a global UserGroupInformation object that
> holds
> >> > the user credentials of the current RM session. This data structure i

Re: Two AMs in one YARN container?

2017-03-17 Thread Subru Krishnan
Thanks Arun for the heads-up.

Hi Sergiy,

We do run a UAM pool under one process (AMRMProxyService in NM) as that's
the mechanism we use to span a single job across multiple clusters that are
under federation. This is achieved by using the doAs method in
UserGroupInformation, exactly as Jason pointed out.

The e2e *prototype* code (and docs/slides) is available in the Federation
umbrella jira:
https://issues.apache.org/jira/browse/YARN-2915

I have created a utility class that's used throughout YARN Federation to
create RMProxies per UGI - FederationProxyProviderUtil (as part of
YARN-3673), which should provide a good starting point for you.
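Creating an RM proxy inside the scope of a particular UGI, as the utility
class above does, can be sketched roughly like this. This is a minimal
illustration, not the actual FederationProxyProviderUtil code; it assumes
hadoop-common and hadoop-yarn-client (with their dependencies) are on the
classpath, and the class name is a placeholder:

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.ApplicationClientProtocol;
import org.apache.hadoop.yarn.client.ClientRMProxy;

public final class PerUgiRmProxySketch {

  // Build the RM proxy inside the doAs scope of the given UGI, so the proxy
  // is bound to that UGI's credentials rather than the process-wide login
  // user. Each sub-cluster connection gets its own UGI and thus its own
  // tokens.
  public static ApplicationClientProtocol createProxyAs(
      UserGroupInformation ugi, Configuration conf) throws Exception {
    return ugi.doAs(
        (PrivilegedExceptionAction<ApplicationClientProtocol>) () ->
            ClientRMProxy.createRMProxy(conf, ApplicationClientProtocol.class));
  }
}
```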

You should also keep an eye on UAM pool JIRA which Botong is working on
right now:
https://issues.apache.org/jira/browse/YARN-5531

-Subru


On Thu, Mar 16, 2017 at 2:49 PM, Arun Suresh  wrote:

> Hey Sergiy,
>
> I think they use a similar approach, IIUC, where an AM for an app running on
> one cluster acts as an unmanaged AM on another cluster. I believe they use a
> separate UGI for each sub-cluster and wrap it in a doAs before the actual
> allocate call.
>
> Subru might be able to give more details.
>
> Cheers
> -Arun
>
> On Thu, Mar 16, 2017 at 2:34 PM, Jason Lowe 
> wrote:
>
>> The doAs method in UserGroupInformation is what you want when dealing
>> with multiple UGIs.  It determines what UGI instance the code within the
>> doAs scope gets when that code tries to look up the current user.
>> Each AM is designed to run in a separate JVM, so each has some
>> main()-like entry point that does everything to set up the AM.
>> Theoretically all you need to do is create two, separate UGIs then use each
>> instance to perform a doAs wrapping the invocation of the corresponding
>> AM's entry point.  After that, everything that AM does will get the UGI of
>> the doAs invocation as the current user.  Since the AMs are running in
>> separate doAs instances they will get separate UGIs for the current user
>> and thus separate credentials.
>> Jason
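Jason's recipe can be sketched as follows. This is an untested, minimal
illustration (not code from either project): the user names and the AM
entry-point calls are hypothetical placeholders, and it assumes
hadoop-common is on the classpath:

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;

public final class TwoAmsOneJvmSketch {

  public static void main(String[] args) throws Exception {
    // One UGI per AM, so each AM gets its own credentials.
    UserGroupInformation sparkUgi =
        UserGroupInformation.createRemoteUser("spark-am");  // illustrative name
    UserGroupInformation reefUgi =
        UserGroupInformation.createRemoteUser("reef-am");   // illustrative name

    // Inside each doAs scope, UserGroupInformation.getCurrentUser() returns
    // the wrapping UGI, so tokens acquired by one AM are not presented when
    // the other AM registers with the RM.
    sparkUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      // SparkApplicationMaster.main(args);  // hypothetical entry point
      return null;
    });
    reefUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
      // ReefUnmanagedAm.main(args);         // hypothetical entry point
      return null;
    });
  }
}
```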
>>
>>
>> On Thursday, March 16, 2017 4:03 PM, Sergiy Matusevych <
>> sergiy.matusev...@gmail.com> wrote:
>>
>>
>>  Hi Jason,
>>
>> Thanks a lot for your help again! Having two separate
>> UserGroupInformation instances is exactly what I had in mind. What I do not
>> understand, though, is how to make sure that our second call to
>> registerApplicationMaster() will pick the right UserGroupInformation
>> object. I would love to find a way that does not involve any changes to the
>> YARN client, but if we have to patch it, of course, I agree that we need to
>> have a generic yet minimally invasive solution.
>> Thank you!
>> Sergiy.
>>
>>
>> On Thu, Mar 16, 2017 at 8:03 AM, Jason Lowe  wrote:
>> >
>> > I believe a cleaner way to solve this problem is to create two,
>> _separate_ UserGroupInformation objects and wrap each AM instance in a UGI
>> doAs so they aren't trying to share the same credentials.  This is one
>> example of a token bleeding over and causing problems. I suspect trying to
>> fix these one-by-one as they pop up is going to be frustrating compared to
>> just ensuring the credentials remain separate as if they really were
>> running in separate JVMs.
>> >
>> > Adding Daryn who knows a lot more about the UGI stuff so he can correct
>> any misunderstandings on my part.
>> >
>> > Jason
>> >
>> >
>> > On Wednesday, March 15, 2017 1:11 AM, Sergiy Matusevych <
>> sergiy.matusev...@gmail.com> wrote:
>> >
>> >
>> > Hi YARN developers,
>> >
>> > I have an interesting problem that I think is related to YARN Java
>> client.
>> > I am trying to launch *two* application masters in one container. To be
>> > more specific, I am starting a Spark job on YARN, and launch an Apache
>> REEF
>> > Unmanaged AM from the Spark Driver.
>> >
>> > Technically, YARN Resource Manager should not care which process each AM
>> > runs in. However, there is a problem with the YARN Java client
>> > implementation: there is a global UserGroupInformation object that holds
>> > the user credentials of the current RM session. This data structure is
>> > shared by all AMs, and when the REEF application tries to register the
>> second
>> > (unmanaged) AM, the client library presents to YARN RM all credentials,
>> > including the security token of the first (managed) AM. YARN rejects
>> such
>> > registration request, throwing InvalidApplicationMasterRequestException
>> > "Application Master is already registered".
>> >
>> > I feel like this issue can be resolved by a relatively small update to
>> the
>> > YARN Java client - e.g. by introducing a new variant of the
>> > AMRMClientAsync.registerApplicationMaster() that would take the
>> required
>> > security token (instead of getting it implicitly from t

[jira] [Created] (YARN-6363) Extending SLS: Synthetic Load Generator

2017-03-17 Thread Carlo Curino (JIRA)
Carlo Curino created YARN-6363:
--

 Summary: Extending SLS: Synthetic Load Generator
 Key: YARN-6363
 URL: https://issues.apache.org/jira/browse/YARN-6363
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Carlo Curino
Assignee: Carlo Curino


This JIRA tracks the introduction of a synthetic load generator in the SLS. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-17 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Performed a native build from the release tag
- Deployed to a single-node cluster
- Ran some sample jobs
Jason
 

On Friday, March 17, 2017 4:18 AM, Junping Du  wrote:
 

 Hi all,
    With the fix for HDFS-11431 in, I've created a new release candidate (RC3) 
for Apache Hadoop 2.8.0.

    This is the next minor release after 2.7.0, which was released more than a 
year ago. It comprises 2,900+ fixes, improvements, and new features. Most of 
these commits are released for the first time in branch-2.

      More information about the 2.8.0 release plan can be found here: 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release

      New RC is available at: 
http://home.apache.org/~junping_du/hadoop-2.8.0-RC3

      The RC tag in git is: release-2.8.0-RC3, and the latest commit id is: 
91f2b7a13d1e97be65db92ddabc627cc29ac0009

      The maven artifacts are available via repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1057

      Please try the release and vote; the vote will run for the usual 5 days, 
ending on 03/22/2017 PDT time.

Thanks,

Junping

   

Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-17 Thread Mingliang Liu
Thanks Junping for doing this.

+1 (non-binding)

0. Download the src tar.gz file; checked the MD5 checksum
1. Build Hadoop from source successfully
2. Deploy a single node cluster and start the cluster successfully
3. Operate the HDFS from command line: ls, put, distcp, dfsadmin etc
4. Run hadoop mapreduce examples: grep
5. Operate AWS S3 using S3A schema from commandline: ls, cat, distcp
6. Check the HDFS service logs

L

> On Mar 17, 2017, at 2:18 AM, Junping Du  wrote:
> 
> Hi all,
> With the fix for HDFS-11431 in, I've created a new release candidate (RC3) 
> for Apache Hadoop 2.8.0.
> 
> This is the next minor release after 2.7.0, which was released more than a 
> year ago. It comprises 2,900+ fixes, improvements, and new features. Most of 
> these commits are released for the first time in branch-2.
> 
>  More information about the 2.8.0 release plan can be found here: 
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
> 
>  New RC is available at: 
> http://home.apache.org/~junping_du/hadoop-2.8.0-RC3
> 
>  The RC tag in git is: release-2.8.0-RC3, and the latest commit id is: 
> 91f2b7a13d1e97be65db92ddabc627cc29ac0009
> 
>  The maven artifacts are available via repository.apache.org at: 
> https://repository.apache.org/content/repositories/orgapachehadoop-1057
> 
>  Please try the release and vote; the vote will run for the usual 5 days, 
> ending on 03/22/2017 PDT time.
> 
> Thanks,
> 
> Junping



Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-17 Thread Daniel Templeton
Thanks for the new RC, Junping.  I built from source and tried it out on 
a 2-node cluster with HA enabled.  I ran a pi job and some streaming 
jobs.  I tested that localization and failover work correctly, and I 
played a little with the YARN and HDFS web UIs.


I did encounter an old friend of mine: if you submit a streaming job whose 
input is only 1 block, you will nonetheless get 2 mappers that both process 
the same split. What's new this time is that the second mapper was 
consistently failing on certain input sizes.  I (re)verified that the issue 
also exists in 2.7.3, so it's not a regression.  I'm pretty sure it's been 
there since at least 2.6.0.  I filed MAPREDUCE-6864 for it.


Given that my issue was not a regression, I'm +1 on the RC.

Daniel

On 3/17/17 2:18 AM, Junping Du wrote:

Hi all,
  With the fix for HDFS-11431 in, I've created a new release candidate (RC3) 
for Apache Hadoop 2.8.0.

  This is the next minor release after 2.7.0, which was released more than a 
year ago. It comprises 2,900+ fixes, improvements, and new features. Most of 
these commits are released for the first time in branch-2.

   More information about the 2.8.0 release plan can be found here: 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release

   New RC is available at: 
http://home.apache.org/~junping_du/hadoop-2.8.0-RC3

   The RC tag in git is: release-2.8.0-RC3, and the latest commit id is: 
91f2b7a13d1e97be65db92ddabc627cc29ac0009

   The maven artifacts are available via repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1057

   Please try the release and vote; the vote will run for the usual 5 days, 
ending on 03/22/2017 PDT time.

Thanks,

Junping




-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-17 Thread Miklos Szegedi
Hi Junping,

Thank you for working on this.

+1 (Non-Binding)

I verified the following:
1. Deployed on a 3 node cluster with 2 node managers.
2. Configured linux container executor
3. Configured fair scheduler
4. Ran Pi job and verified the results
5. Ran multiple YARN applications and verified the results

Thank you,
Miklos Szegedi

On Fri, Mar 17, 2017 at 2:18 AM, Junping Du  wrote:

> Hi all,
>  With the fix for HDFS-11431 in, I've created a new release candidate
> (RC3) for Apache Hadoop 2.8.0.
>
>  This is the next minor release after 2.7.0, which was released more than a
> year ago. It comprises 2,900+ fixes, improvements, and new features. Most
> of these commits are released for the first time in branch-2.
>
>   More information about the 2.8.0 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
>
>   New RC is available at: http://home.apache.org/~
> junping_du/hadoop-2.8.0-RC3
>
>   The RC tag in git is: release-2.8.0-RC3, and the latest commit id
> is: 91f2b7a13d1e97be65db92ddabc627cc29ac0009
>
>   The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1057
>
>   Please try the release and vote; the vote will run for the usual 5
> days, ending on 03/22/2017 PDT time.
>
> Thanks,
>
> Junping
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-03-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/

[Mar 16, 2017 2:08:30 PM] (stevel) HDFS-11431. hadoop-hdfs-client JAR does not 
include
[Mar 16, 2017 2:30:10 PM] (jlowe) YARN-4051. ContainerKillEvent lost when 
container is still recovering
[Mar 16, 2017 3:54:59 PM] (kihwal) HDFS-10601. Improve log message to include 
hostname when the NameNode is
[Mar 16, 2017 7:06:51 PM] (stevel) Revert "HDFS-11431. hadoop-hdfs-client JAR 
does not include
[Mar 16, 2017 7:20:46 PM] (jitendra) HDFS-11533. reuseAddress option should be 
used for child channels in
[Mar 16, 2017 10:07:38 PM] (wang) HDFS-10530. BlockManager reconstruction work 
scheduling should correctly
[Mar 16, 2017 11:08:32 PM] (liuml07) HADOOP-14191. Duplicate hadoop-minikdc 
dependency in hadoop-common
[Mar 17, 2017 1:13:43 AM] (arp) HDFS-10394. move declaration of okhttp version 
from hdfs-client to




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapreduce.TestMRJobClient 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   
org.apache.hadoop.yarn.client.api.impl.TestOpportunisticContainerAllocation 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-compile-root.txt
  [132K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-compile-root.txt
  [132K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-compile-root.txt
  [132K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [136K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [236K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/260/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-03-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/

[Mar 16, 2017 2:08:30 PM] (stevel) HDFS-11431. hadoop-hdfs-client JAR does not 
include
[Mar 16, 2017 2:30:10 PM] (jlowe) YARN-4051. ContainerKillEvent lost when 
container is still recovering
[Mar 16, 2017 3:54:59 PM] (kihwal) HDFS-10601. Improve log message to include 
hostname when the NameNode is
[Mar 16, 2017 7:06:51 PM] (stevel) Revert "HDFS-11431. hadoop-hdfs-client JAR 
does not include
[Mar 16, 2017 7:20:46 PM] (jitendra) HDFS-11533. reuseAddress option should be 
used for child channels in
[Mar 16, 2017 10:07:38 PM] (wang) HDFS-10530. BlockManager reconstruction work 
scheduling should correctly
[Mar 16, 2017 11:08:32 PM] (liuml07) HADOOP-14191. Duplicate hadoop-minikdc 
dependency in hadoop-common
[Mar 17, 2017 1:13:43 AM] (arp) HDFS-10394. move declaration of okhttp version 
from hdfs-client to




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.namenode.ha.TestHAAppend 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-compile-javac-root.txt
  [180K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [296K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [60K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/348/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org

[jira] [Created] (YARN-6362) Build failure of yarn-ui profile

2017-03-17 Thread Kai Sasaki (JIRA)
Kai Sasaki created YARN-6362:


 Summary: Build failure of yarn-ui profile
 Key: YARN-6362
 URL: https://issues.apache.org/jira/browse/YARN-6362
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Kai Sasaki
Assignee: Kai Sasaki


Building yarn-ui module fails due to invalid npm-cli.js path.

{code}
[INFO] --- exec-maven-plugin:1.3.1:exec (ember build) @ hadoop-yarn-ui ---
module.js:327
throw err;
^

Error: Cannot find module 
'/Users/sasakikai/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/src/main/webapp/node/npm/bin/npm-cli'
at Function.Module._resolveFilename (module.js:325:15)
at Function.Module._load (module.js:276:25)
at Function.Module.runMain (module.js:441:10)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-17 Thread Junping Du
Hi all,
 With the fix for HDFS-11431 in, I've created a new release candidate (RC3) 
for Apache Hadoop 2.8.0.

 This is the next minor release after 2.7.0, which was released more than a 
year ago. It comprises 2,900+ fixes, improvements, and new features. Most of 
these commits are released for the first time in branch-2.

  More information about the 2.8.0 release plan can be found here: 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release

  New RC is available at: 
http://home.apache.org/~junping_du/hadoop-2.8.0-RC3

  The RC tag in git is: release-2.8.0-RC3, and the latest commit id is: 
91f2b7a13d1e97be65db92ddabc627cc29ac0009

  The maven artifacts are available via repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1057

  Please try the release and vote; the vote will run for the usual 5 days, 
ending on 03/22/2017 PDT time.

Thanks,

Junping