[jira] [Created] (YARN-6348) Add placement constraints to the resource requests

2017-03-15 Thread Konstantinos Karanasos (JIRA)
Konstantinos Karanasos created YARN-6348:


 Summary: Add placement constraints to the resource requests
 Key: YARN-6348
 URL: https://issues.apache.org/jira/browse/YARN-6348
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Konstantinos Karanasos


This JIRA will allow specifying placement constraints (e.g., affinity and 
anti-affinity) within the resource request.
This constraint expression should have the same form as the one specified in 
the ApplicationSubmissionContext (YARN-6346).
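As a purely hypothetical illustration of the idea (the actual API is still to be
defined in this JIRA and YARN-6346, and the constraint-related names below are
made up), a request-level anti-affinity constraint could look roughly like:

{code}
// Existing YARN API: a plain resource request for one 2GB / 2-vcore container.
ResourceRequest req = ResourceRequest.newInstance(
    Priority.newInstance(1), ResourceRequest.ANY,
    Resource.newInstance(2048, 2), 1);

// Hypothetical additions (names made up for illustration): anti-affinity to
// containers tagged "hb-m" on the same node; tags are introduced in YARN-6345.
PlacementConstraint c = PlacementConstraint.antiAffinity("node", "hb-m");
req.setPlacementConstraint(c);
{code}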






[jira] [Created] (YARN-6347) Store container tags in ResourceManager

2017-03-15 Thread Konstantinos Karanasos (JIRA)
Konstantinos Karanasos created YARN-6347:


 Summary: Store container tags in ResourceManager
 Key: YARN-6347
 URL: https://issues.apache.org/jira/browse/YARN-6347
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: resourcemanager
Reporter: Konstantinos Karanasos


In YARN-6345 we introduce the notion of container tags.
In this JIRA, we will create a service in the RM, similar to the Node Labels 
Manager, that will store the tags of each active container.

Note that a node inherits the tags of all containers that are running on it at 
any given moment. Therefore, container tags can be seen as dynamic node labels. 
The proposed service will allow us to efficiently retrieve the container tags 
of each node.
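As a minimal illustrative sketch (names and structure are assumptions, not the
proposed design), such a service could keep a per-node multiset of tags, so that
a tag disappears from a node only when the last container carrying it finishes:

{code}
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: node -> (tag -> number of running containers carrying it).
public class ContainerTagStore {
  private final Map<String, Map<String, Integer>> tagsPerNode =
      new ConcurrentHashMap<>();

  public synchronized void containerStarted(String node, Set<String> tags) {
    Map<String, Integer> nodeTags =
        tagsPerNode.computeIfAbsent(node, n -> new ConcurrentHashMap<>());
    tags.forEach(t -> nodeTags.merge(t, 1, Integer::sum));
  }

  public synchronized void containerFinished(String node, Set<String> tags) {
    Map<String, Integer> nodeTags = tagsPerNode.get(node);
    if (nodeTags != null) {
      // Decrement; drop the tag once no running container carries it anymore.
      tags.forEach(t -> nodeTags.computeIfPresent(t, (k, c) -> c > 1 ? c - 1 : null));
    }
  }

  // The "dynamic node labels" view: tags currently inherited by the node.
  public Set<String> getNodeTags(String node) {
    return tagsPerNode.getOrDefault(node, Collections.emptyMap()).keySet();
  }
}
{code}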






Re: [VOTE] Release Apache Hadoop 2.8.0 (RC2)

2017-03-15 Thread Junping Du
bq. From my read of the poms, hadoop-client depends on hadoop-hdfs-client to 
pull in HDFS-related code. It doesn't have its own dependency on hadoop-hdfs. 
So I think this affects users of the hadoop-client artifact, which has existed 
for a long time.

I could have missed that. Thanks for the reminder! From my quick check of 
https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client/2.7.3, it 
looks like 669 artifacts from other projects depend on it.


I think we should withdraw the current RC bits. Please stop the verification & 
vote.

I will kick off another RC immediately once HDFS-11431 gets fixed.


Thanks,


Junping



From: Andrew Wang 
Sent: Wednesday, March 15, 2017 2:04 PM
To: Junping Du
Cc: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC2)

Hi Junping, inline,


> From my understanding, this issue is related to our previous improvements
> with separating client and server jars in HDFS-6200. If we use the new
> "client" jar in NN HA deployment, then we will hit the issue reported.

From my read of the poms, hadoop-client depends on hadoop-hdfs-client to pull 
in HDFS-related code. It doesn't have its own dependency on hadoop-hdfs. So I 
think this affects users of the hadoop-client artifact, which has existed for 
a long time.

Essentially all of our customer deployments run with NN HA, so this would 
affect a lot of users.

> I can see two options here:
>
> - Without any change in 2.8.0: if users hit the issue when they deploy an HA
> cluster using the new client jar, they can add back the hdfs jar, just like
> how things worked previously.
>
> - Make the change now in 2.8.0: either move ConfiguredFailoverProxyProvider
> to the client jar or add a dependency between the client jar and the server
> jar. There will be some argument about which fix is better, especially since
> ConfiguredFailoverProxyProvider still has some server-side dependencies.


> I would prefer the first option, given:
>
> - The time to fix the issue is unpredictable, as there is still discussion on
> how to fix it. Our 2.8.0 release shouldn't be an endless journey that keeps
> getting deferred for more serious issues.

Looks like we have a patch being actively revved and reviewed to fix this by 
making hadoop-hdfs-client depend on hadoop-hdfs. Thanks to Steven and Steve for 
working on this.

Steve proposed doing a proper split in a later JIRA.

> - We have a workaround for this improvement, and no regression happens due to
> this issue. People can still use the hdfs jar in the old way. The worst case
> is that the improvement for HDFS doesn't work in some cases - that shouldn't
> block the whole release.

Based on the above, I think there is a regression for users of the 
hadoop-client artifact.

If it actually only affects users of hadoop-hdfs-client, then I agree we can 
document it as a Known Issue and fix it later.

Best,
Andrew


[jira] [Created] (YARN-6346) Expose API in ApplicationSubmissionContext to specify placement constraints

2017-03-15 Thread Konstantinos Karanasos (JIRA)
Konstantinos Karanasos created YARN-6346:


 Summary: Expose API in ApplicationSubmissionContext to specify 
placement constraints
 Key: YARN-6346
 URL: https://issues.apache.org/jira/browse/YARN-6346
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, yarn
Reporter: Konstantinos Karanasos


We propose to extend the API of the {{ApplicationSubmissionContext}} so that 
placement constraints (e.g., affinity and anti-affinity) can be expressed when 
an application is submitted.






[jira] [Created] (YARN-6345) Add container tags to resource requests

2017-03-15 Thread Konstantinos Karanasos (JIRA)
Konstantinos Karanasos created YARN-6345:


 Summary: Add container tags to resource requests
 Key: YARN-6345
 URL: https://issues.apache.org/jira/browse/YARN-6345
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Konstantinos Karanasos


This JIRA introduces the notion of container tags.
When an application submits container requests, it can attach a set of string 
tags to them. The corresponding resource requests will also carry these tags.
For example, a container that will be used for running an HBase Master can be 
marked with the tag "hb-m", while one belonging to a ZooKeeper application can 
be marked as "zk".
Through container tags, we will be able to express constraints that refer to 
containers with the given tags.
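For illustration only, an AM could attach tags along these lines; the
tag-related setter is hypothetical, while the rest is the existing AMRMClient
API:

{code}
AMRMClient<AMRMClient.ContainerRequest> amClient = AMRMClient.createAMRMClient();

// Existing API: ask for one 4GB / 2-vcore container anywhere in the cluster.
AMRMClient.ContainerRequest hbaseMaster = new AMRMClient.ContainerRequest(
    Resource.newInstance(4096, 2), null, null, Priority.newInstance(1));

// Hypothetical addition proposed in this JIRA: tag the request.
hbaseMaster.setTags(Collections.singleton("hb-m"));  // made-up method

amClient.addContainerRequest(hbaseMaster);
{code}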






Re: [VOTE] Release Apache Hadoop 2.8.0 (RC2)

2017-03-15 Thread Andrew Wang
Hi Junping, inline,

> From my understanding, this issue is related to our previous
> improvements with separating client and server jars in HDFS-6200. If we use
> the new "client" jar in NN HA deployment, then we will hit the issue
> reported.

From my read of the poms, hadoop-client depends on hadoop-hdfs-client to
pull in HDFS-related code. It doesn't have its own dependency on
hadoop-hdfs. So I think this affects users of the hadoop-client artifact,
which has existed for a long time.

Essentially all of our customer deployments run with NN HA, so this would
affect a lot of users.
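(For context, a minimal sketch of the client-side HA configuration that needs
ConfiguredFailoverProxyProvider on the classpath; the nameservice and NameNode
names below are made up:)

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Standard HDFS NN HA client settings (illustrative values).
Configuration conf = new Configuration();
conf.set("dfs.nameservices", "mycluster");
conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn1.example.com:8020");
conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn2.example.com:8020");
conf.set("dfs.client.failover.proxy.provider.mycluster",
    "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
// With only the client-side jars on the classpath, loading this provider
// class fails, since it is not shipped in hadoop-hdfs-client (HDFS-11431).
FileSystem fs = FileSystem.get(URI.create("hdfs://mycluster"), conf);
{code}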

> I can see two options here:
>
> - Without any change in 2.8.0: if users hit the issue when they deploy an HA
> cluster using the new client jar, they can add back the hdfs jar, just like
> how things worked previously.
>
> - Make the change now in 2.8.0: either move ConfiguredFailoverProxyProvider
> to the client jar or add a dependency between the client jar and the server
> jar. There will be some argument about which fix is better, especially since
> ConfiguredFailoverProxyProvider still has some server-side dependencies.
>
>
> I would prefer the first option, given:
>
> - The time to fix the issue is unpredictable, as there is still discussion on
> how to fix it. Our 2.8.0 release shouldn't be an endless journey that keeps
> getting deferred for more serious issues.
>
Looks like we have a patch being actively revved and reviewed to fix this
by making hadoop-hdfs-client depend on hadoop-hdfs. Thanks to Steven and
Steve for working on this.

Steve proposed doing a proper split in a later JIRA.

> - We have a workaround for this improvement, and no regression happens due to
> this issue. People can still use the hdfs jar in the old way. The worst case
> is that the improvement for HDFS doesn't work in some cases - that shouldn't
> block the whole release.
>
Based on the above, I think there is a regression for users of the
hadoop-client artifact.

If it actually only affects users of hadoop-hdfs-client, then I agree we
can document it as a Known Issue and fix it later.

Best,
Andrew


[jira] [Created] (YARN-6344) Rethinking OFF_SWITCH locality in CapacityScheduler

2017-03-15 Thread Konstantinos Karanasos (JIRA)
Konstantinos Karanasos created YARN-6344:


 Summary: Rethinking OFF_SWITCH locality in CapacityScheduler
 Key: YARN-6344
 URL: https://issues.apache.org/jira/browse/YARN-6344
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacityscheduler
Reporter: Konstantinos Karanasos


When relaxing locality from node to rack, the {{node-locality-parameter}} is 
used: once the missed scheduling opportunities for a scheduler key exceed the 
value of this parameter, we relax locality and try to assign the container to 
a node in the corresponding rack.

On the other hand, when relaxing locality to off-switch (i.e., assigning the 
container anywhere in the cluster), we use a {{localityWaitFactor}}, which is 
computed by dividing the number of outstanding requests for a specific 
scheduler key by the size of the cluster.
For applications that request containers in big batches (e.g., traditional MR 
jobs), and for relatively small clusters, the localityWaitFactor does not 
affect relaxing locality much.
However, for applications that request containers in small batches, this factor 
takes a very small value, which leads to assigning off-switch containers too 
soon. The situation is even more pronounced in big clusters.
For example, if an application requests only one container per request, 
locality will be relaxed after a single missed scheduling opportunity.

The purpose of this JIRA is to rethink the way we are relaxing locality for 
off-switch assignments.
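A simplified sketch of the off-switch check described above (an approximation 
for illustration, not the actual CapacityScheduler code):

{code}
// Approximate sketch of the current off-switch relaxation decision.
static boolean canAssignOffSwitch(int outstandingRequests, int clusterNodes,
    long missedOpportunities) {
  // localityWaitFactor: outstanding requests for the scheduler key, divided
  // by the cluster size (capped at 1).
  float localityWaitFactor =
      Math.min((float) outstandingRequests / clusterNodes, 1.0f);
  // Relax to off-switch once missed opportunities exceed the scaled wait.
  return missedOpportunities > outstandingRequests * localityWaitFactor;
}
// Example: 1 outstanding request on a 1000-node cluster gives a wait factor
// of about 0.001 and a threshold of about 0.001, so a single missed
// scheduling opportunity is enough to go off-switch.
{code}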






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-03-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/258/

[Mar 14, 2017 6:47:25 PM] (wang) HDFS-11505. Do not enable any erasure coding 
policies by default.
[Mar 14, 2017 7:52:25 PM] (naganarasimha_gr) YARN-6327. Removing queues from 
CapacitySchedulerQueueManager and
[Mar 14, 2017 7:58:12 PM] (junping_du) YARN-6313. YARN logs cli should provide 
logs for a completed container
[Mar 14, 2017 8:03:42 PM] (liuml07) Revert "HADOOP-14170. 
FileSystemContractBaseTest is not cleaning up test
[Mar 14, 2017 9:38:21 PM] (liuml07) HADOOP-14170. FileSystemContractBaseTest is 
not cleaning up test
[Mar 14, 2017 10:09:47 PM] (rchiang) YARN-6331. Fix flakiness in 
TestFairScheduler#testDumpState. (Yufei Gu
[Mar 14, 2017 11:41:10 PM] (wang) HDFS-9705. Refine the behaviour of 
getFileChecksum when length = 0.
[Mar 15, 2017 9:18:05 AM] (sunilg) YARN-6328. Fix a spelling mistake in 
CapacityScheduler. Contributed by
[Mar 15, 2017 10:05:03 AM] (yqlin) HDFS-11420. Edit file should not be 
processed by the same type processor
[Mar 15, 2017 10:24:09 AM] (rohithsharmaks) YARN-6336. Jenkins report YARN new 
UI build failure. Contributed by




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/258/artifact/out/patch-compile-root.txt
  [136K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/258/artifact/out/patch-compile-root.txt
  [136K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/258/artifact/out/patch-compile-root.txt
  [136K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/258/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [276K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/258/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/258/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/258/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/258/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/258/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/258/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [12K]
   

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-03-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/

[Mar 14, 2017 9:49:48 AM] (yqlin) HDFS-11526. Fix confusing block recovery 
message. Contributed by Yiqun
[Mar 14, 2017 9:58:07 AM] (junping_du) YARN-6314. Potential infinite 
redirection on YARN log redirection web
[Mar 14, 2017 6:47:25 PM] (wang) HDFS-11505. Do not enable any erasure coding 
policies by default.
[Mar 14, 2017 7:52:25 PM] (naganarasimha_gr) YARN-6327. Removing queues from 
CapacitySchedulerQueueManager and
[Mar 14, 2017 7:58:12 PM] (junping_du) YARN-6313. YARN logs cli should provide 
logs for a completed container
[Mar 14, 2017 8:03:42 PM] (liuml07) Revert "HADOOP-14170. 
FileSystemContractBaseTest is not cleaning up test
[Mar 14, 2017 9:38:21 PM] (liuml07) HADOOP-14170. FileSystemContractBaseTest is 
not cleaning up test
[Mar 14, 2017 10:09:47 PM] (rchiang) YARN-6331. Fix flakiness in 
TestFairScheduler#testDumpState. (Yufei Gu
[Mar 14, 2017 11:41:10 PM] (wang) HDFS-9705. Refine the behaviour of 
getFileChecksum when length = 0.




-1 overall


The following subsystems voted -1:
asflicense compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDataNodeLifeline 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.mapred.TestMRTimelineEventHandling 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/patch-compile-root.txt
  [208K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/patch-compile-root.txt
  [208K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/patch-compile-root.txt
  [208K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [264K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt
  [32K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [88K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/346/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




[jira] [Created] (YARN-6343) Docker docs MR example is broken

2017-03-15 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6343:

 Summary: Docker docs MR example is broken
 Key: YARN-6343
 URL: https://issues.apache.org/jira/browse/YARN-6343
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.9.0, 3.0.0-alpha3
Reporter: Daniel Templeton
Assignee: Daniel Templeton


In the example, the -D args come before pi, but they should come after it.






Re: [VOTE] Release Apache Hadoop 2.8.0 (RC2)

2017-03-15 Thread Steve Loughran

> On 15 Mar 2017, at 00:36, Junping Du  wrote:
> 
> Thanks Andrew for reporting the issue. This JIRA was off my radar, as it 
> didn't specify any target version before.
> 
> 
> From my understanding, this issue is related to our previous improvements 
> with separating client and server jars in HDFS-6200. If we use the new 
> "client" jar in NN HA deployment, then we will hit the issue reported.
> 
> 
> I can see two options here:
> 
> - Without any change in 2.8.0: if users hit the issue when they deploy an HA 
> cluster using the new client jar, they can add back the hdfs jar, just like 
> how things worked previously.
> 
> - Make the change now in 2.8.0: either move ConfiguredFailoverProxyProvider 
> to the client jar or add a dependency between the client jar and the server 
> jar. There will be some argument about which fix is better, especially since 
> ConfiguredFailoverProxyProvider still has some server-side dependencies.
> 
> 
> I would prefer the first option, given:
> 
> - The time to fix the issue is unpredictable, as there is still discussion on 
> how to fix it. Our 2.8.0 release shouldn't be an endless journey that keeps 
> getting deferred for more serious issues.
> 
> - We have a workaround for this improvement, and no regression happens due to 
> this issue. People can still use the hdfs jar in the old way. The worst case 
> is that the improvement for HDFS doesn't work in some cases - that shouldn't 
> block the whole release.
> 
> 
> I think we should let the vote keep going unless someone has more concerns 
> that I may have missed.

Getting it out the door with this in the release notes, and a plan for 2.8.1, 
would be ideal.

> 
> 
> 
> Thanks,
> 
> 
> Junping
> 
> 
> 
> From: Andrew Wang 
> Sent: Tuesday, March 14, 2017 2:50 PM
> To: Junping Du
> Cc: common-...@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
> yarn-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC2)
> 
> Hi Junping,
> 
> Noticed this possible blocker float by my inbox today. It had an affects but 
> no target version set:
> 
> https://issues.apache.org/jira/browse/HDFS-11431
> 
> Thoughts? Seems like the hadoop-hdfs-client artifact doesn't work right now.
> 
> Best,
> Andrew
> 
> 
> On Tue, Mar 14, 2017 at 1:41 AM, Junping Du wrote:
> Hi all,
> With several important fixes merged last week, I've created a new 
> release candidate (RC2) for Apache Hadoop 2.8.0.
> 
> This is the next minor release after 2.7.0, which was released more than a 
> year ago. It comprises 2,919 fixes, improvements, and new features. Most of 
> these commits are being released for the first time in branch-2.
> 
>  More information about the 2.8.0 release plan can be found here: 
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
> 
> Please note that RC0 and RC1 were not put to a public vote because 
> significant issues were found just after the RC tags were published.
> 
>  The RC is available at: 
> http://home.apache.org/~junping_du/hadoop-2.8.0-RC2
> 
>  The RC tag in git is: release-2.8.0-RC2
> 
>  The maven artifacts are available via 
> repository.apache.org at: 
> https://repository.apache.org/content/repositories/orgapachehadoop-1056
> 
>  Please try the release and vote; the vote will run for the usual 5 days, 
> ending on 03/20/2017 PDT time.
> 
> Thanks,
> 
> Junping
> 





[jira] [Created] (YARN-6342) Issues in TimelineClientImpl#TimelineClientImpl

2017-03-15 Thread Jian He (JIRA)
Jian He created YARN-6342:

 Summary: Issues in TimelineClientImpl#TimelineClientImpl 
 Key: YARN-6342
 URL: https://issues.apache.org/jira/browse/YARN-6342
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He


Found these with [~rohithsharma] while browsing the code:
- In stop: it calls shutdownNow, which doesn't wait for pending tasks; should 
it use shutdown instead? (A sketch of the graceful alternative is shown after 
this list.)
{code}
public void stop() {
  LOG.info("Stopping TimelineClient.");
  executor.shutdownNow();
  try {
executor.awaitTermination(DRAIN_TIME_PERIOD, TimeUnit.MILLISECONDS);
  } catch (InterruptedException e) {
{code}
- In createRunnable: if any exception happens while publishing one entity 
(ServiceTimelineEvent), the thread exits. I think it should make a best effort 
to continue publishing the timeline events; one failure should not prevent all 
follow-up events from being published.
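For the first point, a minimal sketch of the graceful alternative (plain 
java.util.concurrent usage, not the actual TimelineClientImpl code):

{code}
// Let already-submitted publish tasks drain before stopping, and only fall
// back to shutdownNow if the drain period expires.
executor.shutdown();  // stop accepting new tasks, keep running pending ones
try {
  if (!executor.awaitTermination(DRAIN_TIME_PERIOD, TimeUnit.MILLISECONDS)) {
    executor.shutdownNow();  // give up on whatever is still queued
  }
} catch (InterruptedException e) {
  executor.shutdownNow();
  Thread.currentThread().interrupt();
}
{code}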






[jira] [Created] (YARN-6341) Redirected tracking UI of application is not correct if web policy is transformed from HTTP_ONLY to HTTPS_ONLY

2017-03-15 Thread Yuanbo Liu (JIRA)
Yuanbo Liu created YARN-6341:


 Summary: Redirected tracking UI of application is not correct if 
web policy is transformed from HTTP_ONLY to HTTPS_ONLY
 Key: YARN-6341
 URL: https://issues.apache.org/jira/browse/YARN-6341
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Yuanbo Liu


Users submit an MR job before Hadoop HTTPS is enabled. After the job finishes 
and the web policy is changed to HTTPS_ONLY, users navigate as follows:
Resource Manager UI -> Applications -> Tracking UI
The address is then redirected to the HTTP address of the Job History Server 
instead of the HTTPS address. I think this behavior is related to 
{{WebAppProxyServlet#getTrackingUri}}.






[jira] [Created] (YARN-6340) TestEventFlow not work as expected

2017-03-15 Thread sandflee (JIRA)
sandflee created YARN-6340:

 Summary: TestEventFlow not work as expected
 Key: YARN-6340
 URL: https://issues.apache.org/jira/browse/YARN-6340
 Project: Hadoop YARN
  Issue Type: Test
Reporter: sandflee


There are many exceptions in the test logs and the app/container never reaches 
the running state; surprisingly, the test still passes.






Two AMs in one YARN container?

2017-03-15 Thread Sergiy Matusevych
Hi YARN developers,

I have an interesting problem that I think is related to the YARN Java client.
I am trying to launch *two* application masters in one container. To be
more specific, I am starting a Spark job on YARN and launching an Apache REEF
Unmanaged AM from the Spark Driver.

Technically, the YARN Resource Manager should not care which process each AM
runs in. However, there is a problem with the YARN Java client
implementation: there is a global UserGroupInformation object that holds
the user credentials of the current RM session. This data structure is
shared by all AMs, and when the REEF application tries to register the second
(unmanaged) AM, the client library presents all credentials to the YARN RM,
including the security token of the first (managed) AM. YARN rejects such a
registration request, throwing InvalidApplicationMasterRequestException
"Application Master is already registered".

I feel like this issue can be resolved by a relatively small update to the
YARN Java client - e.g., by introducing a new variant of
AMRMClientAsync.registerApplicationMaster() that would take the required
security token (instead of getting it implicitly from
UserGroupInformation.getCurrentUser().getCredentials(), etc.), or by having
some sort of RM session class that would wrap all the data that is currently
global. I need to think about an elegant API for it.
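To make the idea concrete, here is a rough sketch of what such a variant might
look like (a purely hypothetical signature for discussion; the existing method
is registerApplicationMaster(String appHostName, int appHostPort, String
appTrackingUrl)):

{code}
// Hypothetical overload of AMRMClientAsync#registerApplicationMaster that
// takes explicit credentials instead of reading them from the process-global
// UserGroupInformation. For discussion only, not an existing API.
public abstract RegisterApplicationMasterResponse registerApplicationMaster(
    String appHostName, int appHostPort, String appTrackingUrl,
    Credentials credentials)  // e.g. carrying only this AM's AMRM token
    throws YarnException, IOException;
{code}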

What do you guys think? I would love to work on this problem and send you a
pull request for the upcoming 2.9 release.

Cheers,
Sergiy.