[jira] [Commented] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-08 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321955#comment-15321955
 ] 

Joep Rottinghuis commented on YARN-5210:


Yeah, we need to also discuss whether YARN-5170 should go in. I think the patch 
is very close, but it still needs to be reviewed.
I've been reviewing YARN-5070. I think that we're getting close...

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5210-YARN-2928.01.patch
>
>
> Found a couple of issues while testing ATSv2.
> * There is an NPE while publishing DS_CONTAINER_START_EVENT, which in turn 
> means that this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> * Created time is not reported from distributed shell for both DS_CONTAINER 
> and DS_APP_ATTEMPT entities. 
> As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
> createdtime in response.
> {code}
>   [
> {
>   "metrics": [ ],
>   "events": [ ],
>   "type": "DS_APP_ATTEMPT",
>   "id": "appattempt_1465246237936_0003_01",
>   "isrelatedto": { },
>   "relatesto": { },
>   "info": {
> "UID": 
> "yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
>   },
>   "configs": { }
> }
>   ]
> {code}
> As can be seen from the response received upon querying a DS_CONTAINER entity, 
> createdtime is not present and DS_CONTAINER_START is not present either 
> (due to the NPE pointed out above).
> {code}
>   {
> "metrics": [ ],
> "events": [
>   {
> "id": "DS_CONTAINER_END",
> "timestamp": 1465314587480,
> "info": {
>   "Exit Status": 0,
>   "State": "COMPLETE"
> }
>   }
> ],
> "type": "DS_CONTAINER",
> "id": "container_e77_1465311876353_0003_01_02",
> "isrelatedto": { },
> "relatesto": { },
> "info": {
>   "UID": 
> "yarn-cluster!application_1465311876353_0003!DS_CONTAINER!container_e77_1465311876353_0003_01_02"
> },
> "configs": { }
>   }
> {code}
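> A hedged sketch of the created-time half of this (the v2 TimelineEntity and 
> TimelineEvent classes are real, but the helper below is illustrative and not 
> the actual distributed shell code):
> {code}
> import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
> import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEvent;
> 
> final class DsContainerEntitySketch {
>   // Build a DS_CONTAINER entity that carries an explicit created time, so the
>   // reader can return createdtime along with the DS_CONTAINER_START event.
>   static TimelineEntity containerStartEntity(String containerId, long startTime) {
>     TimelineEntity entity = new TimelineEntity();
>     entity.setId(containerId);
>     entity.setType("DS_CONTAINER");
>     entity.setCreatedTime(startTime);
> 
>     TimelineEvent event = new TimelineEvent();
>     event.setId("DS_CONTAINER_START");
>     event.setTimestamp(startTime);
>     entity.addEvent(event);
>     return entity;
>   }
> }
> {code}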



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest

2016-06-08 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5124:
--
Attachment: YARN-5124.013.patch

[~curino], uploading a patch addressing the comments..
Hope the 13th one is a charm..

> Modify AMRMClient to set the ExecutionType in the ResourceRequest
> -
>
> Key: YARN-5124
> URL: https://issues.apache.org/jira/browse/YARN-5124
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5124.001.patch, YARN-5124.002.patch, 
> YARN-5124.003.patch, YARN-5124.004.patch, YARN-5124.005.patch, 
> YARN-5124.006.patch, YARN-5124.008.patch, YARN-5124.009.patch, 
> YARN-5124.010.patch, YARN-5124.011.patch, YARN-5124.012.patch, 
> YARN-5124.013.patch, YARN-5124_YARN-5180_combined.007.patch, 
> YARN-5124_YARN-5180_combined.008.patch
>
>
> Currently the {{ContainerRequest}} allows the AM to set the {{ExecutionType}} 
> in the AMRMClient, but it is not being set in the actual {{ResourceRequest}} 
> that is sent to the RM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5170) Eliminate singleton converters and static method access

2016-06-08 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis reassigned YARN-5170:
--

Assignee: Joep Rottinghuis  (was: Varun Saxena)

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch, 
> YARN-5170-YARN-2928.04.patch, YARN-5170-YARN-2928.05.patch, 
> YARN-5170-YARN-2928.06.patch, YARN-5170-YARN-2928.07.patch, 
> YARN-5170-YARN-2928.08.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and simply make them 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class. As part of this refactor I'll move these to keep the 
> "Utils" class as small as possible and reserve it for truly generally used 
> utils that don't really belong anywhere else.
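> A minimal sketch of the direction described above (class and field names are 
> illustrative, not the actual YARN-5170 classes): the converter becomes a plain 
> instance field and the byte[] row key is computed on demand rather than 
> through static singleton access.
> {code}
> import java.nio.charset.StandardCharsets;
> 
> class StringKeyConverterSketch {
>   byte[] encode(String key) {
>     return key.getBytes(StandardCharsets.UTF_8);
>   }
> }
> 
> class ExampleRowKey {
>   private final String clusterId;
>   private final String userId;
>   // Instance variable rather than a singleton reached via static methods.
>   private final StringKeyConverterSketch converter = new StringKeyConverterSketch();
> 
>   ExampleRowKey(String clusterId, String userId) {
>     this.clusterId = clusterId;
>     this.userId = userId;
>   }
> 
>   // Encoded lazily so the constructor never leaks a half-built object to the
>   // converter.
>   byte[] getRowKey() {
>     return converter.encode(clusterId + "!" + userId);
>   }
> }
> {code}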



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5070) upgrade HBase version for first merge

2016-06-08 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321920#comment-15321920
 ] 

Joep Rottinghuis commented on YARN-5070:


Perhaps instead of

{code}
int limit = getBatchLimit(scannerContext);
{code}
in #nextInternal(List<Cell> cells, ScannerContext scannerContext)

we should use
{code}
int limit = regionScanner.getBatch();
{code}
See: 
https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/RegionScanner.html#getBatch%28%29

then we can probably do away with the reflection.
Btw. should we also honor regionScanner.getMaxResultSize()?
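
A minimal sketch of that idea (the helper class below is made up; 
RegionScanner#getBatch() and RegionScanner#getMaxResultSize() are the public 
HBase 1.1 APIs being referred to):
{code}
import org.apache.hadoop.hbase.regionserver.RegionScanner;

final class ScanLimitsSketch {
  private ScanLimitsSketch() {
  }

  // Batch (cells per next() call) limit requested by the client; -1 if unset.
  static int batchLimit(RegionScanner regionScanner) {
    return regionScanner.getBatch();
  }

  // Maximum result size in bytes requested by the client; -1 if unset.
  static long maxResultSize(RegionScanner regionScanner) {
    return regionScanner.getMaxResultSize();
  }
}
{code}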

> upgrade HBase version for first merge
> -
>
> Key: YARN-5070
> URL: https://issues.apache.org/jira/browse/YARN-5070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5070-YARN-2928.01.patch, 
> YARN-5070-YARN-2928.02.patch
>
>
> Currently we set the HBase version for the timeline service storage to 1.0.1. 
> This is a fairly old version, and there are reasons to upgrade to a newer 
> version. We should upgrade it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5070) upgrade HBase version for first merge

2016-06-08 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321904#comment-15321904
 ] 

Joep Rottinghuis commented on YARN-5070:


It seems [~apurtell] introduced the Region interface instead of HRegion as part 
of HBASE-12972
[~jonathan.lawlor] introduced the ScannerContext in HBASE-13421 to encapsulate
the scanner limits and progress towards those limits. Perhaps he can give us a 
pointer as to how to best deal with limits after HBase 1.1.

Looking at the changes made to adapt the Phoenix coprocessors to the new Region 
interface, it makes me wonder if we should capture Throwables, add some region 
name info and propagate the exception.
We might want to consider catching InterruptedException during some of our 
longer-running operations and yield processing through something like:
{code}
catch (InterruptedException ie) {
  // InterruptedException has no (String, Throwable) constructor, so attach the
  // region name as the message and chain the original exception as the cause.
  InterruptedException wrapped = new InterruptedException(
      context.getEnvironment().getRegion().getRegionNameAsString());
  wrapped.initCause(ie);
  throw wrapped;
}
{code}
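
For the capture-Throwables-with-region-info idea, something along these lines 
(a sketch under assumptions; the helper name and the IOException wrapping are 
illustrative, not an agreed approach):
{code}
import java.io.IOException;

import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

final class CoprocessorErrorsSketch {
  private CoprocessorErrorsSketch() {
  }

  // Wrap any failure with the region name so the propagated exception tells us
  // where the coprocessor was running.
  static IOException withRegionInfo(RegionCoprocessorEnvironment env, Throwable t) {
    String region = env.getRegion().getRegionInfo().getRegionNameAsString();
    return new IOException("Coprocessor failure in region " + region, t);
  }
}
{code}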

> upgrade HBase version for first merge
> -
>
> Key: YARN-5070
> URL: https://issues.apache.org/jira/browse/YARN-5070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5070-YARN-2928.01.patch, 
> YARN-5070-YARN-2928.02.patch
>
>
> Currently we set the HBase version for the timeline service storage to 1.0.1. 
> This is a fairly old version, and there are reasons to upgrade to a newer 
> version. We should upgrade it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-08 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321859#comment-15321859
 ] 

Joep Rottinghuis commented on YARN-5170:


YARN-5170-YARN-2928.08.patch uploaded. Still waiting for 
https://builds.apache.org/job/PreCommit-YARN-Build/ to kick off.

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch, 
> YARN-5170-YARN-2928.04.patch, YARN-5170-YARN-2928.05.patch, 
> YARN-5170-YARN-2928.06.patch, YARN-5170-YARN-2928.07.patch, 
> YARN-5170-YARN-2928.08.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and simply make them 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class. As part of this refactor I'll move these to keep the 
> "Utils" class as small as possible and reserve it for truly generally used 
> utils that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5070) upgrade HBase version for first merge

2016-06-08 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321855#comment-15321855
 ] 

Joep Rottinghuis commented on YARN-5070:


I have to set up my environment with HBase 1.1 to fully see this, but is 
circumventing the package-private visibility of limits the recommended way to 
do this?
{code}
Field limitsField = ScannerContext.class.getDeclaredField("limits");
{code}
It seems that the "documented way"
{code}
Once a limit has been reached, the scan will stop. The invoker of 
InternalScanner.next(java.util.List) or RegionScanner.next(java.util.List) 
can use the appropriate check*Limit methods to see exactly which limits have 
been reached. Alternatively, checkAnyLimitReached(LimitScope) is provided to 
see if ANY limit was reached 
{code}
indicates that #checkBatchLimit(ScannerContext.LimitScope checkerScope) should 
be used. Unfortunately that also appears to be package-private.
Starting to read HBASE-11544 to see if that provides any clues as to the 
recommended way to do this.
Perhaps time to ask one of the HBase gurus for a pointer in the right direction?
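
For reference, the reflection workaround in question expands to roughly this (a 
sketch built from the one-liner above; whether the field keeps the name "limits" 
across HBase versions is exactly the fragility being discussed):
{code}
import java.lang.reflect.Field;

import org.apache.hadoop.hbase.regionserver.ScannerContext;

final class ScannerContextReflectionSketch {
  private ScannerContextReflectionSketch() {
  }

  // Digs the package-private "limits" object out of ScannerContext. It runs,
  // but it bypasses visibility on purpose and can break on any HBase upgrade
  // that renames or restructures the field.
  static Object readLimits(ScannerContext scannerContext)
      throws ReflectiveOperationException {
    Field limitsField = ScannerContext.class.getDeclaredField("limits");
    limitsField.setAccessible(true);
    return limitsField.get(scannerContext);
  }
}
{code}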

> upgrade HBase version for first merge
> -
>
> Key: YARN-5070
> URL: https://issues.apache.org/jira/browse/YARN-5070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5070-YARN-2928.01.patch, 
> YARN-5070-YARN-2928.02.patch
>
>
> Currently we set the HBase version for the timeline service storage to 1.0.1. 
> This is a fairly old version, and there are reasons to upgrade to a newer 
> version. We should upgrade it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5215) Scheduling containers based on external load in the servers

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321847#comment-15321847
 ] 

Hadoop QA commented on YARN-5215:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 36s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 8 
new + 361 unchanged - 2 fixed = 369 total (was 363) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 48s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 989 unchanged - 0 fixed = 990 total (was 989) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 23s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 15s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 28s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 37s {color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 49s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
 |
|  |  Inconsistent synchronization of 
org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.RegisterNodeManagerRequestPBImpl.builder;
 locked 92% of time  Unsynchronized access at 

[jira] [Commented] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321845#comment-15321845
 ] 

Varun Saxena commented on YARN-5210:


We still have some JIRAs to close before we drop onto trunk, right?
There are some documentation JIRAs like YARN-5052 and YARN-5174. And we need 
to bump up the version of HBase too, which would involve changes as well.

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5210-YARN-2928.01.patch
>
>
> Found a couple of issues while testing ATSv2.
> * There is an NPE while publishing DS_CONTAINER_START_EVENT, which in turn 
> means that this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> * Created time is not reported from distributed shell for both DS_CONTAINER 
> and DS_APP_ATTEMPT entities. 
> As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
> createdtime in response.
> {code}
>   [
> {
>   "metrics": [ ],
>   "events": [ ],
>   "type": "DS_APP_ATTEMPT",
>   "id": "appattempt_1465246237936_0003_01",
>   "isrelatedto": { },
>   "relatesto": { },
>   "info": {
> "UID": 
> "yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
>   },
>   "configs": { }
> }
>   ]
> {code}
> As can be seen from the response received upon querying a DS_CONTAINER entity, 
> createdtime is not present and DS_CONTAINER_START is not present either 
> (due to the NPE pointed out above).
> {code}
>   {
> "metrics": [ ],
> "events": [
>   {
> "id": "DS_CONTAINER_END",
> "timestamp": 1465314587480,
> "info": {
>   "Exit Status": 0,
>   "State": "COMPLETE"
> }
>   }
> ],
> "type": "DS_CONTAINER",
> "id": "container_e77_1465311876353_0003_01_02",
> "isrelatedto": { },
> "relatesto": { },
> "info": {
>   "UID": 
> "yarn-cluster!application_1465311876353_0003!DS_CONTAINER!container_e77_1465311876353_0003_01_02"
> },
> "configs": { }
>   }
> {code}



--
This message was sent by Atlassian JIRA

[jira] [Commented] (YARN-5070) upgrade HBase version for first merge

2016-06-08 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321841#comment-15321841
 ] 

Joep Rottinghuis commented on YARN-5070:


We should definitely document the version requirements (esp. when coprocessor 
APIs have changed).
Perhaps this can be done in YARN-5174 and probably after YARN-5052 goes in, so 
that we can avoid merge / rebase nightmares.

> upgrade HBase version for first merge
> -
>
> Key: YARN-5070
> URL: https://issues.apache.org/jira/browse/YARN-5070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5070-YARN-2928.01.patch, 
> YARN-5070-YARN-2928.02.patch
>
>
> Currently we set the HBase version for the timeline service storage to 1.0.1. 
> This is a fairly old version, and there are reasons to upgrade to a newer 
> version. We should upgrade it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5170) Eliminate singleton converters and static method access

2016-06-08 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5170:
---
Attachment: YARN-5170-YARN-2928.08.patch

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch, 
> YARN-5170-YARN-2928.04.patch, YARN-5170-YARN-2928.05.patch, 
> YARN-5170-YARN-2928.06.patch, YARN-5170-YARN-2928.07.patch, 
> YARN-5170-YARN-2928.08.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and simply make them 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class. As part of this refactor I'll move these to keep the 
> "Utils" class as small as possible and reserve it for truly generally used 
> utils that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321832#comment-15321832
 ] 

Hadoop QA commented on YARN-5170:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
35s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
12s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 40s 
{color} | {color:red} hadoop-yarn-server in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 40s {color} 
| {color:red} hadoop-yarn-server in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 50s 
{color} | {color:red} hadoop-yarn-server in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 50s {color} 
| {color:red} hadoop-yarn-server in the patch failed with JDK v1.7.0_101. 
{color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The 
patch generated 7 new + 0 unchanged - 2 fixed = 7 total (was 2) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 16s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 11s {color} 
| {color:red} hadoop-yarn-server-timelineservice 

[jira] [Updated] (YARN-5170) Eliminate singleton converters and static method access

2016-06-08 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5170:
---
Attachment: YARN-5170-YARN-2928.07.patch

YARN-5170-YARN-2928.07.patch with more hand-to-hand combat applied to 
formatting changes that were not caused by this patch and/or that the Eclipse 
formatter doesn't seem to agree with.

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch, 
> YARN-5170-YARN-2928.04.patch, YARN-5170-YARN-2928.05.patch, 
> YARN-5170-YARN-2928.06.patch, YARN-5170-YARN-2928.07.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and simply make them 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class. As part of this refactor I'll move these to keep the 
> "Utils" class as small as possible and reserve it for truly generally used 
> utils that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5202) Dynamic Overcommit of Node Resources - POC

2016-06-08 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321810#comment-15321810
 ] 

Inigo Goiri commented on YARN-5202:
---

As mentioned in YARN-5215, I think this work fits pretty nicely within 
YARN-1011. [~jlowe], [~nroberts], do you guys have any issues with moving this 
work there? We could use most of this patch over there. For sure, all the UI 
stuff in this patch should be added to YARN-1011.

> Dynamic Overcommit of Node Resources - POC
> --
>
> Key: YARN-5202
> URL: https://issues.apache.org/jira/browse/YARN-5202
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: YARN-5202.patch
>
>
> This Jira is to present a proof-of-concept implementation (collaboration 
> between [~jlowe] and myself) of a dynamic over-commit implementation in YARN. 
>  The type of over-commit implemented in this jira is similar to but not as 
> full-featured as what's being implemented via YARN-1011. YARN-1011 is where 
> we see ourselves heading but we needed something quick and completely 
> transparent so that we could test it at scale with our varying workloads 
> (mainly MapReduce, Spark, and Tez). Doing so has shed some light on how much 
> additional capacity we can achieve with over-commit approaches, and has 
> fleshed out some of the problems these approaches will face.
> Primary design goals:
> - Avoid changing protocols, application frameworks, or core scheduler logic - 
> simply adjust individual nodes' available resources based on current node 
> utilization and then let the scheduler do what it normally does
> - Over-commit slowly, pull back aggressively - If things are looking good and 
> there is demand, slowly add resources. If memory starts to look over-utilized, 
> aggressively reduce the amount of over-commit.
> - Make sure the nodes protect themselves - i.e. if memory utilization on a 
> node gets too high, preempt something - preferably something from a 
> preemptable queue
> A patch against trunk will be attached shortly.  Some notes on the patch:
> - This feature was originally developed against something akin to 2.7.  Since 
> the patch is mainly to explain the approach, we didn't do any sort of testing 
> against trunk except for basic build and basic unit tests
> - The key pieces of functionality are in {{SchedulerNode}}, 
> {{AbstractYarnScheduler}}, and {{NodeResourceMonitorImpl}}. The remainder of 
> the patch is mainly UI, Config, Metrics, Tests, and some minor code 
> duplication (e.g. to optimize node resource changes we treat an over-commit 
> resource change differently than an updateNodeResource change - i.e. 
> remove_node/add_node is just too expensive for the frequency of over-commit 
> changes)
> - We only over-commit memory at this point. 
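> A hedged sketch of the "over-commit slowly, pull back aggressively" policy 
> from the design goals above (the thresholds and step size are made-up knobs, 
> not the patch's actual configuration):
> {code}
> final class OvercommitPolicySketch {
>   private static final float HIGH_WATER_MARK = 0.90f; // pull back hard above this
>   private static final float LOW_WATER_MARK = 0.70f;  // room to grow below this
>   private static final long STEP_UP_MB = 512;         // grow slowly
> 
>   // Returns the new amount of memory to over-commit on a node, given its
>   // current physical memory utilization and whether there is pending demand.
>   static long nextOvercommitMB(long currentOvercommitMB, float memUtilization,
>       boolean pendingDemand) {
>     if (memUtilization > HIGH_WATER_MARK) {
>       return 0; // aggressively drop all over-commit
>     }
>     if (pendingDemand && memUtilization < LOW_WATER_MARK) {
>       return currentOvercommitMB + STEP_UP_MB; // slowly add resources
>     }
>     return currentOvercommitMB;
>   }
> }
> {code}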



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5215) Scheduling containers based on external load in the servers

2016-06-08 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321809#comment-15321809
 ] 

Inigo Goiri commented on YARN-5215:
---

[~curino], in our cluster we actually surface the utilization to the Web UI. We 
also report negative values for the available resources. However, I think we 
should do a better job of exposing this information, similar to what [~jlowe] 
has done in YARN-5202.

I'll start a thread in YARN-1011 about how to expose all this.

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch, YARN-5215.001.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers to estimate how much is consumed by external processes and 
> schedule based on this estimation.
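> A hedged sketch of the estimation itself (plain arithmetic; the method and 
> parameter names are illustrative, not the patch's actual classes):
> {code}
> final class ExternalLoadSketch {
>   // External load is whatever the node reports using beyond what its own
>   // containers account for; scheduling headroom then subtracts that estimate.
>   static long schedulableMB(long nodeCapacityMB, long allocatedMB,
>       long nodeUtilizedMB, long containersUtilizedMB) {
>     long externalMB = Math.max(0, nodeUtilizedMB - containersUtilizedMB);
>     return Math.max(0, nodeCapacityMB - allocatedMB - externalMB);
>   }
> }
> {code}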



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5215) Scheduling containers based on external load in the servers

2016-06-08 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-5215:
--
Attachment: YARN-5215.001.patch

Adding configuration switch, boundary checks and unit tests.

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch, YARN-5215.001.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers to estimate how much is consumed by external processes and 
> schedule based on this estimation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5215) Scheduling containers based on external load in the servers

2016-06-08 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321799#comment-15321799
 ] 

Inigo Goiri commented on YARN-5215:
---

Still fixing the unit test.

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch, YARN-5215.001.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers to estimate how much is consumed by external processes and 
> schedule based on this estimation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5208) Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens TestAMAuthorization tests with hadoop.security.token.service.use_ip enabled

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321786#comment-15321786
 ] 

Hadoop QA commented on YARN-5208:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 73 unchanged - 1 fixed = 74 total (was 74) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 25s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 22s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | hadoop.yarn.client.cli.TestLogsCLI |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808967/0005-YARN-5208.patch |
| JIRA Issue | YARN-5208 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 756e27f52db4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1500a0a |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/11929/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 

[jira] [Commented] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-08 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321744#comment-15321744
 ] 

Li Lu commented on YARN-5210:
-

Hmm, just realized we're closing the branch for now. I'll make sure this commit 
is fine in tomorrow's weekly meeting and then proceed. 

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5210-YARN-2928.01.patch
>
>
> Found a couple of issues while testing ATSv2.
> * There is an NPE while publishing DS_CONTAINER_START_EVENT, which in turn 
> means that this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> * Created time is not reported from distributed shell for both DS_CONTAINER 
> and DS_APP_ATTEMPT entities. 
> As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
> createdtime in response.
> {code}
>   [
> {
>   "metrics": [ ],
>   "events": [ ],
>   "type": "DS_APP_ATTEMPT",
>   "id": "appattempt_1465246237936_0003_01",
>   "isrelatedto": { },
>   "relatesto": { },
>   "info": {
> "UID": 
> "yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
>   },
>   "configs": { }
> }
>   ]
> {code}
> As can be seen from the response received upon querying a DS_CONTAINER entity, 
> createdtime is not present and DS_CONTAINER_START is not present either 
> (due to the NPE pointed out above).
> {code}
>   {
> "metrics": [ ],
> "events": [
>   {
> "id": "DS_CONTAINER_END",
> "timestamp": 1465314587480,
> "info": {
>   "Exit Status": 0,
>   "State": "COMPLETE"
> }
>   }
> ],
> "type": "DS_CONTAINER",
> "id": "container_e77_1465311876353_0003_01_02",
> "isrelatedto": { },
> "relatesto": { },
> "info": {
>   "UID": 
> "yarn-cluster!application_1465311876353_0003!DS_CONTAINER!container_e77_1465311876353_0003_01_02"
> },
> "configs": { }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, 

[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321742#comment-15321742
 ] 

Hadoop QA commented on YARN-5170:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
8s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
15s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The 
patch generated 21 new + 0 unchanged - 2 fixed = 21 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-jdk1.8.0_91
 with JDK v1.8.0_91 generated 26 new + 0 unchanged - 0 fixed = 26 total (was 0) 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 48s 
{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 47s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 7s 
{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests 

[jira] [Commented] (YARN-5218) Submit initial DNS server approach for review

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321734#comment-15321734
 ] 

Hadoop QA commented on YARN-5218:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
22s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 14s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
20s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 47s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} YARN-4757 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
21s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 49s {color} 
| {color:red} root generated 1 new + 697 unchanged - 0 fixed = 698 total (was 
697) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
24s {color} | {color:green} root: The patch generated 0 new + 1 unchanged - 46 
fixed = 1 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 12 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s 
{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 21s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 41s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
|   | 
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321718#comment-15321718
 ] 

Hadoop QA commented on YARN-5124:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 40s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 52s {color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 1 new + 33 unchanged - 
0 fixed = 34 total (was 33) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 35s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 23 
new + 130 unchanged - 31 fixed = 153 total (was 161) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 9s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 83m 45s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 106m 11s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.cli.TestLogsCLI |
|   | hadoop.yarn.client.api.impl.TestAMRMProxy |
|   | hadoop.yarn.client.TestGetGroups |
| Timed out junit tests | 
org.apache.hadoop.yarn.client.api.impl.TestDistributedScheduling |
|   | org.apache.hadoop.yarn.client.cli.TestYarnCLI |
|   | org.apache.hadoop.yarn.client.api.impl.TestAMRMClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestYarnClient |
|   | org.apache.hadoop.yarn.client.api.impl.TestNMClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12809064/YARN-5124.012.patch |
| JIRA Issue | YARN-5124 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d7124eacb363 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git 

[jira] [Commented] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest

2016-06-08 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321713#comment-15321713
 ] 

Carlo Curino commented on YARN-5124:


[~asuresh] this seems better; however, you still have the typedefs in the new 
class, and the foreach loops over Object are a bit odd. Would it be possible 
to move {{addResourceRequest}}, {{decResourceRequest}} and 
{{getMatchingRequests}} into the {{RemoteRequestTable}}? This would further 
simplify {{AMRMClientImpl}} and delegate all matters of data access to the 
db-style class.

Concretely I propose:
1) get rid of the typedefs
2) move the add/dec/get to the RemoteRequestTable
3) create a new getter to support {{checkLocalityRelaxationConflict}} where the 
matching is done internally in  {{RemoteRequestTable}} and the external method 
operates on a narrowly typed list (built by the getter).

This would fully encapsulate the data storage/access and allow you to evolve 
that independently of the external business-logic classes.
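
To make the proposal concrete, here is a rough, hypothetical sketch of such a 
table; the class and method names are illustrative only, not the actual 
YARN-5124 code:
{code:java}
// Illustrative sketch only -- not the actual YARN-5124 patch. A db-style
// table owns the nested map and exposes add/dec/get, so the client class
// never touches the raw hashtable and never needs typedefs.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RemoteRequestsTableSketch<T> {
  // priority -> resourceName -> outstanding requests of type T
  private final Map<Integer, Map<String, List<T>>> table = new HashMap<>();

  public void addResourceRequest(int priority, String resourceName, T request) {
    table.computeIfAbsent(priority, p -> new HashMap<>())
         .computeIfAbsent(resourceName, r -> new ArrayList<>())
         .add(request);
  }

  public void decResourceRequest(int priority, String resourceName, T request) {
    Map<String, List<T>> byName = table.get(priority);
    if (byName == null) {
      return;
    }
    List<T> requests = byName.get(resourceName);
    if (requests != null) {
      requests.remove(request);
    }
  }

  // Narrowly typed getter: the matching happens inside the table, so callers
  // such as checkLocalityRelaxationConflict() only ever see a List<T>.
  public List<T> getMatchingRequests(int priority, String resourceName) {
    return table.getOrDefault(priority, new HashMap<>())
                .getOrDefault(resourceName, new ArrayList<>());
  }
}
{code}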

> Modify AMRMClient to set the ExecutionType in the ResourceRequest
> -
>
> Key: YARN-5124
> URL: https://issues.apache.org/jira/browse/YARN-5124
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5124.001.patch, YARN-5124.002.patch, 
> YARN-5124.003.patch, YARN-5124.004.patch, YARN-5124.005.patch, 
> YARN-5124.006.patch, YARN-5124.008.patch, YARN-5124.009.patch, 
> YARN-5124.010.patch, YARN-5124.011.patch, YARN-5124.012.patch, 
> YARN-5124_YARN-5180_combined.007.patch, YARN-5124_YARN-5180_combined.008.patch
>
>
> Currently the {{ContainerRequest}} allows the AM to set the {{ExecutionType}} 
> in the AMRMClient, but it is not being set in the actual {{ResourceRequest}} 
> that is sent to the RM 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4308) ContainersAggregated CPU resource utilization reports negative usage in first few heartbeats

2016-06-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321709#comment-15321709
 ] 

Hudson commented on YARN-4308:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9934 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9934/])
YARN-4308. ContainersAggregated CPU resource utilization reports 
(naganarasimha_gr: rev 1500a0a3009e453c9f05a93df7a78b4e185eef30)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/WindowsBasedProcessTree.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ProcfsBasedProcessTree.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/MockResourceCalculatorProcessTree.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/MockCPUResourceCalculatorProcessTree.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ResourceCalculatorProcessTree.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/TestContainersMonitorResourceChange.java


> ContainersAggregated CPU resource utilization reports negative usage in first 
> few heartbeats
> 
>
> Key: YARN-4308
> URL: https://issues.apache.org/jira/browse/YARN-4308
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4308.patch, 0002-YARN-4308.patch, 
> 0003-YARN-4308.patch, 0004-YARN-4308.patch, 0005-YARN-4308.patch, 
> 0006-YARN-4308.patch, 0007-YARN-4308.patch, 0008-YARN-4308.patch, 
> 0009-YARN-4308.patch, 0010-YARN-4308.patch
>
>
> NodeManager reports ContainerAggregated CPU resource utilization as a negative 
> value in the first few heartbeat cycles. I added a new debug print and received 
> the values below from the heartbeats.
> {noformat}
> INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:
>  ContainersResource Utilization : CpuTrackerUsagePercent : -1.0 
> INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl:ContainersResource
>  Utilization :  CpuTrackerUsagePercent : 198.94598
> {noformat}
> It's better to send 0 as the CPU usage rather than sending negative values in 
> heartbeats, even though this happens only in the first few heartbeats.
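
For illustration only (this is not the committed YARN-4308 change), the 
clamping idea described above amounts to something like this minimal sketch:
{code:java}
// Minimal sketch of the clamping idea: treat a not-yet-available (negative)
// CPU reading as 0 instead of reporting it in the aggregated utilization.
public final class CpuUsageClamp {
  private CpuUsageClamp() {
  }

  public static float clamp(float cpuUsagePercent) {
    // The process tree returns a negative value until enough samples exist.
    return cpuUsagePercent < 0 ? 0f : cpuUsagePercent;
  }

  public static void main(String[] args) {
    System.out.println(clamp(-1.0f));      // 0.0
    System.out.println(clamp(198.94598f)); // 198.94598
  }
}
{code}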



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5170) Eliminate singleton converters and static method access

2016-06-08 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5170:
---
Attachment: YARN-5170-YARN-2928.06.patch

Attaching YARN-5170-YARN-2928.06.patch, which addresses many (most?) of the 
findbugs, checkstyle and javadoc errors.
Need to run it through the washer/dryer cycle again and hold it up to the light 
to see what (if anything) is left.
The unit test errors don't seem to be related to this patch.

> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch, 
> YARN-5170-YARN-2928.04.patch, YARN-5170-YARN-2928.05.patch, 
> YARN-5170-YARN-2928.06.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> agreed it is best to get rid of these singletons and simply make them 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> it is not necessary.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move those out to keep the 
> "Utils" class as small as possible, reserving it for truly general-purpose 
> utilities that don't really belong anywhere else.
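
A toy sketch of the refactoring direction described above, using simplified 
stand-in types rather than the actual timeline service classes:
{code:java}
// Toy sketch: replace a static/singleton converter with a plain instance
// field owned by the row-key class, so callers never go through a singleton.
public class RowKeySketch {

  interface KeyConverter<K> {
    byte[] encode(K key);
  }

  static final class StringKeyConverter implements KeyConverter<String> {
    @Override
    public byte[] encode(String key) {
      return key.getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }
  }

  private final String clusterId;
  // Before: SomeKeyConverter.getInstance() accessed statically everywhere.
  // After: an ordinary instance variable, created once per row-key object.
  private final KeyConverter<String> converter = new StringKeyConverter();

  RowKeySketch(String clusterId) {
    this.clusterId = clusterId;
  }

  // The byte[] form is computed on demand rather than in the constructor,
  // to avoid handing a partially constructed object to the converter.
  byte[] getRowKey() {
    return converter.encode(clusterId);
  }
}
{code}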



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5215) Scheduling containers based on external load in the servers

2016-06-08 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321696#comment-15321696
 ] 

Carlo Curino commented on YARN-5215:


[~jlowe] thanks for the comment, very useful for context, and you bring up good 
points on how users "perceive" the cluster. 

[~elgoiri], correct me if I am wrong, but this feature seems ideal for 
"scavenging" a YARN cluster out of otherwise utilized machines. In that 
setting, users should be aware that the cluster is not constant, i.e., the 
effects of the fluctuations are non-trivial and expected. However, I agree with 
you that surfacing them in the UI somehow is important.

All in all, I see a strong connection with over-commit, but this should be 
represented not just as a heavily overcommitted cluster.  I agree with 
[~elgoiri] that it is useful to build this feature in a way that more 
explicitly acknowledges that YARN is not the only thing running on the cluster. 

At the same time, we should try to have a set of configurables that make 
over/under-commit appear unified and coherent to admins, and UIs that surface 
them properly to users. [~elgoiri], since you were involved in YARN-1011, can 
you propose a way to do that?
 

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers to estimate how much is consumed by external processes and 
> schedule based on this estimation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4676) Automatic and Asynchronous Decommissioning Nodes Status Tracking

2016-06-08 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321698#comment-15321698
 ] 

Robert Kanter commented on YARN-4676:
-

Sorry [~danzhi] for disappearing for a bit there.  I got sidetracked with some 
other responsibilities.  Thanks [~vvasudev] for your detailed comments too.  
Here are some additional comments on the latest patch (14):

# The patch doesn't apply cleanly to the current trunk
#- I did roll back my repo to an older point where the patch does apply 
cleanly, but some tests failed:
{noformat}
testNodeRemovalNormally(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)
  Time elapsed: 12.43 sec  <<< FAILURE!
java.lang.AssertionError: Node state is not correct (timedout) 
expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.waitForState(MockRM.java:727)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testNodeRemovalUtil(TestResourceTrackerService.java:1474)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testNodeRemovalNormally(TestResourceTrackerService.java:1413)

testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)
  Time elapsed: 3.184 sec  <<< FAILURE!
java.lang.AssertionError: Node should have been forgotten! 
expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testNodeRemovalUtil(TestResourceTrackerService.java:1586)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService.testNodeRemovalGracefully(TestResourceTrackerService.java:1421)
{noformat}
# I like [~vvasudev]'s suggestion in an [earlier 
comment|https://issues.apache.org/jira/browse/YARN-4676?focusedCommentId=15272554=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15272554]
 about having the RM tell the NM to do a delayed shutdown.  This keeps the RM 
from having to track anything, so we don't have to worry about RM failovers, 
and I think it would be a lot simpler to implement and maintain.  I'd suggest 
that we do that in this JIRA instead of a followup JIRA, because otherwise 
we'll commit a bunch of code here just to throw it out later.
# In {{HostsFileReader#readXmlFileToMapWithFileInputStream}}, you can replace 
the multiple {{catch}} blocks with a single {{catch}} using this syntax:
{code:java}
catch (IOException | SAXException | ParserConfigurationException e) {
   ...
}
{code}
# I also agree with [~vvasudev] on point 7 about the exit-wait.ms property.  
This seems like a separate feature, so if you still want it, I'd suggest 
creating a separate JIRA with just this.

> Automatic and Asynchronous Decommissioning Nodes Status Tracking
> 
>
> Key: YARN-4676
> URL: https://issues.apache.org/jira/browse/YARN-4676
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Zhi
>Assignee: Daniel Zhi
>  Labels: features
> Attachments: GracefulDecommissionYarnNode.pdf, 
> GracefulDecommissionYarnNode.pdf, YARN-4676.004.patch, YARN-4676.005.patch, 
> YARN-4676.006.patch, YARN-4676.007.patch, YARN-4676.008.patch, 
> YARN-4676.009.patch, YARN-4676.010.patch, YARN-4676.011.patch, 
> YARN-4676.012.patch, YARN-4676.013.patch, YARN-4676.014.patch
>
>
> YARN-4676 implements an automatic, asynchronous and flexible mechanism to 
> gracefully decommission YARN nodes. After the user issues the refreshNodes 
> request, the ResourceManager automatically evaluates the status of all 
> affected nodes and kicks off decommission or recommission actions. The RM 
> asynchronously tracks container and application status related to 
> DECOMMISSIONING nodes so it can decommission the nodes immediately once they 
> are ready to be decommissioned. Decommissioning timeouts at individual-node 
> granularity are supported and can be dynamically updated. The mechanism 
> naturally supports multiple independent graceful decommissioning "sessions", 
> where each one involves different sets of nodes with different timeout 
> settings. Such support is ideal and necessary for graceful decommission 
> requests issued by external cluster management software rather than by a 
> human.
> DecommissioningNodeWatcher inside ResourceTrackingService tracks 
> DECOMMISSIONING nodes status automatically and asynchronously 

[jira] [Commented] (YARN-5215) Scheduling containers based on external load in the servers

2016-06-08 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321672#comment-15321672
 ] 

Inigo Goiri commented on YARN-5215:
---

Yes, I realized that the original title didn't mention the external load. It's 
fixed now, sorry about that; I think it's clearer. Feel free to tweak the 
description further.

As you mention, we could achieve this by tweaking the "guaranteed" size. 
However, I think that having an explicit concept of external utilization makes 
it simpler, and it's compatible with the overcommit approach (both can be 
enabled/disabled independently). In addition, the concept of node utilization 
is not planned to be used in YARN-1011 for now.

I'm going to post a patch in the next hour with:
* Unit tests
* Conf switches
* Boundary checks

Then, I agree that we need to report this properly to the user. I was thinking 
of exposing {{getExternalUtilization()}} or the updated {{getUnallocated()}} 
through the Web UI, etc. If we decide this feature should go ahead, I would add 
that here or in a new JIRA.
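
Roughly, the estimation could look like the sketch below. It reuses the method 
names mentioned above ({{getExternalUtilization()}}, {{getUnallocated()}}), but 
the arithmetic is only an illustration of the idea, not the patch:
{code:java}
// Sketch of the estimation: external load = total node utilization minus
// what YARN's own containers are using; what is left for scheduling is the
// node capacity minus allocated minus external, floored at zero.
public final class ExternalUtilizationSketch {
  private ExternalUtilizationSketch() {
  }

  static long getExternalUtilization(long nodeUsedMB, long containersUsedMB) {
    return Math.max(0, nodeUsedMB - containersUsedMB);
  }

  static long getUnallocated(long capacityMB, long allocatedMB, long externalMB) {
    return Math.max(0, capacityMB - allocatedMB - externalMB);
  }

  public static void main(String[] args) {
    long external = getExternalUtilization(48_000, 32_000);  // 16 GB external
    System.out.println(getUnallocated(64_000, 40_000, external));  // 8000 MB left
  }
}
{code}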

To summarize, the issues to discuss/finalize are:
* Decide if this should be a separate feature or within overcommit
* Add unit tests
* Add conf switches
* Add boundary checks
* Interface to expose this information

Regarding YARN-5202 vs YARN-1011, it looks to me like there's a lot of overlap 
between them. I think it'd be better to port most of YARN-5202 into YARN-1011. 
We probably should move this discussion into one of them.

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers to estimate how much is consumed by external processes and 
> schedule based on this estimation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5218) Submit initial DNS server approach for review

2016-06-08 Thread Jonathan Maron (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Maron updated YARN-5218:
-
Attachment: YARN-5218-YARN-4757.002.patch

> Submit initial DNS server approach for review
> -
>
> Key: YARN-5218
> URL: https://issues.apache.org/jira/browse/YARN-5218
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Jonathan Maron
>Assignee: Jonathan Maron
> Attachments: YARN-5218-YARN-4757.001.patch, 
> YARN-5218-YARN-4757.002.patch
>
>
> Submit the initial dnsjava based solution for review



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5215) Scheduling containers based on external load in the servers

2016-06-08 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321648#comment-15321648
 ] 

Jason Lowe commented on YARN-5215:
--

Ah, so the headline was a bit misleading.  Most people saw that and thought 
this feature was going to schedule more containers on the server, but this is 
essentially the opposite of YARN-1011 and YARN-5202.  It scales the nodes down 
from the original size as the node utilization increases rather than scaling 
them up from the original size as the node utilization decreases.  Instead of 
overcommit, this feature is "undercommit."  ;)

I'm OK with the idea of the feature in general, and I don't think it will be 
horribly incompatible.  In fact I think features like YARN-1011 / YARN-5202 
could emulate this behavior by tuning the "guaranteed" node size for YARN very 
low but allowing it to drastically overcommit up to the original node 
capability.  In other words rather than starting nodes big and scaling down, we 
start nodes small and scale up when we can.  The YARN-5202 patch is already 
dynamically scaling the node size based on the reported total node utilization, 
so it will respond to increasing external load similarly.  The only thing 
missing there is it won't go below the original node size no matter how bad the 
utilization gets, so either that would need to be changed or as I mentioned the 
users tune the feature differently to get this behavior.

Any thoughts on whether this is better implemented as an overcommit setup 
rather than an undercommit setup?  It may be confusing if YARN has two separate 
features doing essentially the same thing from opposite viewpoints.  Also the 
guaranteed containers from YARN-1011 are going to be difficult to guarantee if 
this feature can preempt them based on external node load.  Arguably the user 
should configure a guaranteed YARN capacity on these nodes and then YARN can 
opportunistically use the remaining node's capacity when it appears available.

If we do go with this approach, it seems like this patch is quite a ways off.  
Besides unit tests, conf switches, and boundary condition hardening, I think it 
will be confusing to users and admins to monitor it.  Simply adjusting the 
SchedulerNode will semantically accomplish the desired task as far as 
scheduling containers goes, but the UI, queue and cluster metrics will not 
reflect the reality of the scheduler.  For example if most of the nodes have 
significantly been scaled back due to external load, the scheduler UI will show 
a well-underutilized cluster when in reality it may be completely full and 
can't schedule another container.  That's going to be very confusing to users.  
And there are no metrics showing how much has been scaled back -- I think the 
user would have to go to the scheduler nodes page, sum the node capabilities 
themselves, notice it's significantly lower than the reported cluster total, 
and assume it must be this feature causing that anomaly.  I would think 
minimally the cluster size should be changing (along with the queue portions of 
that size) so the amount of utilization of the YARN cluster and scheduler UI is 
accurate.  That still leaves the user to divine why their cluster size is 
floating around over time when they aren't adding or removing nodes, which is 
why we may need another metric showing how much has been "stolen" by external 
node load outside of YARN.  Maybe we still have an overcommit metric but it 
goes negative when we've had original capacity removed by external factors?  
Not sure how best to represent it without over-cluttering the UI with a bunch 
of feature-specific fields.

This was addressed in the YARN-5202 patch by adjusting the queue and cluster 
metrics as we adjust the scheduler node, and there were also metrics and UI 
fields added to show the amount of overcommit.  Note that in the YARN-5202 
patch we added a fast-path to adjusting the node's size in the scheduler.  The 
typical remove-old-node-add-new-node form of updating is quite expensive since 
it computes unnecessary things like node label diffs, etc. and updates the 
metrics twice, once for the removal and once for the add.  Since this kind of 
feature is going to be adjusting node sizes all the time, a node adjustment 
needs to be as cheap as possible while still keeping the UI and metrics up to 
date.
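
For illustration of the fast-path idea, here is a stub sketch (stand-in types, 
not the actual scheduler or YARN-5202 code) of adjusting a node's total in 
place and touching the metrics once:
{code:java}
// Stub sketch of a cheap in-place node-size adjustment: one delta applied to
// the cluster metrics, no node removal/re-add and no label recomputation.
public class NodeResizeSketch {

  static class Node {
    long totalMB;
    Node(long totalMB) { this.totalMB = totalMB; }
  }

  static class ClusterMetrics {
    long clusterMB;
    ClusterMetrics(long clusterMB) { this.clusterMB = clusterMB; }
  }

  static void updateNodeResource(Node node, ClusterMetrics metrics, long newTotalMB) {
    long delta = newTotalMB - node.totalMB;
    node.totalMB = newTotalMB;
    metrics.clusterMB += delta;  // metrics touched once, with the delta only
  }

  public static void main(String[] args) {
    Node node = new Node(64_000);
    ClusterMetrics metrics = new ClusterMetrics(640_000);
    updateNodeResource(node, metrics, 48_000);  // external load took 16 GB
    System.out.println(metrics.clusterMB);      // 624000
  }
}
{code}
The point of such a path is that only a delta ever hits the aggregate metrics, 
which is what keeps frequent node-size adjustments cheap while the UI and 
metrics stay up to date.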


> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers 

[jira] [Commented] (YARN-1942) Many of ConverterUtils methods need to have public interfaces

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321631#comment-15321631
 ] 

Hadoop QA commented on YARN-1942:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 36 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 43s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 39s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 9s 
{color} | {color:red} root: The patch generated 110 new + 2991 unchanged - 33 
fixed = 3101 total (was 3024) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 12 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 1s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 11m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 12s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 11m 58s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 4s 
{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 58s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 46s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s 
{color} | {color:green} hadoop-yarn-server-timeline-pluginstorage in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 32s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 18s {color} 
| {color:red} 

[jira] [Commented] (YARN-5218) Submit initial DNS server approach for review

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321616#comment-15321616
 ] 

Hadoop QA commented on YARN-5218:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
19s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 33s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
23s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 50s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} YARN-4757 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
23s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 33s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 33s {color} 
| {color:red} root generated 1 new + 697 unchanged - 0 fixed = 698 total (was 
697) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
21s {color} | {color:green} root: The patch generated 0 new + 1 unchanged - 46 
fixed = 1 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 12 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 39s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 

[jira] [Updated] (YARN-5124) Modify AMRMClient to set the ExecutionType in the ResourceRequest

2016-06-08 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5124:
--
Attachment: YARN-5124.012.patch

Updating patch based on [~curino]'s suggestions.
Introduced a new class {{RemoteRequestsTable}} that encapsulates all the 
add/remove/update operations of the hashtable that tracks all the outstanding 
ResourceRequests.

> Modify AMRMClient to set the ExecutionType in the ResourceRequest
> -
>
> Key: YARN-5124
> URL: https://issues.apache.org/jira/browse/YARN-5124
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5124.001.patch, YARN-5124.002.patch, 
> YARN-5124.003.patch, YARN-5124.004.patch, YARN-5124.005.patch, 
> YARN-5124.006.patch, YARN-5124.008.patch, YARN-5124.009.patch, 
> YARN-5124.010.patch, YARN-5124.011.patch, YARN-5124.012.patch, 
> YARN-5124_YARN-5180_combined.007.patch, YARN-5124_YARN-5180_combined.008.patch
>
>
> Currently the {{ContainerRequest}} allows the AM to set the {{ExecutionType}} 
> in the AMRMClient, but it is not being set in the actual {{ResourceRequest}} 
> that is sent to the RM 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5070) upgrade HBase version for first merge

2016-06-08 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5070:
-
Attachment: YARN-5070-YARN-2928.02.patch

Uploading v2, which addresses the two checkstyle warnings.

> upgrade HBase version for first merge
> -
>
> Key: YARN-5070
> URL: https://issues.apache.org/jira/browse/YARN-5070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5070-YARN-2928.01.patch, 
> YARN-5070-YARN-2928.02.patch
>
>
> Currently we set the HBase version for the timeline service storage to 1.0.1. 
> This is a fairly old version, and there are reasons to upgrade to a newer 
> version. We should upgrade it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5220) Scheduling of OPPORTUNISTIC containers through YARN RM

2016-06-08 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5220:
-
Description: 
In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along with 
the existing GUARANTEED containers of YARN.
OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
are executed as long as there are available resources at the NM. Moreover, they 
are of lower priority than the GUARANTEED containers, that is, they can be 
preempted for a GUARANTEED container to start its execution.

In YARN-2877, we introduced distributed scheduling in YARN, and enabled 
OPPORTUNISTIC containers to be scheduled exclusively by distributed schedulers.

In this JIRA, we are proposing to extend the centralized YARN RM in order to 
enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.
This way, users can use OPPORTUNISTIC containers to improve the cluster's 
utilization, without the need to enable distributed scheduling.

This JIRA is also related to YARN-1011 that introduces the over-commitment of 
resources, scheduling additional OPPORTUNISTIC containers to the NMs based on 
the currently used resources and not based only on the allocated resources.

  was:
In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along with 
the existing GUARANTEED containers of YARN.
OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
are executed as long as there are available resources at the NM. Moreover, they 
are of lower priority than the GUARANTEED containers, that is, they can be 
preempted for a GUARANTEED container to start its execution.

In YARN-2877, we introduced distributed scheduling in YARN, and enabled 
OPPORTUNISTIC containers to be scheduled exclusively by distributed schedulers.

In this JIRA, we are proposing to extend the centralized YARN RM in order to 
enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.
This way, users can use OPPORTUNISTIC containers to improve the cluster's 
utilization, without needing to enable distributed scheduling.

This JIRA is also related to YARN-1011 that introduces the over-commitment of 
resources, scheduling additional OPPORTUNISTIC containers to the NMs based on 
the currently used resources and not based only on the allocated resources.


> Scheduling of OPPORTUNISTIC containers through YARN RM
> --
>
> Key: YARN-5220
> URL: https://issues.apache.org/jira/browse/YARN-5220
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along 
> with the existing GUARANTEED containers of YARN.
> OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
> are executed as long as there are available resources at the NM. Moreover, 
> they are of lower priority than the GUARANTEED containers, that is, they can 
> be preempted for a GUARANTEED container to start its execution.
> In YARN-2877, we introduced distributed scheduling in YARN, and enabled 
> OPPORTUNISTIC containers to be scheduled exclusively by distributed 
> schedulers.
> In this JIRA, we are proposing to extend the centralized YARN RM in order to 
> enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.
> This way, users can use OPPORTUNISTIC containers to improve the cluster's 
> utilization, without the need to enable distributed scheduling.
> This JIRA is also related to YARN-1011 that introduces the over-commitment of 
> resources, scheduling additional OPPORTUNISTIC containers to the NMs based on 
> the currently used resources and not based only on the allocated resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5220) Scheduling of OPPORTUNISTIC containers through YARN RM

2016-06-08 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5220:
-
Description: 
In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along with 
the existing GUARANTEED containers of YARN.
OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
are executed as long as there are available resources at the NM. Moreover, they 
are of lower priority than the GUARANTEED containers, that is, they can be 
preempted for a GUARANTEED container to start its execution.

In YARN-2877, we introduced distributed scheduling in YARN, and enabled 
OPPORTUNISTIC containers to be scheduled exclusively by distributed schedulers.

In this JIRA, we are proposing to extend the centralized YARN RM in order to 
enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.
This way, users can use OPPORTUNISTIC containers to improve the cluster's 
utilization, without needing to enable distributed scheduling.

This JIRA is also related to YARN-1011 that introduces the over-commitment of 
resources, scheduling additional OPPORTUNISTIC containers to the NMs based on 
the currently used resources and not based only on the allocated resources.

  was:
In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along with 
the existing GUARANTEED containers of YARN.
OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
are executed as long as there are available resources at the NM. Moreover, they 
are of lower priority than the GUARANTEED containers, that is, they can be 
preempted for a GUARANTEED container to start its execution.

In YARN-2877, we introduced distributed scheduling in YARN, and enabled 
OPPORTUNISTIC containers to be scheduled exclusively by distributed schedulers.

In this JIRA, we are proposing to extend the centralized YARN RM in order to 
enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.
This way, users can use OPPORTUNISTIC containers to improve the cluster's 
utilization, without needing to enable distributed scheduling.


> Scheduling of OPPORTUNISTIC containers through YARN RM
> --
>
> Key: YARN-5220
> URL: https://issues.apache.org/jira/browse/YARN-5220
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along 
> with the existing GUARANTEED containers of YARN.
> OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
> are executed as long as there are available resources at the NM. Moreover, 
> they are of lower priority than the GUARANTEED containers, that is, they can 
> be preempted for a GUARANTEED container to start its execution.
> In YARN-2877, we introduced distributed scheduling in YARN, and enabled 
> OPPORTUNISTIC containers to be scheduled exclusively by distributed 
> schedulers.
> In this JIRA, we are proposing to extend the centralized YARN RM in order to 
> enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.
> This way, users can use OPPORTUNISTIC containers to improve the cluster's 
> utilization, without needing to enable distributed scheduling.
> This JIRA is also related to YARN-1011 that introduces the over-commitment of 
> resources, scheduling additional OPPORTUNISTIC containers to the NMs based on 
> the currently used resources and not based only on the allocated resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5220) Scheduling of OPPORTUNISTIC containers through YARN RM

2016-06-08 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5220:
-
Description: 
In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along with 
the existing GUARANTEED containers of YARN.
OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
are executed as long as there are available resources at the NM. Moreover, they 
are of lower priority than the GUARANTEED containers, that is, they can be 
preempted for a GUARANTEED container to start its execution.

In YARN-2877, we introduced distributed scheduling in YARN, and enabled 
OPPORTUNISTIC containers to be scheduled exclusively by distributed schedulers.

In this JIRA, we are proposing to extend the centralized YARN RM in order to 
enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.
This way, users can use OPPORTUNISTIC containers to improve the cluster's 
utilization, without needing to enable distributed scheduling.

  was:
In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along with 
the existing GUARANTEED containers of YARN.
OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
are executed as long as there are available resources at the NM. Moreover, they 
are of lower priority than the GUARANTEED containers, that is, they can be 
preempted for a GUARANTEED container to start its execution.

In YARN-2877, we introduced distributed scheduling in YARN, and enabled 
OPPORTUNISTIC containers to be scheduled exclusively by distributed schedulers.

In this JIRA, we are proposing to extend the centralized YARN RM in order to 
enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.


> Scheduling of OPPORTUNISTIC containers through YARN RM
> --
>
> Key: YARN-5220
> URL: https://issues.apache.org/jira/browse/YARN-5220
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along 
> with the existing GUARANTEED containers of YARN.
> OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
> are executed as long as there are available resources at the NM. Moreover, 
> they are of lower priority than the GUARANTEED containers, that is, they can 
> be preempted for a GUARANTEED container to start its execution.
> In YARN-2877, we introduced distributed scheduling in YARN, and enabled 
> OPPORTUNISTIC containers to be scheduled exclusively by distributed 
> schedulers.
> In this JIRA, we are proposing to extend the centralized YARN RM in order to 
> enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.
> This way, users can use OPPORTUNISTIC containers to improve the cluster's 
> utilization, without needing to enable distributed scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5220) Scheduling of OPPORTUNISTIC containers through YARN RM

2016-06-08 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5220:
-
Description: 
In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along with 
the existing GUARANTEED containers of YARN.
OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
are executed as long as there are available resources at the NM. Moreover, they 
are of lower priority than the GUARANTEED containers, that is, they can be 
preempted for a GUARANTEED container to start its execution.

In YARN-2877, we introduced distributed scheduling in YARN, and enabled 
OPPORTUNISTIC containers to be scheduled exclusively by distributed schedulers.

In this JIRA, we are proposing to extend the centralized YARN RM in order to 
enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.

  was:
In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along with 
the existing GUARANTEED containers of YARN.
OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
are executed as long as there are available resources at the NM. Moreover, they 
are of lower priority than the GUARANTEED containers, that is, they can be 
preempted for a GUARANTEED container to start its execution.
In YARN-2877, we enabled distributed scheduling in YARN, and OPPORTUNISTIC 
containers are scheduled exclusively by distributed schedulers.

In this JIRA, we are proposing to extend the centralized YARN RM in order to 
enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.


> Scheduling of OPPORTUNISTIC containers through YARN RM
> --
>
> Key: YARN-5220
> URL: https://issues.apache.org/jira/browse/YARN-5220
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along 
> with the existing GUARANTEED containers of YARN.
> OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
> are executed as long as there are available resources at the NM. Moreover, 
> they are of lower priority than the GUARANTEED containers, that is, they can 
> be preempted for a GUARANTEED container to start its execution.
> In YARN-2877, we introduced distributed scheduling in YARN, and enabled 
> OPPORTUNISTIC containers to be scheduled exclusively by distributed 
> schedulers.
> In this JIRA, we are proposing to extend the centralized YARN RM in order to 
> enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5220) Scheduling of OPPORTUNISTIC containers through YARN RM

2016-06-08 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5220:
-
Description: 
In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along with 
the existing GUARANTEED containers of YARN.
OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
are executed as long as there are available resources at the NM. Moreover, they 
are of lower priority than the GUARANTEED containers, that is, they can be 
preempted for a GUARANTEED container to start its execution.
In YARN-2877, we enabled distributed scheduling in YARN, and OPPORTUNISTIC 
containers are scheduled exclusively by distributed schedulers.

In this JIRA, we are proposing to extend the centralized YARN RM in order to 
enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.

> Scheduling of OPPORTUNISTIC containers through YARN RM
> --
>
> Key: YARN-5220
> URL: https://issues.apache.org/jira/browse/YARN-5220
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> In YARN-2882, we introduced the notion of OPPORTUNISTIC containers, along 
> with the existing GUARANTEED containers of YARN.
> OPPORTUNISTIC containers are allowed to be queued at the NMs (YARN-2883), and 
> are executed as long as there are available resources at the NM. Moreover, 
> they are of lower priority than the GUARANTEED containers, that is, they can 
> be preempted for a GUARANTEED container to start its execution.
> In YARN-2877, we enabled distributed scheduling in YARN, and OPPORTUNISTIC 
> containers are scheduled exclusively by distributed schedulers.
> In this JIRA, we are proposing to extend the centralized YARN RM in order to 
> enable the scheduling of OPPORTUNISTIC containers in a centralized fashion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2572) Enhancements to the ReservationSytem/Planner

2016-06-08 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-2572:
---
Labels: rayon reservation  (was: )

> Enhancements to the ReservationSytem/Planner
> 
>
> Key: YARN-2572
> URL: https://issues.apache.org/jira/browse/YARN-2572
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler, fairscheduler, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>  Labels: rayon, reservation
>
> YARN-1051 introduces a ReservationSytem/Planner that enables the YARN RM to 
> handle time explicitly, i.e. users can now "reserve" capacity ahead of time 
> which is predictably allocated to them. This is an umbrella JIRA to enhance 
> the reservation system by integrating with FairScheduler, RM failover 
> mechanism, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5215) Scheduling containers based on external load in the servers

2016-06-08 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321562#comment-15321562
 ] 

Carlo Curino commented on YARN-5215:


Bar proper syncing with the YARN-1011 and YARN-5202 efforts, and polishing the 
patch to include tests and conf switches, I am very supportive of this effort. 
It seems a rather simple change that can deal with a broad set of issues when 
YARN is not the only thing running on a set of machines. The fact that you have 
been running this for a while is also reassuring. Were those prod clusters? At 
what scale?

Please provide a cleaned-up version of the patch, and respond to [~jlowe]'s 
comment.

[~jlowe], would you be ok with this going in? Can you build upon it for the work 
you are doing, or is it horribly incompatible?

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers to estimate how much is consumed by external processes and 
> schedule based on this estimation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5070) upgrade HBase version for first merge

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321560#comment-15321560
 ] 

Hadoop QA commented on YARN-5070:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 8m 52s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 53s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
49s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 33s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 7s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
23s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} YARN-2928 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
27s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} YARN-2928 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} YARN-2928 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 40s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
51s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 7s 
{color} | {color:red} root: The patch generated 2 new + 0 unchanged - 0 fixed = 
2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 44s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed 
with JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | 

[jira] [Created] (YARN-5220) Scheduling of OPPORTUNISTIC containers through YARN RM

2016-06-08 Thread Konstantinos Karanasos (JIRA)
Konstantinos Karanasos created YARN-5220:


 Summary: Scheduling of OPPORTUNISTIC containers through YARN RM
 Key: YARN-5220
 URL: https://issues.apache.org/jira/browse/YARN-5220
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: resourcemanager
Reporter: Konstantinos Karanasos
Assignee: Konstantinos Karanasos






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5206) RegistrySecurity includes id:pass in exception text if considered invalid

2016-06-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321542#comment-15321542
 ] 

Steve Loughran commented on YARN-5206:
--

thanks for doing the commit

> RegistrySecurity includes id:pass in exception text if considered invalid
> -
>
> Key: YARN-5206
> URL: https://issues.apache.org/jira/browse/YARN-5206
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client, security
>Affects Versions: 2.7.2, 2.6.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: YARN-5206-001.patch
>
>
> {{RegistrySecurity.digest(String)}} throws an IOE if the "digest" auth isn't 
> considered valid... this means that info may leak into logs.
> The fix is trivial.
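
Not the actual patch, but a minimal sketch of the general fix pattern (redact the credential before it can end up in the exception message); the helper name is illustrative only:

{code}
// Illustrative only -- not the YARN-5206 patch. The idea is to report an
// invalid "id:password" digest credential without echoing the secret back in
// the IOException text.
static String describeForError(String idPasswordPair) {
  int colon = (idPasswordPair == null) ? -1 : idPasswordPair.indexOf(':');
  if (colon <= 0) {
    return "<malformed credential>";
  }
  // Keep the id, hide the password entirely.
  return idPasswordPair.substring(0, colon) + ":<hidden>";
}

// e.g. throw new IOException(
//     "Invalid digest credentials: " + describeForError(idPasswordPair));
{code}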



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5219) When an export var command fails in launch_container.sh, the full container launch should fail

2016-06-08 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-5219:
--
Description: 
Today, a container fails if certain files fail to localize. However, if certain 
env vars fail to get setup properly either due to bugs in the yarn application 
or misconfiguration, the actual process launch still gets triggered. This 
results in either confusing error messages if the process fails to launch or 
worse yet the process launches but then starts behaving wrongly if the env var 
is used to control some behavioral aspects. 

In this scenario, the issue was reproduced by trying to do export 
abc="$\{foo.bar}" which is invalid as var names cannot contain "." in bash. 

  was:
Today, a container fails if certain files fail to localize. However, if certain 
env vars fail to get setup properly either due to bugs in the yarn application 
or misconfiguration, the actual process launch still gets triggered. This 
results in either confusing error messages if the process fails to launch or 
worse yet the process launches but then starts behaving wrongly if the env var 
is used to control some behavioral aspects. 

In this scenario, the issue was reproduced by trying to do export 
abc="$\X{foo.bar}" which is invalid as var names cannot contain "." in bash. 


> When an export var command fails in launch_container.sh, the full container 
> launch should fail
> --
>
> Key: YARN-5219
> URL: https://issues.apache.org/jira/browse/YARN-5219
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Hitesh Shah
>
> Today, a container fails if certain files fail to localize. However, if 
> certain env vars fail to get setup properly either due to bugs in the yarn 
> application or misconfiguration, the actual process launch still gets 
> triggered. This results in either confusing error messages if the process 
> fails to launch or worse yet the process launches but then starts behaving 
> wrongly if the env var is used to control some behavioral aspects. 
> In this scenario, the issue was reproduced by trying to do export 
> abc="$\{foo.bar}" which is invalid as var names cannot contain "." in bash. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5219) When an export var command fails in launch_container.sh, the full container launch should fail

2016-06-08 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated YARN-5219:
--
Description: 
Today, a container fails if certain files fail to localize. However, if certain 
env vars fail to get setup properly either due to bugs in the yarn application 
or misconfiguration, the actual process launch still gets triggered. This 
results in either confusing error messages if the process fails to launch or 
worse yet the process launches but then starts behaving wrongly if the env var 
is used to control some behavioral aspects. 

In this scenario, the issue was reproduced by trying to do export 
abc="$\X{foo.bar}" which is invalid as var names cannot contain "." in bash. 

  was:
Today, a container fails if certain files fail to localize. However, if certain 
env vars fail to get setup properly either due to bugs in the yarn application 
or misconfiguration, the actual process launch still gets triggered. This 
results in either confusing error messages if the process fails to launch or 
worse yet the process launches but then starts behaving wrongly if the env var 
is used to control some behavioral aspects. 

In this scenario, the issue was reproduced by trying to do export 
abc="${foo.bar}" which is invalid as var names cannot contain "." in bash. 


> When an export var command fails in launch_container.sh, the full container 
> launch should fail
> --
>
> Key: YARN-5219
> URL: https://issues.apache.org/jira/browse/YARN-5219
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Hitesh Shah
>
> Today, a container fails if certain files fail to localize. However, if 
> certain env vars fail to get setup properly either due to bugs in the yarn 
> application or misconfiguration, the actual process launch still gets 
> triggered. This results in either confusing error messages if the process 
> fails to launch or worse yet the process launches but then starts behaving 
> wrongly if the env var is used to control some behavioral aspects. 
> In this scenario, the issue was reproduced by trying to do export 
> abc="$\X{foo.bar}" which is invalid as var names cannot contain "." in bash. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5219) When an export var command fails in launch_container.sh, the full container launch should fail

2016-06-08 Thread Hitesh Shah (JIRA)
Hitesh Shah created YARN-5219:
-

 Summary: When an export var command fails in launch_container.sh, 
the full container launch should fail
 Key: YARN-5219
 URL: https://issues.apache.org/jira/browse/YARN-5219
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Hitesh Shah


Today, a container fails if certain files fail to localize. However, if certain 
env vars fail to get setup properly either due to bugs in the yarn application 
or misconfiguration, the actual process launch still gets triggered. This 
results in either confusing error messages if the process fails to launch or 
worse yet the process launches but then starts behaving wrongly if the env var 
is used to control some behavioral aspects. 

In this scenario, the issue was reproduced by trying to do export 
abc="${foo.bar}" which is invalid as var names cannot contain "." in bash. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5215) Scheduling containers based on external load in the servers

2016-06-08 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-5215:
--
Description: Currently YARN runs containers in the servers assuming that 
they own all the resources. The proposal is to use the utilization information 
in the node and the containers to estimate how much is consumed by external 
processes and schedule based on this estimation.  (was: Currently YARN runs 
containers in the servers assuming that they own all the resources. The 
proposal is to use the utilization information in the node and the containers 
to estimate how much is actually available in the NMs.)

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers to estimate how much is consumed by external processes and 
> schedule based on this estimation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5215) Scheduling containers based on external load in the servers

2016-06-08 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-5215:
--
Summary: Scheduling containers based on external load in the servers  (was: 
Scheduling containers based on load in the servers)

> Scheduling containers based on external load in the servers
> ---
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers to estimate how much is actually available in the NMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5218) Submit initial DNS server approach for review

2016-06-08 Thread Jonathan Maron (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Maron updated YARN-5218:
-
Attachment: YARN-5218-YARN-4757.001.patch

latest implementation

> Submit initial DNS server approach for review
> -
>
> Key: YARN-5218
> URL: https://issues.apache.org/jira/browse/YARN-5218
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Jonathan Maron
>Assignee: Jonathan Maron
> Attachments: YARN-5218-YARN-4757.001.patch
>
>
> Submit the initial dnsjava based solution for review



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5218) Submit initial DNS server approach for review

2016-06-08 Thread Jonathan Maron (JIRA)
Jonathan Maron created YARN-5218:


 Summary: Submit initial DNS server approach for review
 Key: YARN-5218
 URL: https://issues.apache.org/jira/browse/YARN-5218
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn
Reporter: Jonathan Maron
Assignee: Jonathan Maron


Submit the initial dnsjava based solution for review



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-06-08 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5216:
--
Description: 
Currently, the default action taken by the QueuingContainerManager, introduced 
in YARN-2883, when a GUARANTEED Container is scheduled on an NM with 
OPPORTUNISTIC containers using up resources, is to KILL the running 
OPPORTUNISTIC containers.

This JIRA proposes to expose a configurable hook to allow the NM to take a 
different action.

  was:
Currently, the default action taken by the QueuingContainerManager, introduce 
in YARN-2883, when a GUARANTEED Container is scheduled on an NM with 
OPPORTUNISTIC containers using up resources, is to KILL the running 
OPPORTUNISTIC containers.

This JIRA proposes to expose a configurable hook to allow the NM to take a 
different action.


> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Currently, the default action taken by the QueuingContainerManager, 
> introduced in YARN-2883, when a GUARANTEED Container is scheduled on an NM 
> with OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.
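
A minimal sketch of what such a configurable hook could look like, using Hadoop's standard Configuration.getClass/ReflectionUtils pattern; the interface, config key and default policy class below are hypothetical, not part of the actual patch:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

// Illustrative only: the interface, config key and default policy class are
// hypothetical, not part of the actual patch.
interface OpportunisticPreemptionPolicy {
  /** Decide what to do with queued/running OPPORTUNISTIC containers when a
   *  GUARANTEED container needs the resources. */
  void onGuaranteedContainerScheduled();
}

class KillOpportunisticContainersPolicy implements OpportunisticPreemptionPolicy {
  @Override
  public void onGuaranteedContainerScheduled() {
    // Default behaviour today: kill the running OPPORTUNISTIC containers.
  }
}

class PolicyLoader {
  static OpportunisticPreemptionPolicy load(Configuration conf) {
    // Standard Hadoop pattern for pluggable implementations.
    Class<? extends OpportunisticPreemptionPolicy> clazz = conf.getClass(
        "yarn.nodemanager.opportunistic-containers.preemption-policy.class",
        KillOpportunisticContainersPolicy.class,
        OpportunisticPreemptionPolicy.class);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}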



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5070) upgrade HBase version for first merge

2016-06-08 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5070:
-
Attachment: YARN-5070-YARN-2928.01.patch

> upgrade HBase version for first merge
> -
>
> Key: YARN-5070
> URL: https://issues.apache.org/jira/browse/YARN-5070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5070-YARN-2928.01.patch
>
>
> Currently we set the HBase version for the timeline service storage to 1.0.1. 
> This is a fairly old version, and there are reasons to upgrade to a newer 
> version. We should upgrade it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5070) upgrade HBase version for first merge

2016-06-08 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5070:
-
Attachment: (was: YARN-5070-YARN-2928.01.patch)

> upgrade HBase version for first merge
> -
>
> Key: YARN-5070
> URL: https://issues.apache.org/jira/browse/YARN-5070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5070-YARN-2928.01.patch
>
>
> Currently we set the HBase version for the timeline service storage to 1.0.1. 
> This is a fairly old version, and there are reasons to upgrade to a newer 
> version. We should upgrade it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-06-08 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5216:
--
Summary: Expose configurable preemption policy for OPPORTUNISTIC containers 
running on the NM  (was: Expose configurable preemption policy for 
OPPORTUNISTIC containers runnig on the NM)

> Expose configurable preemption policy for OPPORTUNISTIC containers running on 
> the NM
> 
>
> Key: YARN-5216
> URL: https://issues.apache.org/jira/browse/YARN-5216
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Currently, the default action taken by the QueuingContainerManager, introduce 
> in YARN-2883, when a GUARANTEED Container is scheduled on an NM with 
> OPPORTUNISTIC containers using up resources, is to KILL the running 
> OPPORTUNISTIC containers.
> This JIRA proposes to expose a configurable hook to allow the NM to take a 
> different action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5070) upgrade HBase version for first merge

2016-06-08 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5070:
-
Attachment: YARN-5070-YARN-2928.01.patch

Uploading patch v1. This upgrades HBase from 1.0.1 to 1.1.3 and Phoenix from 
4.5.0-SNAPSHOT to 4.7.0-HBase-1.1.

All the unit tests are passing; I still need to run the pseudo setup and check 
further.

> upgrade HBase version for first merge
> -
>
> Key: YARN-5070
> URL: https://issues.apache.org/jira/browse/YARN-5070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5070-YARN-2928.01.patch
>
>
> Currently we set the HBase version for the timeline service storage to 1.0.1. 
> This is a fairly old version, and there are reasons to upgrade to a newer 
> version. We should upgrade it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5153) [YARN-3368] Add a toggle to switch timeline view / table view for containers information inside application-attempt page

2016-06-08 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321397#comment-15321397
 ] 

Wangda Tan commented on YARN-5153:
--

Attached screenshot-5 to show the latest changes.

> [YARN-3368] Add a toggle to switch timeline view / table view for containers 
> information inside application-attempt page
> 
>
> Key: YARN-5153
> URL: https://issues.apache.org/jira/browse/YARN-5153
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5153-YARN-3368.1.patch, 
> YARN-5153.preliminary.1.patch, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, screenshot-4.png, screenshot-5.png
>
>
> Now we only support the timeline view for containers on the app-attempt page; it 
> will also be very useful to show a table of containers in some cases. For example, 
> users can sort containers based on priority, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5153) [YARN-3368] Add a toggle to switch timeline view / table view for containers information inside application-attempt page

2016-06-08 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5153:
-
Attachment: screenshot-5.png

> [YARN-3368] Add a toggle to switch timeline view / table view for containers 
> information inside application-attempt page
> 
>
> Key: YARN-5153
> URL: https://issues.apache.org/jira/browse/YARN-5153
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5153-YARN-3368.1.patch, 
> YARN-5153.preliminary.1.patch, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, screenshot-4.png, screenshot-5.png
>
>
> Now we only support the timeline view for containers on the app-attempt page; it 
> will also be very useful to show a table of containers in some cases. For example, 
> users can sort containers based on priority, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5153) [YARN-3368] Add a toggle to switch timeline view / table view for containers information inside application-attempt page

2016-06-08 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5153:
-
Attachment: YARN-5153-YARN-3368.1.patch

Attached a new patch. Instead of using a toggle to switch views, the new patch 
simply shows the table view and the timeline view together.

Users should be able to get more comprehensive information about containers.

[~sunilg], please let me know your thoughts.

> [YARN-3368] Add a toggle to switch timeline view / table view for containers 
> information inside application-attempt page
> 
>
> Key: YARN-5153
> URL: https://issues.apache.org/jira/browse/YARN-5153
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5153-YARN-3368.1.patch, 
> YARN-5153.preliminary.1.patch, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, screenshot-4.png
>
>
> Now we only support the timeline view for containers on the app-attempt page; it 
> will also be very useful to show a table of containers in some cases. For example, 
> users can sort containers based on priority, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2016-06-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321386#comment-15321386
 ] 

Varun Saxena commented on YARN-2962:


Have rebased the patch and fixed the comments given by Daniel. I haven't tested 
it in my setup after the rebase, but the tests pass and it should be good for 
review.

As per the current patch, there can be a race if 2 RMs become active at the same 
time. Do we need to handle it? I have never come across this scenario.
Basically, when we store an app, we first create the parent app node as per the 
split (if it does not exist) and then create the child znode which stores the 
app data. These 2 operations are not carried out within the same fencing, though.
And when we remove an app, we first delete the app znode containing the app 
data, then get the children of the parent app node to check whether there are 
any more child znodes, and if there are none, delete the parent app node. These 
3 operations are not done in a single fencing either, which can lead to a race 
between creating and deleting the parent app node.

We can, however, get rid of this potential race by doing these operations within 
a single fencing: create the fencing lock nodes explicitly first and then carry 
out these operations one by one (it can't be done in a single transaction, as we 
have to check how many children exist for the parent app node). Thoughts ?
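
For reference, a rough sketch (plain ZooKeeper API, not the actual ZKRMStateStore code) of the store/remove sequences described above; the paths and the split scheme are simplified:

{code}
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Simplified illustration of the store/remove sequences discussed above.
// Not the real ZKRMStateStore code; paths and the split scheme are made up.
class AppNodeSplitSketch {
  void storeApp(ZooKeeper zk, String parentPath, String appId, byte[] data)
      throws Exception {
    if (zk.exists(parentPath, false) == null) {
      // Step 1: create the parent node derived from the app id split.
      zk.create(parentPath, new byte[0],
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    }
    // Step 2: create the child znode holding the app data.
    zk.create(parentPath + "/" + appId, data,
        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
  }

  void removeApp(ZooKeeper zk, String parentPath, String appId) throws Exception {
    // Step 1: delete the child znode with the app data.
    zk.delete(parentPath + "/" + appId, -1);
    // Steps 2 and 3: check remaining children and delete the parent if empty.
    // Without a common fencing around these steps, another RM that just became
    // active could recreate the parent in between -- the race described above.
    List<String> children = zk.getChildren(parentPath, false);
    if (children.isEmpty()) {
      zk.delete(parentPath, -1);
    }
  }
}
{code}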

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.01.patch, YARN-2962.04.patch, 
> YARN-2962.05.patch, YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes even though 
> they individually they were all small.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5217) Close FileInputStream in NMWebServices#getLogs in branch-2.8

2016-06-08 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5217:

Attachment: YARN-5217.branch-2.8.patch

> Close FileInputStream in NMWebServices#getLogs in branch-2.8
> 
>
> Key: YARN-5217
> URL: https://issues.apache.org/jira/browse/YARN-5217
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5217.branch-2.8.patch
>
>
> In https://issues.apache.org/jira/browse/YARN-5199, we close LogReader in 
> AHSWebServices#getStreamingOutput and FileInputStream in 
> NMWebServices#getLogs. We should do the same thing in branch-2.8.
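
The general pattern being back-ported looks roughly like the following sketch (not the actual branch-2.8 patch; the logFile variable stands in for the requested container log): the stream opened for the response is closed in a finally block once the entity has been written.

{code}
// Sketch of the general fix pattern, not the actual branch-2.8 patch.
// "logFile" stands in for the container log file resolved by getLogs().
StreamingOutput stream = new StreamingOutput() {
  @Override
  public void write(OutputStream os) throws IOException, WebApplicationException {
    FileInputStream fis = new FileInputStream(logFile);
    try {
      byte[] buf = new byte[65536];
      int len;
      while ((len = fis.read(buf)) != -1) {
        os.write(buf, 0, len);
      }
      os.flush();
    } finally {
      // The close that YARN-5199 added on trunk and that this JIRA mirrors
      // in branch-2.8.
      fis.close();
    }
  }
};
{code}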



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5215) Scheduling containers based on load in the servers

2016-06-08 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321355#comment-15321355
 ] 

Inigo Goiri commented on YARN-5215:
---

This could be a subtask in YARN-1011. However, I thought it was isolated enough 
to make it independent. I think YARN-5202 also goes into the same overcommit 
direction. To clarify the differences to YARN-1011: here we propose to estimate 
the utilization from external processes (e.g., HDFS DataNode or Impala daemons) 
and schedule based on that, so I think it is orthogonal to the overcommit work. 
Happy to move it to a subtask if you guys think it makes more sense there.

Regarding the questions from [~curino]:
# This is just at scheduling time and with the proposed approach we will 
allocate the same as or less than the current schedulers, so it's conservative in 
terms of scheduling; the only issue is that it wouldn't use as many resources as 
it could. The estimation is: {{externalUtilization = nodeUtilization - 
containersUtilization}} and given that the container and node utilization are 
captured at different intervals, we could have containersUtilization > 
nodeUtilization; I think adding a check for negative values should be enough. 
In any case, I don't see issues with enforcing the resources, as the only thing 
we do is estimate the external utilization (see the sketch after this comment). 
# This patch only prevents scheduling, further discussion in #4.
# As I mentioned in the first paragraph, I think this is orthogonal to 
overcommit, we can run this without overcommiting resources and just prevent 
impact on the external load. If overcommitting is enabled, we can still play 
this trick.
# For the functionality described in this patch, this is it; we can open other 
tasks to do preemption at (1) scheduler level and (2) NM level. We could add 
the first one to this patch if needed.
# We have variations of this implemented and running in our cluster for the 
last year. In our scenario, we have other latency sensitive load running in 
those machines, and we want to guarantee they get as many resources as they 
need. Regarding unit testing, I can try to play with the MiniYarnCluster to 
fake external load; it shouldn't be too bad to extend 
{{TestMiniYarnClusterNodeUtilization}}.

(I tried to use bq to reply but it got messy, I hope this is comprehensive.)

Just to highlight my point on the major discussion, I think this can be a 
subtask of YARN-1011 but it's orthogonal to overcommit.
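
To make the estimation in point 1 concrete, a small sketch using plain numbers (in YARN these would come from the node's and the containers' utilization reports), including the clamp to zero mentioned above:

{code}
// Sketch of the estimation in point 1 above, with plain numbers instead of
// the actual utilization records, purely for illustration.
static long estimateExternalUtilizationMB(long nodeUsedMB, long containersUsedMB) {
  // Node and container utilization are sampled at different intervals, so the
  // difference can transiently go negative; clamp it to zero.
  return Math.max(0, nodeUsedMB - containersUsedMB);
}

// The scheduler would then consider roughly:
//   schedulable = nodeCapacityMB - containersAllocatedMB
//                 - estimateExternalUtilizationMB(nodeUsedMB, containersUsedMB);
{code}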

> Scheduling containers based on load in the servers
> --
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers to estimate how much is actually available in the NMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5217) Close FileInputStream in NMWebServices#getLogs in branch-2.8

2016-06-08 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-5217:
---

 Summary: Close FileInputStream in NMWebServices#getLogs in 
branch-2.8
 Key: YARN-5217
 URL: https://issues.apache.org/jira/browse/YARN-5217
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong


In https://issues.apache.org/jira/browse/YARN-5199, we close LogReader in 
AHSWebServices#getStreamingOutput and FileInputStream in NMWebServices#getLogs. 
We should do the same thing in branch-2.8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5214) Pending on synchronized method DirectoryCollection#checkDirs can hang NM's NodeStatusUpdater

2016-06-08 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321342#comment-15321342
 ] 

Nathan Roberts commented on YARN-5214:
--

I'm not suggesting this change shouldn't be made, but keep in mind that if the 
NM is having trouble performing this type of action within the timeout (10 
minutes or so), then the node is not very healthy and probably shouldn't be 
given anything more to run until the situation improves. It's going to have 
trouble doing all sorts of other things as well, so having it look unhealthy in 
some fashion isn't all bad. If we somehow keep heartbeats completely free of 
I/O, then the RM will keep assigning containers that will likely run into 
exactly the same slowness. 

We used to see similar issues that we resolved by switching to the deadline I/O 
scheduler (assuming linux). See 
https://issues.apache.org/jira/browse/HDFS-9239?focusedCommentId=15218302=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15218302


> Pending on synchronized method DirectoryCollection#checkDirs can hang NM's 
> NodeStatusUpdater
> 
>
> Key: YARN-5214
> URL: https://issues.apache.org/jira/browse/YARN-5214
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
>
> In one cluster, we notice NM's heartbeat to RM is suddenly stopped and wait a 
> while and marked LOST by RM. From the log, the NM daemon is still running, 
> but jstack hints NM's NodeStatusUpdater thread get blocked:
> 1.  Node Status Updater thread get blocked by 0x8065eae8 
> {noformat}
> "Node Status Updater" #191 prio=5 os_prio=0 tid=0x7f0354194000 nid=0x26fa 
> waiting for monitor entry [0x7f035945a000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.getFailedDirs(DirectoryCollection.java:170)
> - waiting to lock <0x8065eae8> (a 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getDisksHealthReport(LocalDirsHandlerService.java:287)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.getHealthReport(NodeHealthCheckerService.java:58)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.getNodeStatus(NodeStatusUpdaterImpl.java:389)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.access$300(NodeStatusUpdaterImpl.java:83)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl$1.run(NodeStatusUpdaterImpl.java:643)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> 2. The actual holder of this lock is DiskHealthMonitor:
> {noformat}
> "DiskHealthMonitor-Timer" #132 daemon prio=5 os_prio=0 tid=0x7f0397393000 
> nid=0x26bd runnable [0x7f035e511000]
>java.lang.Thread.State: RUNNABLE
> at java.io.UnixFileSystem.createDirectory(Native Method)
> at java.io.File.mkdir(File.java:1316)
> at 
> org.apache.hadoop.util.DiskChecker.mkdirsWithExistsCheck(DiskChecker.java:67)
> at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:104)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.verifyDirUsingMkdir(DirectoryCollection.java:340)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.testDirs(DirectoryCollection.java:312)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.checkDirs(DirectoryCollection.java:231)
> - locked <0x8065eae8> (a 
> org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.checkDirs(LocalDirsHandlerService.java:389)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.access$400(LocalDirsHandlerService.java:50)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService$MonitoringTimerTask.run(LocalDirsHandlerService.java:122)
> at java.util.TimerThread.mainLoop(Timer.java:555)
> at java.util.TimerThread.run(Timer.java:505)
> {noformat}
> This disk operation could take longer than expected, especially under high I/O 
> throughput, and we should have fine-grained locking for the related 
> operations here. 
> The same issue was raised and fixed on HDFS in HDFS-7489, and we should 
> probably have a similar fix here.
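
For illustration, the usual shape of such a fix (a sketch, not the actual patch): run the slow disk checks outside the monitor and only publish the results under a short lock, so heartbeat-path readers such as getFailedDirs() never wait on disk I/O. Names below loosely follow DirectoryCollection but are simplified.

{code}
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the fine-grained locking direction, not the patch.
// The slow testDirs() work runs without holding the monitor; only the cheap
// swap of results is synchronized.
class DirCheckSketch {
  private final List<String> localDirs = new ArrayList<>();
  private List<String> failedDirs = new ArrayList<>();

  void checkDirs() {
    List<String> snapshot;
    synchronized (this) {
      snapshot = new ArrayList<>(localDirs);       // copy state under the lock
    }

    List<String> newFailed = testDirs(snapshot);   // slow disk I/O, no lock held

    synchronized (this) {
      failedDirs = newFailed;                      // publish results under the lock
    }
  }

  synchronized List<String> getFailedDirs() {
    return new ArrayList<>(failedDirs);            // cheap, never blocks on I/O
  }

  private List<String> testDirs(List<String> dirs) {
    // Placeholder for the real mkdir/permission/disk checks.
    return new ArrayList<>();
  }
}
{code}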



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org

[jira] [Commented] (YARN-5215) Scheduling containers based on load in the servers

2016-06-08 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321318#comment-15321318
 ] 

Jason Lowe commented on YARN-5215:
--

Besides YARN-1011 this is also very similar to the dynamic overcommit prototype 
we're running, see YARN-5202.  It simply has the RM dynamically size the nodes 
based on the node's reported utilization.  Nodes will protect themselves from 
too much overcommit by preempting containers when utilization gets too high.

> Scheduling containers based on load in the servers
> --
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch
>
>
> Currently YARN runs containers in the servers assuming that they own all the 
> resources. The proposal is to use the utilization information in the node and 
> the containers to estimate how much is actually available in the NMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5210) NPE in Distributed Shell while publishing DS_CONTAINER_START event and other miscellaneous issues

2016-06-08 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321314#comment-15321314
 ] 

Li Lu commented on YARN-5210:
-

Thanks [~varun_saxena]! Patch LGTM, will commit later today if no other 
concerns raised. 

> NPE in Distributed Shell while publishing DS_CONTAINER_START event and other 
> miscellaneous issues
> -
>
> Key: YARN-5210
> URL: https://issues.apache.org/jira/browse/YARN-5210
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-2928-1st-milestone
> Attachments: YARN-5210-YARN-2928.01.patch
>
>
> Found a couple of issues while testing ATSv2.
> * There is a NPE while publishing DS_CONTAINER_START_EVENT which in turn 
> means that this event is not published.
> {noformat}
> 2016-06-07 23:19:00,020 
> [org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl #0] INFO 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl: Unchecked 
> exception is thrown from onContainerStarted for Container 
> container_e77_1465311876353_0007_01_02
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:389)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.putContainerEntity(ApplicationMaster.java:1284)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.publishContainerStartEvent(ApplicationMaster.java:1235)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster.access$1200(ApplicationMaster.java:175)
> at 
> org.apache.hadoop.yarn.applications.distributedshell.ApplicationMaster$NMCallbackHandler.onContainerStarted(ApplicationMaster.java:986)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:454)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer$StartContainerTransition.transition(NMClientAsyncImpl.java:436)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$StatefulContainer.handle(NMClientAsyncImpl.java:617)
> at 
> org.apache.hadoop.yarn.client.api.async.impl.NMClientAsyncImpl$ContainerEventProcessor.run(NMClientAsyncImpl.java:676)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> * Created time is not reported from distributed shell for both DS_CONTAINER 
> and DS_APP_ATTEMPT entities. 
> As can be seen below, when we query DS_APP_ATTEMPT entities, we do not get 
> createdtime in response.
> {code}
>   [
> {
>   "metrics": [ ],
>   "events": [ ],
>   "type": "DS_APP_ATTEMPT",
>   "id": "appattempt_1465246237936_0003_01",
>   "isrelatedto": { },
>   "relatesto": { },
>   "info": {
> "UID": 
> "yarn-cluster!application_1465246237936_0003!DS_APP_ATTEMPT!appattempt_1465246237936_0003_01"
>   },
>   "configs": { }
> }
>   ]
> {code}
> As can be seen from response received upon querying a DS_CONTAINER entity we 
> can see that createdtime is not present and DS_CONTAINER_START is not present 
> either(due to NPE pointed above).
> {code}
>   {
> "metrics": [ ],
> "events": [
>   {
> "id": "DS_CONTAINER_END",
> "timestamp": 1465314587480,
> "info": {
>   "Exit Status": 0,
>   "State": "COMPLETE"
> }
>   }
> ],
> "type": "DS_CONTAINER",
> "id": "container_e77_1465311876353_0003_01_02",
> "isrelatedto": { },
> "relatesto": { },
> "info": {
>   "UID": 
> "yarn-cluster!application_1465311876353_0003!DS_CONTAINER!container_e77_1465311876353_0003_01_02"
> },
> "configs": { }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: 

[jira] [Comment Edited] (YARN-5170) Eliminate singleton converters and static method access

2016-06-08 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321222#comment-15321222
 ] 

Vrushali C edited comment on YARN-5170 at 6/8/16 7:24 PM:
--

I applied the patch on my laptop with jdk 1.7 and all the tests in 
hadoop-yarn-server-timelineservice-hbase-tests and 
hadoop-yarn-server-timelineservice are passing. 

I am reviewing the patch, jumping around in the code.



was (Author: vrushalic):

I applied the patch on my laptop and all the tests in 
hadoop-yarn-server-timelineservice-hbase-tests and 
hadoop-yarn-server-timelineservice are passing. 

I am reviewing the patch, jumping around in the code.


> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch, 
> YARN-5170-YARN-2928.04.patch, YARN-5170-YARN-2928.05.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and simply make them 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> they are not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move these out to keep the 
> "Utils" class as small as possible and reserve it for truly general-purpose 
> utilities that don't really belong anywhere else.
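To make the direction described above concrete, here is a hedged sketch; the class names below (KeyConverterSketch, FlowRunRowKeySketch) are illustrative stand-ins, not the actual timelineservice storage classes.

{code}
// Illustrative only: replace a singleton converter reached via static methods
// with an ordinary instance field owned by the row key object.
import java.nio.charset.StandardCharsets;

// Stand-in for the real KeyConverter implementations.
final class KeyConverterSketch {
  byte[] encode(String cluster, String user, String flow, long runId) {
    return (cluster + "!" + user + "!" + flow + "!" + runId)
        .getBytes(StandardCharsets.UTF_8);
  }
}

final class FlowRunRowKeySketch {
  // Instance field instead of KeyConverterSketch.getInstance().
  private final KeyConverterSketch converter = new KeyConverterSketch();
  private final String clusterId;
  private final String userId;
  private final String flowName;
  private final long flowRunId;

  FlowRunRowKeySketch(String clusterId, String userId, String flowName,
      long flowRunId) {
    this.clusterId = clusterId;
    this.userId = userId;
    this.flowName = flowName;
    this.flowRunId = flowRunId;
  }

  /** Encode once per row key instance rather than through a shared singleton. */
  byte[] getRowKey() {
    return converter.encode(clusterId, userId, flowName, flowRunId);
  }
}
{code}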



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5070) upgrade HBase version for first merge

2016-06-08 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321287#comment-15321287
 ] 

Vrushali C commented on YARN-5070:
--

I have been looking into the HBase upgrade. Here is a summary:

Background:
ATS should make use of a table-specific coprocessor for the Flow table (instead 
of system coprocessors).

Problems in reaching that goal:
A table-specific coprocessor requires dynamic loading of coprocessors to work 
for classes starting with org.apache.hadoop. Presently that is fixed (allowed) 
only in HBase 1.2.x.

However, the version of Phoenix that will work with HBase 1.2.x (4.8.0) is 
still to be released.

Current options:
So we have the option of upgrading to HBase 1.1.x. This version still does not 
allow dynamic loading, but it would be good to upgrade in general.

With the latest stable release, 1.1.5, there are several dependency convergence 
errors.

Presently, I got the code (with some stubbed methods added) to compile with 
HBase 1.1.3.

Current in-progress steps with 1.1.3:
- 1.1.3 has API changes for the coprocessor, specifically the introduction of 
ScannerContext. ScannerContext instances encapsulate limit tracking and progress 
towards those limits during invocations of next. But this class has 
package-private fields, and I am looking into reflection to read and update 
them.

In short, this requires certain code changes at the level of the scan 
iterations, which will need adequate testing.
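For reference, a small sketch of the reflection approach mentioned above; the field name used here is a placeholder, not the real ScannerContext layout, and reflection like this tends to be fragile across HBase minor releases, which is part of why the testing matters.

{code}
// Hedged sketch: read and update a package-private field from outside its package.
import java.lang.reflect.Field;

public final class PackagePrivateFieldAccess {
  private PackagePrivateFieldAccess() { }

  // e.g. readField(scannerContext, "someLimitField") -- the field name is hypothetical.
  static Object readField(Object target, String fieldName) throws Exception {
    Field f = target.getClass().getDeclaredField(fieldName);
    f.setAccessible(true);   // bypass package-private visibility
    return f.get(target);
  }

  static void writeField(Object target, String fieldName, Object value)
      throws Exception {
    Field f = target.getClass().getDeclaredField(fieldName);
    f.setAccessible(true);
    f.set(target, value);
  }
}
{code}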





> upgrade HBase version for first merge
> -
>
> Key: YARN-5070
> URL: https://issues.apache.org/jira/browse/YARN-5070
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Vrushali C
>Priority: Critical
>  Labels: yarn-2928-1st-milestone
>
> Currently we set the HBase version for the timeline service storage to 1.0.1. 
> This is a fairly old version, and there are reasons to upgrade to a newer 
> version. We should upgrade it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321255#comment-15321255
 ] 

Hadoop QA commented on YARN-2962:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 29s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 5 
new + 259 unchanged - 2 fixed = 264 total (was 261) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 21s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 32s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808983/YARN-2962.05.patch |
| JIRA Issue | YARN-2962 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux a0be6640fa6d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5a43583 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-5215) Scheduling containers based on load in the servers

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321233#comment-15321233
 ] 

Hadoop QA commented on YARN-5215:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 40s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 38s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 2 
new + 5 unchanged - 0 fixed = 7 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 34m 41s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 18s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808984/YARN-5215.000.patch |
| JIRA Issue | YARN-5215 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ceaf562e1a6e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5a43583 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 

[jira] [Commented] (YARN-5170) Eliminate singleton converters and static method access

2016-06-08 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321222#comment-15321222
 ] 

Vrushali C commented on YARN-5170:
--


I applied the patch on my laptop and all the tests in 
hadoop-yarn-server-timelineservice-hbase-tests and 
hadoop-yarn-server-timelineservice are passing. 

I am reviewing the patch, jumping around in the code.


> Eliminate singleton converters and static method access
> ---
>
> Key: YARN-5170
> URL: https://issues.apache.org/jira/browse/YARN-5170
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Varun Saxena
> Attachments: YARN-5170-YARN-2928.01.patch, 
> YARN-5170-YARN-2928.02.patch, YARN-5170-YARN-2928.03.patch, 
> YARN-5170-YARN-2928.04.patch, YARN-5170-YARN-2928.05.patch
>
>
> As part of YARN-5109 we introduced several KeyConverter classes.
> To stay consistent with the existing LongConverter in the sample patch I 
> created, I made these other converter classes singletons as well.
> In conversation with [~sjlee0], who has a general dislike of singletons, we 
> discussed that it is best to get rid of these singletons and simply make them 
> instance variables.
> There are other classes where the keys have static methods referring to a 
> singleton converter.
> Moreover, it turns out that due to code evolution we end up creating the same 
> keys several times.
> So the general approach is to not re-instantiate row keys and converters when 
> they are not needed.
> I would like to create the byte[] rowKey in the RowKey classes' constructors, 
> but that would leak an incomplete object to the converter.
> There are a few methods in TimelineStorageUtils that are used only once, or 
> only by one class; as part of this refactor I'll move these out to keep the 
> "Utils" class as small as possible and reserve it for truly general-purpose 
> utilities that don't really belong anywhere else.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5191) Rename the “download=true” option for getLogs in NMWebServices and AHSWebServices

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321195#comment-15321195
 ] 

Hadoop QA commented on YARN-5191:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 0 new + 279 unchanged - 2 fixed = 279 total (was 281) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 9s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 58s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 7s 
{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 52s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808988/YARN-5191.6.patch |
| JIRA Issue | YARN-5191 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 64dbcf60 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git 

[jira] [Updated] (YARN-4920) ATS/NM should support a link to download/get the logs in text format

2016-06-08 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-4920:

Fix Version/s: (was: 2.8.0)
   2.9.0

> ATS/NM should support a link to download/get the logs in text format
> ---
>
> Key: YARN-4920
> URL: https://issues.apache.org/jira/browse/YARN-4920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-4920.2.patch, YARN-4920.20160424.branch-2.patch, 
> YARN-4920.3.patch, YARN-4920.4.patch, YARN-4920.5.patch, YARN-4920.6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4920) ATS/NM should support a link to download/get the logs in text format

2016-06-08 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321138#comment-15321138
 ] 

Xuan Gong commented on YARN-4920:
-

Reverted the commit from 2.8 and set the fix version to 2.9.

> ATS/NM should support a link to download/get the logs in text format
> ---
>
> Key: YARN-4920
> URL: https://issues.apache.org/jira/browse/YARN-4920
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-4920.2.patch, YARN-4920.20160424.branch-2.patch, 
> YARN-4920.3.patch, YARN-4920.4.patch, YARN-4920.5.patch, YARN-4920.6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5215) Scheduling containers based on load in the servers

2016-06-08 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321105#comment-15321105
 ] 

Naganarasimha G R commented on YARN-5215:
-

Almost the same set of queries from my end too... I think YARN-1011 is more 
comprehensive. Thoughts?

> Scheduling containers based on load in the servers
> --
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch
>
>
> Currently YARN runs containers on the servers assuming that it owns all of 
> their resources. The proposal is to use the utilization information from the 
> node and the containers to estimate how much is actually available in the NMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4288) NodeManager restart should keep retrying to register to RM while connection exceptions happen during RM failover

2016-06-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321107#comment-15321107
 ] 

Junping Du commented on YARN-4288:
--

Sure. Please go ahead. Thanks Jason!

> NodeManager restart should keep retrying to register to RM while connection 
> exceptions happen during RM failover.
> 
>
> Key: YARN-4288
> URL: https://issues.apache.org/jira/browse/YARN-4288
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
> Fix For: 2.8.0, 2.7.3
>
> Attachments: YARN-4288-v2.patch, YARN-4288-v3.patch, YARN-4288.patch
>
>
> When the NM gets restarted, NodeStatusUpdaterImpl will try to register with 
> the RM over RPC, which could throw the following exception if the RM gets 
> restarted at the same time:
> {noformat}
> 2015-08-17 14:35:59,434 ERROR nodemanager.NodeStatusUpdaterImpl 
> (NodeStatusUpdaterImpl.java:rebootNodeStatusUpdaterAndRegisterWithRM(222)) - 
> Unexpected error rebooting NodeStatusUpdater
> java.io.IOException: Failed on local exception: java.io.IOException: 
> Connection reset by peer; Host Details : local host is: "172.27.62.28"; 
> destination host is: "172.27.62.57":8025;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
> at org.apache.hadoop.ipc.Client.call(Client.java:1473)
> at org.apache.hadoop.ipc.Client.call(Client.java:1400)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
> at com.sun.proxy.$Proxy36.registerNodeManager(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy37.registerNodeManager(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:257)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.rebootNodeStatusUpdaterAndRegisterWithRM(NodeStatusUpdaterImpl.java:215)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager$2.run(NodeManager.java:304)
> Caused by: java.io.IOException: Connection reset by peer
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:197)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
> at 
> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
> at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
> at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at 
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:514)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
> at java.io.DataInputStream.readInt(DataInputStream.java:387)
> at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1072)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:967)
> 2015-08-17 14:35:59,436 FATAL nodemanager.NodeManager 
> (NodeManager.java:run(307)) - Error while rebooting NodeStatusUpdater.
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.IOException: 
> Failed on local exception: java.io.IOException: Connection reset by peer; 
> Host Details : local host is: "172.27.62.28"; destination host is: 
> "172.27.62.57":8025;
> at 
> 

[jira] [Commented] (YARN-5208) Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens TestAMAuthorization tests with hadoop.security.token.service.use_ip enabled

2016-06-08 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321098#comment-15321098
 ] 

Rohith Sharma K S commented on YARN-5208:
-

Jenkins is not running, and there is a thread going on in the common-dev mailing 
list. Maybe after some time, or tomorrow, we need to trigger a build to see the QA report.

> Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens 
> TestAMAuthorization tests with hadoop.security.token.service.use_ip enabled
> -
>
> Key: YARN-5208
> URL: https://issues.apache.org/jira/browse/YARN-5208
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
>  Labels: test
> Attachments: 0001-YARN-5208.patch, 0002-YARN-5208.patch, 
> 0003-YARN-5208.patch, 0004-YARN-5208.patch, 0005-YARN-5208.patch, 
> 0005-YARN-5208.patch
>
>
> All YARN test cases are running with *hadoop.security.token.service.use_ip* 
> disabled. As a result, a few test cases ({{TestAMRMClient, TestNMClient, 
> TestYarnClient, TestClientRMTokens, TestAMRMTokens}}) are consistently failing 
> because they are unable to resolve hostnames (see HADOOP-12687, YARN-4306, 
> YARN-4318).
> I would suggest running the tests with *hadoop.security.token.service.use_ip* 
> enabled by default. And for the HA test cases which mandatorily require it to 
> be disabled, change the test cases as required by setting 
> {code}
> conf.setBoolean(
> CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP, false);
> SecurityUtil.setConfiguration(conf);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5191) Rename the “download=true” option for getLogs in NMWebServices and AHSWebServices

2016-06-08 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5191:

Attachment: YARN-5191.6.patch

> Rename the “download=true” option for getLogs in NMWebServices and 
> AHSWebServices
> -
>
> Key: YARN-5191
> URL: https://issues.apache.org/jira/browse/YARN-5191
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5191.1.patch, YARN-5191.2.patch, YARN-5191.3.patch, 
> YARN-5191.4.patch, YARN-5191.5.patch, YARN-5191.6.patch
>
>
> Rename the “download=true” option to instead be something like 
> “format=octet-stream”, so that we are explicit
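For illustration only, a hedged sketch of what fetching a log with the renamed option could look like from a client. The host, port (8042 is the usual NM web port) and REST path are assumptions, and the final parameter name may differ from the "format=octet-stream" wording in the description above.

{code}
// Hypothetical client-side usage; URL pieces are assumptions, not the patch.
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public final class FetchLogAsOctetStream {
  public static void main(String[] args) throws Exception {
    String url = "http://nm-host:8042/ws/v1/node/containerlogs/"
        + "container_e77_1465311876353_0003_01_000002/stderr"
        + "?format=octet-stream";   // previously download=true
    try (InputStream in = new URL(url).openStream()) {
      // Stream the log straight to a local file.
      Files.copy(in, Paths.get("stderr.log"), StandardCopyOption.REPLACE_EXISTING);
    }
  }
}
{code}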



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5215) Scheduling containers based on load in the servers

2016-06-08 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321091#comment-15321091
 ] 

Carlo Curino commented on YARN-5215:


This sounds like a generally good idea (the patch obviously needs work). It 
would, for example, be able to take into account HDFS resource consumption; or, 
more generally, if other services run on the same box and have non-constant 
resource utilization, we would not need to pessimistically bound the resources 
given to YARN.

Questions:
 # How do we ensure that there are no weird feedback loops? E.g., a task is 
scheduled and consumes lots of resources, and as a consequence the scheduler 
lowers the load on the node, and this task grabs even more resources. For 
CPU/memory we might rely on enforcement, but what about adding non-enforced 
resources?
 # Would you also trigger preemption based on this? Or only avoid scheduling 
more load if the node is busy?
 # What is the interplay between this and the work on Overcommit?
 # The patch looks very simple/small for this feature; is that all that is 
needed here? More dependencies?
 # How do we test this until we are convinced it works? (Are you using it anywhere?)

[~kasha], [~kkaranasos], [~asuresh] can you guys comment on this?

> Scheduling containers based on load in the servers
> --
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch
>
>
> Currently YARN runs containers on the servers assuming that it owns all of 
> their resources. The proposal is to use the utilization information from the 
> node and the containers to estimate how much is actually available in the NMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4288) NodeManager restart should keep retrying to register to RM while connection exceptions happen during RM failover

2016-06-08 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-4288:
-
Fix Version/s: 2.7.3

Thanks, Junping!  We've seen AMRMClientImpl die with connection reset by peer 
instead of retrying in the RM proxy layer on 2.7, so I committed this to 
branch-2.7 as well.


> NodeManager restart should keep retrying to register to RM while connection 
> exceptions happen during RM failover.
> 
>
> Key: YARN-4288
> URL: https://issues.apache.org/jira/browse/YARN-4288
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
> Fix For: 2.8.0, 2.7.3
>
> Attachments: YARN-4288-v2.patch, YARN-4288-v3.patch, YARN-4288.patch
>
>
> When the NM gets restarted, NodeStatusUpdaterImpl will try to register with 
> the RM over RPC, which could throw the following exception if the RM gets 
> restarted at the same time:
> {noformat}
> 2015-08-17 14:35:59,434 ERROR nodemanager.NodeStatusUpdaterImpl 
> (NodeStatusUpdaterImpl.java:rebootNodeStatusUpdaterAndRegisterWithRM(222)) - 
> Unexpected error rebooting NodeStatusUpdater
> java.io.IOException: Failed on local exception: java.io.IOException: 
> Connection reset by peer; Host Details : local host is: "172.27.62.28"; 
> destination host is: "172.27.62.57":8025;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
> at org.apache.hadoop.ipc.Client.call(Client.java:1473)
> at org.apache.hadoop.ipc.Client.call(Client.java:1400)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
> at com.sun.proxy.$Proxy36.registerNodeManager(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:68)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy37.registerNodeManager(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:257)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.rebootNodeStatusUpdaterAndRegisterWithRM(NodeStatusUpdaterImpl.java:215)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager$2.run(NodeManager.java:304)
> Caused by: java.io.IOException: Connection reset by peer
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:197)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
> at 
> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
> at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
> at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at 
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:514)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
> at java.io.DataInputStream.readInt(DataInputStream.java:387)
> at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1072)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:967)
> 2015-08-17 14:35:59,436 FATAL nodemanager.NodeManager 
> (NodeManager.java:run(307)) - Error while rebooting NodeStatusUpdater.
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.IOException: 
> Failed on local exception: java.io.IOException: Connection reset by peer; 
> Host Details : local host is: "172.27.62.28"; destination host is: 
> "172.27.62.57":8025;
> at 
> 

[jira] [Commented] (YARN-5215) Scheduling containers based on load in the servers

2016-06-08 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321085#comment-15321085
 ] 

Rohith Sharma K S commented on YARN-5215:
-

Is this more or less similar to YARN-1011?

> Scheduling containers based on load in the servers
> --
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch
>
>
> Currently YARN runs containers on the servers assuming that it owns all of 
> their resources. The proposal is to use the utilization information from the 
> node and the containers to estimate how much is actually available in the NMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5191) Rename the “download=true” option for getLogs in NMWebServices and AHSWebServices

2016-06-08 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321080#comment-15321080
 ] 

Xuan Gong commented on YARN-5191:
-

rebase the patch

> Rename the “download=true” option for getLogs in NMWebServices and 
> AHSWebServices
> -
>
> Key: YARN-5191
> URL: https://issues.apache.org/jira/browse/YARN-5191
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5191.1.patch, YARN-5191.2.patch, YARN-5191.3.patch, 
> YARN-5191.4.patch, YARN-5191.5.patch, YARN-5191.6.patch
>
>
> Rename the “download=true” option to instead be something like 
> “format=octet-stream”, so that we are explicit



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5215) Scheduling containers based on load in the servers

2016-06-08 Thread Inigo Goiri (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Inigo Goiri updated YARN-5215:
--
Attachment: YARN-5215.000.patch

First version to show the idea.

> Scheduling containers based on load in the servers
> --
>
> Key: YARN-5215
> URL: https://issues.apache.org/jira/browse/YARN-5215
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Inigo Goiri
> Attachments: YARN-5215.000.patch
>
>
> Currently YARN runs containers on the servers assuming that it owns all of 
> their resources. The proposal is to use the utilization information from the 
> node and the containers to estimate how much is actually available in the NMs.
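As a rough, hedged illustration of the idea (not the attached patch; just one possible policy among several), the scheduler could derive headroom from reported utilization instead of allocation alone:

{code}
// Illustrative only. One conservative policy: treat the larger of "allocated"
// and "actually utilized" as used, then subtract from the configured capacity.
public final class LoadAwareHeadroomSketch {
  static long effectiveAvailableMB(long configuredCapacityMB,
      long allocatedMB, long utilizedMB) {
    long usedMB = Math.max(allocatedMB, utilizedMB);
    return Math.max(0, configuredCapacityMB - usedMB);
  }

  public static void main(String[] args) {
    // Node with 64 GB configured, 40 GB allocated, but only 20 GB really used:
    // utilization alone would suggest 44 GB free; the conservative policy
    // above still reports 24 GB.
    System.out.println(effectiveAvailableMB(65536, 40960, 20480)); // 24576
  }
}
{code}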



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321050#comment-15321050
 ] 

Hadoop QA commented on YARN-5113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
1s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 1m 51s {color} | 
{color:red} hadoop-yarn-project_hadoop-yarn generated 1 new + 2 unchanged - 1 
fixed = 3 total (was 3) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 17 
new + 388 unchanged - 43 fixed = 405 total (was 431) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
17s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 12s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 4 new + 159 unchanged - 4 fixed = 163 total (was 163) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 7 new + 274 unchanged - 7 fixed = 281 total (was 281) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 22s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 50s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 19s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 

[jira] [Updated] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode

2016-06-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-2962:
---
Attachment: YARN-2962.05.patch

> ZKRMStateStore: Limit the number of znodes under a znode
> 
>
> Key: YARN-2962
> URL: https://issues.apache.org/jira/browse/YARN-2962
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.6.0
>Reporter: Karthik Kambatla
>Assignee: Varun Saxena
>Priority: Critical
> Attachments: YARN-2962.01.patch, YARN-2962.04.patch, 
> YARN-2962.05.patch, YARN-2962.2.patch, YARN-2962.3.patch
>
>
> We ran into this issue where we were hitting the default ZK server message 
> size configs, primarily because the message had too many znodes, even though 
> individually they were all small.
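Purely as an illustration of limiting the fan-out under a single parent znode (this is not necessarily the layout the attached patches implement, and the split point is a made-up parameter), application znodes could be bucketed by splitting the application id:

{code}
// Illustrative bucketing of app znodes so no single parent has a huge child list.
public final class ZnodeBucketingSketch {
  static String bucketedAppPath(String rootPath, String appId, int splitDigits) {
    int splitPoint = appId.length() - splitDigits;
    return rootPath + "/" + appId.substring(0, splitPoint)
        + "/" + appId.substring(splitPoint);
  }

  public static void main(String[] args) {
    // application_1465246237936_0003 -> <root>/application_1465246237936_00/03
    System.out.println(
        bucketedAppPath("/rmstore/ZKRMStateRoot/RMAppRoot",
            "application_1465246237936_0003", 2));
  }
}
{code}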



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5216) Expose configurable preemption policy for OPPORTUNISTIC containers running on the NM

2016-06-08 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-5216:
-

 Summary: Expose configurable preemption policy for OPPORTUNISTIC 
containers running on the NM
 Key: YARN-5216
 URL: https://issues.apache.org/jira/browse/YARN-5216
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh


Currently, the default action taken by the QueuingContainerManager, introduced 
in YARN-2883, when a GUARANTEED container is scheduled on an NM with 
OPPORTUNISTIC containers using up resources, is to KILL the running 
OPPORTUNISTIC containers.

This JIRA proposes to expose a configurable hook to allow the NM to take a 
different action.
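A hedged sketch of what such a hook could look like; the interface and action names below are hypothetical and are not part of the existing NM code.

{code}
// Hypothetical pluggable policy; not an existing YARN interface.
import java.util.List;

interface OpportunisticContainerActionPolicy {
  enum Action { KILL, PAUSE, QUEUE }

  /**
   * Decide what to do with running OPPORTUNISTIC containers when a GUARANTEED
   * container is scheduled on a node whose resources are already in use.
   */
  Action onGuaranteedContainerArrival(String guaranteedContainerId,
      List<String> runningOpportunisticContainerIds);
}

/** Default behavior described above: kill the running OPPORTUNISTIC containers. */
final class KillOpportunisticPolicy implements OpportunisticContainerActionPolicy {
  @Override
  public Action onGuaranteedContainerArrival(String guaranteedContainerId,
      List<String> runningOpportunisticContainerIds) {
    return Action.KILL;
  }
}
{code}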



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5215) Scheduling containers based on load in the servers

2016-06-08 Thread Inigo Goiri (JIRA)
Inigo Goiri created YARN-5215:
-

 Summary: Scheduling containers based on load in the servers
 Key: YARN-5215
 URL: https://issues.apache.org/jira/browse/YARN-5215
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Inigo Goiri


Currently YARN runs containers on the servers assuming that it owns all of 
their resources. The proposal is to use the utilization information from the 
node and the containers to estimate how much is actually available in the NMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4757) [Umbrella] Simplified discovery of services via DNS mechanisms

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15321004#comment-15321004
 ] 

Hadoop QA commented on YARN-4757:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 25s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
20s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 52s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
26s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 10s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} YARN-4757 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} YARN-4757 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 43s 
{color} | {color:green} YARN-4757 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 14s {color} 
| {color:red} root generated 1 new + 698 unchanged - 0 fixed = 699 total (was 
698) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
37s {color} | {color:green} root: The patch generated 0 new + 1 unchanged - 46 
fixed = 1 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch 12 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s 
{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 56s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 50s 
{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 2s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 

[jira] [Updated] (YARN-1942) Many of ConverterUtils methods need to have public interfaces

2016-06-08 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-1942:
-
Attachment: YARN-1942.13.patch

[~jianhe], the failed unit tests are related :(. Attached a new patch.

> Many of ConverterUtils methods need to have public interfaces
> -
>
> Key: YARN-1942
> URL: https://issues.apache.org/jira/browse/YARN-1942
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Affects Versions: 2.4.0
>Reporter: Thomas Graves
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-1942-branch-2.0012.patch, YARN-1942.1.patch, 
> YARN-1942.10.patch, YARN-1942.11.patch, YARN-1942.12.patch, 
> YARN-1942.13.patch, YARN-1942.2.patch, YARN-1942.3.patch, YARN-1942.4.patch, 
> YARN-1942.5.patch, YARN-1942.6.patch, YARN-1942.8.patch, YARN-1942.9.patch
>
>
> ConverterUtils has a bunch of functions that are useful to application 
> masters. It should either be made public, or we should make some of the 
> utilities in it public, or provide other external APIs for application 
> masters to use. Note that distributed shell and MR are both using these 
> interfaces. 
> For instance, the main use case I see right now is getting the application 
> attempt id within the AppMaster:
> String containerIdStr =
>     System.getenv(Environment.CONTAINER_ID.name());
> ContainerId containerId = ConverterUtils.toContainerId(containerIdStr);
> ApplicationAttemptId applicationAttemptId =
>     containerId.getApplicationAttemptId();
> I don't see any other way for the application master to get this information. 
> If there is, please let me know.
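For comparison, a hedged sketch of the same AM-side lookup using a public entry point on the record class itself, which is the direction this JIRA heads in; the exact method name should be verified against the committed patch.

{code}
// Hedged sketch: same lookup without the private ConverterUtils, assuming a
// public ContainerId.fromString becomes available as part of this work.
import org.apache.hadoop.yarn.api.ApplicationConstants.Environment;
import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
import org.apache.hadoop.yarn.api.records.ContainerId;

public final class AmAttemptIdLookup {
  public static void main(String[] args) {
    String containerIdStr = System.getenv(Environment.CONTAINER_ID.name());
    ContainerId containerId = ContainerId.fromString(containerIdStr);
    ApplicationAttemptId applicationAttemptId =
        containerId.getApplicationAttemptId();
    System.out.println("Application attempt: " + applicationAttemptId);
  }
}
{code}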



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5080) Cannot obtain logs using YARN CLI -am for either KILLED or RUNNING AM

2016-06-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320966#comment-15320966
 ] 

Hudson commented on YARN-5080:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #9930 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9930/])
YARN-5080. Addendum fix to the original patch to fix YARN logs CLI. (vinodkv: 
rev 5a43583c0bbb9650ea6a9f48d9544ec3ec24b580)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java


> Cannot obtain logs using YARN CLI -am for either KILLED or RUNNING AM
> -
>
> Key: YARN-5080
> URL: https://issues.apache.org/jira/browse/YARN-5080
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.8.0
>Reporter: Sumana Sathish
>Assignee: Xuan Gong
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: YARN-5080.1.patch, YARN-5080.2.patch, YARN-5080.3.patch
>
>
> When the application is running, if we try to obtain AM logs using 
> {code}
> yarn logs -applicationId  -am 1
> {code}
> it throws the following error:
> {code}
> Unable to get AM container informations for the application:
> Illegal character in scheme name at index 0: 0.0.0.0://
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5080) Cannot obtain logs using YARN CLI -am for either KILLED or RUNNING AM

2016-06-08 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320912#comment-15320912
 ] 

Vinod Kumar Vavilapalli commented on YARN-5080:
---

Oh, and the unit-test failures reported are unrelated and have existing 
tickets.

> Cannot obtain logs using YARN CLI -am for either KILLED or RUNNING AM
> -
>
> Key: YARN-5080
> URL: https://issues.apache.org/jira/browse/YARN-5080
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.8.0
>Reporter: Sumana Sathish
>Assignee: Xuan Gong
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: YARN-5080.1.patch, YARN-5080.2.patch, YARN-5080.3.patch
>
>
> When the application is running, if we try to obtain AM logs using 
> {code}
> yarn logs -applicationId  -am 1
> {code}
> it throws the following error:
> {code}
> Unable to get AM container informations for the application:
> Illegal character in scheme name at index 0: 0.0.0.0://
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5080) Cannot obtain logs using YARN CLI -am for either KILLED or RUNNING AM

2016-06-08 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320911#comment-15320911
 ] 

Vinod Kumar Vavilapalli commented on YARN-5080:
---

Okay, the latest patch looks good to me. Tx for the note on manual testing; I 
can see that it's hard to unit-test this.

+1, checking the addendum in.

> Cannot obtain logs using YARN CLI -am for either KILLED or RUNNING AM
> -
>
> Key: YARN-5080
> URL: https://issues.apache.org/jira/browse/YARN-5080
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.8.0
>Reporter: Sumana Sathish
>Assignee: Xuan Gong
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: YARN-5080.1.patch, YARN-5080.2.patch, YARN-5080.3.patch
>
>
> When the application is running, if we try to obtain AM logs using 
> {code}
> yarn logs -applicationId  -am 1
> {code}
> It throws the following error
> {code}
> Unable to get AM container informations for the application:
> Illegal character in scheme name at index 0: 0.0.0.0://
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5212) Run existing ContainerManager tests using QueuingContainerManagerImpl

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320910#comment-15320910
 ] 

Hadoop QA commented on YARN-5212:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 14s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 46s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808961/YARN-5212.002.patch |
| JIRA Issue | YARN-5212 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f1999289e5b6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3344ba7 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/11914/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11914/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Run existing ContainerManager tests using QueuingContainerManagerImpl
> -
>
> Key: YARN-5212
> URL: https://issues.apache.org/jira/browse/YARN-5212
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5212.001.patch, YARN-5212.002.patch
>
>
> The existing {{TestContainerManager}} test class will be modified to be able 
> to use both the {{ContainerManagerImpl}} and the 
> {{QueuingContainerManagerImpl}} during the tests. This way we will make sure 
> that no regression was introduced in the existing cases by the 
> {{QueuingContainerManagerImpl}}.

[jira] [Created] (YARN-5214) Pending on synchronized method DirectoryCollection#checkDirs can hang NM's NodeStatusUpdater

2016-06-08 Thread Junping Du (JIRA)
Junping Du created YARN-5214:


 Summary: Pending on synchronized method 
DirectoryCollection#checkDirs can hang NM's NodeStatusUpdater
 Key: YARN-5214
 URL: https://issues.apache.org/jira/browse/YARN-5214
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Junping Du
Assignee: Junping Du
Priority: Critical


In one cluster, we noticed that the NM's heartbeat to the RM suddenly stopped; 
after a while the node was marked LOST by the RM. From the logs, the NM daemon 
was still running, but a jstack dump shows that the NM's NodeStatusUpdater 
thread is blocked:
1. The Node Status Updater thread is blocked waiting on lock 0x8065eae8:
{noformat}
"Node Status Updater" #191 prio=5 os_prio=0 tid=0x7f0354194000 nid=0x26fa 
waiting for monitor entry [0x7f035945a000]
   java.lang.Thread.State: BLOCKED (on object monitor)
at 
org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.getFailedDirs(DirectoryCollection.java:170)
- waiting to lock <0x8065eae8> (a 
org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection)
at 
org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getDisksHealthReport(LocalDirsHandlerService.java:287)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeHealthCheckerService.getHealthReport(NodeHealthCheckerService.java:58)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.getNodeStatus(NodeStatusUpdaterImpl.java:389)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.access$300(NodeStatusUpdaterImpl.java:83)
at 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl$1.run(NodeStatusUpdaterImpl.java:643)
at java.lang.Thread.run(Thread.java:745)
{noformat}

2. The actual holder of this lock is DiskHealthMonitor:
{noformat}
"DiskHealthMonitor-Timer" #132 daemon prio=5 os_prio=0 tid=0x7f0397393000 
nid=0x26bd runnable [0x7f035e511000]
   java.lang.Thread.State: RUNNABLE
at java.io.UnixFileSystem.createDirectory(Native Method)
at java.io.File.mkdir(File.java:1316)
at 
org.apache.hadoop.util.DiskChecker.mkdirsWithExistsCheck(DiskChecker.java:67)
at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:104)
at 
org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.verifyDirUsingMkdir(DirectoryCollection.java:340)
at 
org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.testDirs(DirectoryCollection.java:312)
at 
org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection.checkDirs(DirectoryCollection.java:231)
- locked <0x8065eae8> (a 
org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection)
at 
org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.checkDirs(LocalDirsHandlerService.java:389)
at 
org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.access$400(LocalDirsHandlerService.java:50)
at 
org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService$MonitoringTimerTask.run(LocalDirsHandlerService.java:122)
at java.util.TimerThread.mainLoop(Timer.java:555)
at java.util.TimerThread.run(Timer.java:505)
{noformat}

This disk operation can take much longer than expected, especially under high 
IO throughput, so we should use finer-grained locking for the related 
operations here. 
The same issue was raised and fixed on the HDFS side in HDFS-7489, and we 
probably need a similar fix here.
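
Below is a rough sketch of the finer-grained locking direction described 
above; all names are illustrative (this is not the actual 
{{DirectoryCollection}} code). The idea is that the slow disk probes run 
without holding the shared lock, and only a short critical section publishes 
the results, so readers such as {{getFailedDirs()}} on the heartbeat path 
never wait behind mkdir/IO calls:
{code}
// Hypothetical, simplified sketch -- not the real DirectoryCollection.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class DirHealthTracker {
  private final Object lock = new Object();
  private final List<String> allDirs;
  private List<String> failedDirs = Collections.emptyList();

  public DirHealthTracker(List<String> allDirs) {
    this.allDirs = new ArrayList<>(allDirs);
  }

  /** Fast path used by the heartbeat thread; only takes the short lock. */
  public List<String> getFailedDirs() {
    synchronized (lock) {
      return new ArrayList<>(failedDirs);
    }
  }

  /** Slow path run by the disk-health timer; does the IO outside the lock. */
  public void checkDirs() {
    List<String> newlyFailed = new ArrayList<>();
    for (String dir : allDirs) {
      if (!probeDir(dir)) {      // potentially slow mkdir/read/write checks
        newlyFailed.add(dir);
      }
    }
    synchronized (lock) {        // brief update of the shared state only
      failedDirs = newlyFailed;
    }
  }

  private boolean probeDir(String dir) {
    return new java.io.File(dir).canWrite();  // stand-in for DiskChecker logic
  }
}
{code}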



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5208) Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens TestAMAuthorization tests with hadoop.security.token.service.use_ip enabled

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320892#comment-15320892
 ] 

Hadoop QA commented on YARN-5208:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 4s {color} 
| {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808967/0005-YARN-5208.patch |
| JIRA Issue | YARN-5208 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11916/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens 
> TestAMAuthorization tests with hadoop.security.token.service.use_ip enabled
> -
>
> Key: YARN-5208
> URL: https://issues.apache.org/jira/browse/YARN-5208
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
>  Labels: test
> Attachments: 0001-YARN-5208.patch, 0002-YARN-5208.patch, 
> 0003-YARN-5208.patch, 0004-YARN-5208.patch, 0005-YARN-5208.patch, 
> 0005-YARN-5208.patch
>
>
> All YARN test cases currently run with *hadoop.security.token.service.use_ip* 
> disabled. As a result, a few test classes ({{TestAMRMClient}}, {{TestNMClient}}, 
> {{TestYarnClient}}, {{TestClientRMTokens}}, {{TestAMRMTokens}}) are consistently 
> failing because the hostname cannot be resolved (see HADOOP-12687, YARN-4306, 
> YARN-4318).
> I would suggest running the tests with *hadoop.security.token.service.use_ip* 
> enabled by default. For the HA test cases that require it to be disabled, 
> change the test cases as needed by setting:
> {code}
> conf.setBoolean(
>     CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP, false);
> SecurityUtil.setConfiguration(conf);
> {code}
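
As a rough illustration of that suggestion (assumed JUnit 4 scaffolding, not 
part of any attached patch), a test that must keep the flag disabled could set 
it in a class-level setup method and restore the default afterwards, so that 
suites relying on the enabled default are not affected by the shared static 
state in {{SecurityUtil}}:
{code}
// Hypothetical test skeleton; only the conf/SecurityUtil calls come from the
// description above, the class and method names are illustrative.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.security.SecurityUtil;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class TestWithHostnameTokenService {

  @BeforeClass
  public static void disableUseIp() {
    Configuration conf = new Configuration();
    conf.setBoolean(
        CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP, false);
    SecurityUtil.setConfiguration(conf);
  }

  @AfterClass
  public static void restoreDefault() {
    // Re-apply a default Configuration so later suites see the default value.
    SecurityUtil.setConfiguration(new Configuration());
  }
}
{code}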



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-06-08 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5113:
-
Attachment: YARN-5113.004.patch

New version of the patch.

> Refactoring and other clean-up for distributed scheduling
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch, YARN-5113.002.patch, 
> YARN-5113.003.patch, YARN-5113.004.patch
>
>
> This JIRA focuses on the refactoring of classes related to Distributed 
> Scheduling



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5208) Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens TestAMAuthorization tests with hadoop.security.token.service.use_ip enabled

2016-06-08 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5208:

Attachment: 0005-YARN-5208.patch

> Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens 
> TestAMAuthorization tests with hadoop.security.token.service.use_ip enabled
> -
>
> Key: YARN-5208
> URL: https://issues.apache.org/jira/browse/YARN-5208
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
>  Labels: test
> Attachments: 0001-YARN-5208.patch, 0002-YARN-5208.patch, 
> 0003-YARN-5208.patch, 0004-YARN-5208.patch, 0005-YARN-5208.patch, 
> 0005-YARN-5208.patch
>
>
> All YARN test cases currently run with *hadoop.security.token.service.use_ip* 
> disabled. As a result, a few test classes ({{TestAMRMClient}}, {{TestNMClient}}, 
> {{TestYarnClient}}, {{TestClientRMTokens}}, {{TestAMRMTokens}}) are consistently 
> failing because the hostname cannot be resolved (see HADOOP-12687, YARN-4306, 
> YARN-4318).
> I would suggest running the tests with *hadoop.security.token.service.use_ip* 
> enabled by default. For the HA test cases that require it to be disabled, 
> change the test cases as needed by setting:
> {code}
> conf.setBoolean(
>     CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP, false);
> SecurityUtil.setConfiguration(conf);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5208) Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens TestAMAuthorization tests with hadoop.security.token.service.use_ip enabled

2016-06-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15320865#comment-15320865
 ] 

Hadoop QA commented on YARN-5208:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 5s {color} 
| {color:red} Docker failed to build yetus/hadoop:2c91fd8. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12808963/0005-YARN-5208.patch |
| JIRA Issue | YARN-5208 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11915/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens 
> TestAMAuthorization tests with hadoop.security.token.service.use_ip enabled
> -
>
> Key: YARN-5208
> URL: https://issues.apache.org/jira/browse/YARN-5208
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
>  Labels: test
> Attachments: 0001-YARN-5208.patch, 0002-YARN-5208.patch, 
> 0003-YARN-5208.patch, 0004-YARN-5208.patch, 0005-YARN-5208.patch
>
>
> All YARN test cases currently run with *hadoop.security.token.service.use_ip* 
> disabled. As a result, a few test classes ({{TestAMRMClient}}, {{TestNMClient}}, 
> {{TestYarnClient}}, {{TestClientRMTokens}}, {{TestAMRMTokens}}) are consistently 
> failing because the hostname cannot be resolved (see HADOOP-12687, YARN-4306, 
> YARN-4318).
> I would suggest running the tests with *hadoop.security.token.service.use_ip* 
> enabled by default. For the HA test cases that require it to be disabled, 
> change the test cases as needed by setting:
> {code}
> conf.setBoolean(
>     CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP, false);
> SecurityUtil.setConfiguration(conf);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5208) Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens TestAMAuthorization tests with hadoop.security.token.service.use_ip enabled

2016-06-08 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5208:

Attachment: 0005-YARN-5208.patch

Updated the patch fixing review comments.

@Reviewers, kindly review patch {{0005-YARN-5208.patch}}. Patches 0001-0003 are 
test patches used to run Jenkins with different combinations.

> Run TestAMRMClient TestNMClient TestYarnClient TestClientRMTokens 
> TestAMAuthorization tests with hadoop.security.token.service.use_ip enabled
> -
>
> Key: YARN-5208
> URL: https://issues.apache.org/jira/browse/YARN-5208
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: test
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Blocker
>  Labels: test
> Attachments: 0001-YARN-5208.patch, 0002-YARN-5208.patch, 
> 0003-YARN-5208.patch, 0004-YARN-5208.patch, 0005-YARN-5208.patch
>
>
> All YARN test cases currently run with *hadoop.security.token.service.use_ip* 
> disabled. As a result, a few test classes ({{TestAMRMClient}}, {{TestNMClient}}, 
> {{TestYarnClient}}, {{TestClientRMTokens}}, {{TestAMRMTokens}}) are consistently 
> failing because the hostname cannot be resolved (see HADOOP-12687, YARN-4306, 
> YARN-4318).
> I would suggest running the tests with *hadoop.security.token.service.use_ip* 
> enabled by default. For the HA test cases that require it to be disabled, 
> change the test cases as needed by setting:
> {code}
> conf.setBoolean(
>     CommonConfigurationKeys.HADOOP_SECURITY_TOKEN_SERVICE_USE_IP, false);
> SecurityUtil.setConfiguration(conf);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5212) Run existing ContainerManager tests using QueuingContainerManagerImpl

2016-06-08 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5212:
-
Attachment: YARN-5212.002.patch

Attaching new version.

> Run existing ContainerManager tests using QueuingContainerManagerImpl
> -
>
> Key: YARN-5212
> URL: https://issues.apache.org/jira/browse/YARN-5212
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5212.001.patch, YARN-5212.002.patch
>
>
> The existing {{TestContainerManager}} test class will be modified to be able 
> to use both the {{ContainerManagerImpl}} and the 
> {{QueuingContainerManagerImpl}} during the tests. This way we will make sure 
> that no regression was introduced in the existing cases by the 
> {{QueuingContainerManagerImpl}}.
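
A minimal sketch of one way such reuse could look (hypothetical, simplified 
types rather than the real YARN classes): the base test builds its container 
manager through an overridable factory method, and a subclass returns the 
queuing variant, so every inherited test also runs against it.
{code}
// Hypothetical, simplified stand-ins -- not the actual ContainerManager API.
import static org.junit.Assert.assertNotNull;
import org.junit.Test;

interface ContainerManagerLike {
  String startContainer(String id);
}

class PlainManager implements ContainerManagerLike {
  public String startContainer(String id) { return "started " + id; }
}

class QueuingManager implements ContainerManagerLike {
  public String startContainer(String id) { return "queued " + id; }
}

public class ContainerManagerTestBase {
  // Subclasses override this to swap in a different implementation.
  protected ContainerManagerLike createManager() {
    return new PlainManager();
  }

  @Test
  public void testStartContainer() {
    // The same test body runs against whichever implementation is created.
    assertNotNull(createManager().startContainer("container_0001"));
  }
}

class QueuingContainerManagerTest extends ContainerManagerTestBase {
  @Override
  protected ContainerManagerLike createManager() {
    return new QueuingManager();  // inherits and re-runs every base test
  }
}
{code}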



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


