[jira] [Updated] (YARN-5548) Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart

2016-11-18 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5548:
---
Attachment: YARN-5548.0004.patch

Tried the currently failing test with a cycle of 100 runs and it looks to be working fine.
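
A minimal sketch of that kind of stability check (assumed, not the attached
patch; the class and helper names are illustrative):

{code}
import org.junit.Test;

public class TestRMRestartStability {
  // Re-run the previously flaky scenario many times; 100 green cycles
  // suggest the race is gone.
  @Test(timeout = 600000)
  public void testFinishedAppRemovalRepeatedly() throws Exception {
    for (int i = 0; i < 100; i++) {
      // the body of testFinishedAppRemovalAfterRMRestart would run here
      runFinishedAppRemovalScenario();
    }
  }

  private void runFinishedAppRemovalScenario() throws Exception {
    // extracted test body (omitted here)
  }
}
{code}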

> Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart
> --
>
> Key: YARN-5548
> URL: https://issues.apache.org/jira/browse/YARN-5548
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy, test
> Attachments: YARN-5548.0001.patch, YARN-5548.0002.patch, 
> YARN-5548.0003.patch, YARN-5548.0004.patch
>
>
> https://builds.apache.org/job/PreCommit-YARN-Build/12850/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMRestart/testFinishedAppRemovalAfterRMRestart/
> {noformat}
> Error Message
> Stacktrace
> java.lang.AssertionError: expected null, but was:<application_submission_context { application_id { id: 1 cluster_timestamp: 
> 1471885197388 } application_name: "" queue: "default" priority { priority: 0 
> } am_container_spec { } cancel_tokens_when_complete: true maxAppAttempts: 2 
> resource { memory: 1024 virtual_cores: 1 } applicationType: "YARN" 
> keep_containers_across_application_attempts: false 
> attempt_failures_validity_interval: 0 am_container_resource_request { 
> priority { priority: 0 } resource_name: "*" capability { memory: 1024 
> virtual_cores: 1 } num_containers: 0 relax_locality: true 
> node_label_expression: "" execution_type_request { execution_type: GUARANTEED 
> enforce_execution_type: false } } } user: "jenkins" start_time: 1471885197417 
> application_state: RMAPP_FINISHED finish_time: 1471885197478>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testFinishedAppRemovalAfterRMRestart(TestRMRestart.java:1656)
> {noformat}






[jira] [Commented] (YARN-5877) Allow all nm-whitelist-env to get overridden during launch

2016-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678620#comment-15678620
 ] 

Hadoop QA commented on YARN-5877:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
20s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5877 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839678/YARN-5877.0002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c0ef740a5969 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7584fbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13978/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13978/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Allow all nm-whitelist-env to get overridden during launch
> --
>
> Key: YARN-5877
> URL: https://issues.apache.org/jira/browse/YARN-5877
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: Dockerfile, YARN-5877.0001.patch, YARN-5877.0002.patch, 
> bootstrap.sh, yarn-site.xml
>
>
> As per the 

[jira] [Updated] (YARN-5877) Allow all nm-whitelist-env to get overridden during launch

2016-11-18 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5877:
---
Attachment: YARN-5877.0002.patch

Attaching patch after updating test cases.

> Allow all nm-whitelist-env to get overridden during launch
> --
>
> Key: YARN-5877
> URL: https://issues.apache.org/jira/browse/YARN-5877
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: Dockerfile, YARN-5877.0001.patch, YARN-5877.0002.patch, 
> bootstrap.sh, yarn-site.xml
>
>
> As per {{yarn.nodemanager.env-whitelist}}, containers should be able to 
> override the configured values rather than use the NodeManager's default.
> {code}
> <property>
>   <description>Environment variables that containers may override rather
>   than use NodeManager's default.</description>
>   <name>yarn.nodemanager.env-whitelist</name>
>   <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME</value>
> </property>
> {code}
> But containers can only override the following:
> {code}
> whitelist.add(ApplicationConstants.Environment.HADOOP_YARN_HOME.name());
> whitelist.add(ApplicationConstants.Environment.HADOOP_COMMON_HOME.name());
> whitelist.add(ApplicationConstants.Environment.HADOOP_HDFS_HOME.name());
> whitelist.add(ApplicationConstants.Environment.HADOOP_CONF_DIR.name());
> whitelist.add(ApplicationConstants.Environment.JAVA_HOME.name());
> {code}
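
A minimal sketch of the idea (assumed, not necessarily the actual YARN-5877
patch; {{conf}} and {{whitelist}} are the surrounding NM launch-context
objects):

{code}
// Build the override whitelist from yarn.nodemanager.env-whitelist so every
// configured variable, not just the five hard-coded ones, can be overridden.
String whitelistProp = conf.get(YarnConfiguration.NM_ENV_WHITELIST,
    YarnConfiguration.DEFAULT_NM_ENV_WHITELIST);
for (String envName : whitelistProp.split(",")) {
  whitelist.add(envName.trim());
}
{code}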






[jira] [Commented] (YARN-5761) Separate QueueManager from Scheduler

2016-11-18 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678549#comment-15678549
 ] 

Sunil G commented on YARN-5761:
---

Thanks [~xgong]. Sorry for jumping in late.

A few comments:

1. The {{getAndCheckLeafQueue}} exception seems related only to the Move Queue 
operation. It is a public API, so we could give a more general exception there.
2. Could the YarnAuthorizationProvider instance in CapacitySchedulerQueueManager 
also be final? Are we supposed to change it at runtime?

> Separate QueueManager from Scheduler
> 
>
> Key: YARN-5761
> URL: https://issues.apache.org/jira/browse/YARN-5761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>  Labels: oct16-medium
> Attachments: YARN-5761.1.patch, YARN-5761.1.rebase.patch, 
> YARN-5761.2.patch, YARN-5761.3.patch, YARN-5761.4.patch, YARN-5761.5.patch, 
> YARN-5761.6.patch
>
>
> Currently, in scheduler code, we are doing queue manager and scheduling work. 
> We'd better separate the queue manager out of scheduler logic. In that case, 
> it would be much easier and safer to extend.






[jira] [Commented] (YARN-5915) ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every event write

2016-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678534#comment-15678534
 ] 

Hadoop QA commented on YARN-5915:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
16s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5915 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839675/YARN-5915.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 59dc9cc3a3da 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7584fbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13977/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13977/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every 
> event write
> 
>
> Key: YARN-5915
> URL: https://issues.apache.org/jira/browse/YARN-5915
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Atul Sikaria
> Attachments: 

[jira] [Commented] (YARN-5915) ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every event write

2016-11-18 Thread Atul Sikaria (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678494#comment-15678494
 ] 

Atul Sikaria commented on YARN-5915:


Attached patch that should address this issue.

> ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every 
> event write
> 
>
> Key: YARN-5915
> URL: https://issues.apache.org/jira/browse/YARN-5915
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Atul Sikaria
> Attachments: YARN-5915.01.patch
>
>







[jira] [Updated] (YARN-5915) ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every event write

2016-11-18 Thread Atul Sikaria (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Atul Sikaria updated YARN-5915:
---
Attachment: YARN-5915.01.patch

> ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every 
> event write
> 
>
> Key: YARN-5915
> URL: https://issues.apache.org/jira/browse/YARN-5915
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Atul Sikaria
> Attachments: YARN-5915.01.patch
>
>







[jira] [Commented] (YARN-5899) A small fix for displaying debug info inside function canAssignToThisQueue()

2016-11-18 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678485#comment-15678485
 ] 

Sunil G commented on YARN-5899:
---

Could you please attach the patch here?

> A small fix for displaying debug info inside function canAssignToThisQueue()
> 
>
> Key: YARN-5899
> URL: https://issues.apache.org/jira/browse/YARN-5899
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 3.0.0-alpha1
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Trivial
>
> A small fix inside function canAssignToThisQueue() for displaying DEBUG info. 
> Please see patch attached.






[jira] [Comment Edited] (YARN-5915) ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every event write

2016-11-18 Thread Atul Sikaria (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678475#comment-15678475
 ] 

Atul Sikaria edited comment on YARN-5915 at 11/19/16 3:13 AM:
--

This was seen previously as well, in YARN-4814. 

The issue is with the writeEntities method in FileSystemTimelineWriter 
(https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java#L317).
It calls getObjectMapper().writeValue(…), which does a flush() after every 
write with the default config.

{noformat}
@Override
public void writeValue(JsonGenerator jgen, Object value)
    throws IOException, JsonGenerationException, JsonMappingException
{
    SerializationConfig config = copySerializationConfig();
    if (config.isEnabled(SerializationConfig.Feature.CLOSE_CLOSEABLE)
        && (value instanceof Closeable)) {
        _writeCloseableValue(jgen, value, config);
    } else {
        _serializerProvider.serializeValue(config, jgen, value,
            _serializerFactory);
        if (config.isEnabled(
            SerializationConfig.Feature.FLUSH_AFTER_WRITE_VALUE)) {
            jgen.flush();
        }
    }
}
{noformat}

On filesystems that map flush() to a no-op or a trivial operation, this is not 
a big deal. But on filesystems where flush() incurs a larger cost, it becomes a 
bottleneck for the timeline event flow.

The fix is to set the property above (FLUSH_AFTER_WRITE_VALUE) to false, so the 
JsonGenerator does not flush after every JSON write.

The stream is instead flushed by a timer thread at a configurable interval (10 
seconds by default). As [~jlowe] pointed out in YARN-4814, the timer thread 
also needs to call flush() on the JsonGenerator, to make sure the JSON 
serializer does not hold any buffered data - so the hflush() in the timer 
thread actually flushes all the data seen so far.
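
A minimal sketch of that change (assumed, not necessarily the exact YARN-5915
patch), disabling the Jackson 1.x auto-flush on the writer's mapper:

{code}
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.map.SerializationConfig;

// With FLUSH_AFTER_WRITE_VALUE off, writeValue() no longer flushes per
// event; the periodic timer thread decides when buffered JSON is flushed.
ObjectMapper mapper = new ObjectMapper();
mapper.configure(SerializationConfig.Feature.FLUSH_AFTER_WRITE_VALUE, false);
{code}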


was (Author: asikaria):
This was seen previously as well, in YARN-4814. 

The issue is with the writeEntities method in FileSystemTimelineWriter 
(https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java#L317).
It calls getObjectMapper().writeValue(…), which does a flush() after every 
write with the default config.

{noformat}
@Override
public void writeValue(JsonGenerator jgen, Object value)
    throws IOException, JsonGenerationException, JsonMappingException
{
    SerializationConfig config = copySerializationConfig();
    if (config.isEnabled(SerializationConfig.Feature.CLOSE_CLOSEABLE)
        && (value instanceof Closeable)) {
        _writeCloseableValue(jgen, value, config);
    } else {
        _serializerProvider.serializeValue(config, jgen, value,
            _serializerFactory);
        if (config.isEnabled(
            SerializationConfig.Feature.FLUSH_AFTER_WRITE_VALUE)) {
            jgen.flush();
        }
    }
}
{noformat}

On filesystems that map flush() to a no-op or a trivial operation, this is not 
a big deal. But on filesystems where flush() incurs a larger cost, it becomes a 
bottleneck for the timeline event flow.

> ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every 
> event write
> 
>
> Key: YARN-5915
> URL: https://issues.apache.org/jira/browse/YARN-5915
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Atul Sikaria
>







[jira] [Commented] (YARN-5915) ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every event write

2016-11-18 Thread Atul Sikaria (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678475#comment-15678475
 ] 

Atul Sikaria commented on YARN-5915:


This was seen previously as well, in YARN-4814. 

The issue is with the writeEntities method in FileSystemTimelineWriter 
(https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java#L317).
It calls getObjectMapper().writeValue(…), which does a flush() after every 
write with the default config.

{noformat}
@Override
public void writeValue(JsonGenerator jgen, Object value)
    throws IOException, JsonGenerationException, JsonMappingException
{
    SerializationConfig config = copySerializationConfig();
    if (config.isEnabled(SerializationConfig.Feature.CLOSE_CLOSEABLE)
        && (value instanceof Closeable)) {
        _writeCloseableValue(jgen, value, config);
    } else {
        _serializerProvider.serializeValue(config, jgen, value,
            _serializerFactory);
        if (config.isEnabled(
            SerializationConfig.Feature.FLUSH_AFTER_WRITE_VALUE)) {
            jgen.flush();
        }
    }
}
{noformat}

On filesystems that map flush() to a no-op or a trivial operation, this is not 
a big deal. But on filesystems where flush() incurs a larger cost, it becomes a 
bottleneck for the timeline event flow.

> ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every 
> event write
> 
>
> Key: YARN-5915
> URL: https://issues.apache.org/jira/browse/YARN-5915
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Atul Sikaria
>







[jira] [Created] (YARN-5915) ATS 1.5 FileSystemTimelineWriter causes flush() to be called after every event write

2016-11-18 Thread Atul Sikaria (JIRA)
Atul Sikaria created YARN-5915:
--

 Summary: ATS 1.5 FileSystemTimelineWriter causes flush() to be 
called after every event write
 Key: YARN-5915
 URL: https://issues.apache.org/jira/browse/YARN-5915
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineserver
Affects Versions: 3.0.0-alpha1
Reporter: Atul Sikaria









[jira] [Commented] (YARN-5761) Separate QueueManager from Scheduler

2016-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678425#comment-15678425
 ] 

Hadoop QA commented on YARN-5761:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 4 new + 877 unchanged - 17 fixed = 881 total (was 894) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 3 new + 935 unchanged - 0 fixed = 938 total (was 935) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 46m 55s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5761 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839663/YARN-5761.6.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 98cc9a13f15a 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7584fbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13976/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/13976/artifact/patchprocess/diff-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13976/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-5761) Separate QueueManager from Scheduler

2016-11-18 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678317#comment-15678317
 ] 

Xuan Gong commented on YARN-5761:
-

Thanks for the review, [~templedf].

Uploaded a new patch to address all your comments.

> Separate QueueManager from Scheduler
> 
>
> Key: YARN-5761
> URL: https://issues.apache.org/jira/browse/YARN-5761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>  Labels: oct16-medium
> Attachments: YARN-5761.1.patch, YARN-5761.1.rebase.patch, 
> YARN-5761.2.patch, YARN-5761.3.patch, YARN-5761.4.patch, YARN-5761.5.patch, 
> YARN-5761.6.patch
>
>
> Currently, in scheduler code, we are doing queue manager and scheduling work. 
> We'd better separate the queue manager out of scheduler logic. In that case, 
> it would be much easier and safer to extend.






[jira] [Updated] (YARN-5761) Separate QueueManager from Scheduler

2016-11-18 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5761:

Attachment: YARN-5761.6.patch

> Separate QueueManager from Scheduler
> 
>
> Key: YARN-5761
> URL: https://issues.apache.org/jira/browse/YARN-5761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>  Labels: oct16-medium
> Attachments: YARN-5761.1.patch, YARN-5761.1.rebase.patch, 
> YARN-5761.2.patch, YARN-5761.3.patch, YARN-5761.4.patch, YARN-5761.5.patch, 
> YARN-5761.6.patch
>
>
> Currently, in scheduler code, we are doing queue manager and scheduling work. 
> We'd better separate the queue manager out of scheduler logic. In that case, 
> it would be much easier and safer to extend.






[jira] [Commented] (YARN-5148) [YARN-3368] Add page to new YARN UI to view server side configurations/logs/JVM-metrics

2016-11-18 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678311#comment-15678311
 ] 

Kai Sasaki commented on YARN-5148:
--

[~sunilg] Thanks. Yes, as you said, we can see pretty-printed JSON in the 
Chrome console. But Ember's template engine (Handlebars?) does not render it as 
we expected; it just prints flat JSON (as in the left picture of the screenshot 
you attached). Even after replacing "\n" with "", it was in vain. I'm 
continuing to investigate a good way to render the JSON in a pretty format. 

> [YARN-3368] Add page to new YARN UI to view server side 
> configurations/logs/JVM-metrics
> ---
>
> Key: YARN-5148
> URL: https://issues.apache.org/jira/browse/YARN-5148
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp, yarn-ui-v2
>Reporter: Wangda Tan
>Assignee: Kai Sasaki
>  Labels: oct16-medium
> Attachments: Screen Shot 2016-09-11 at 23.28.31.png, Screen Shot 
> 2016-09-13 at 22.27.00.png, UsingStringifyPrint.png, 
> YARN-5148-YARN-3368.01.patch, YARN-5148-YARN-3368.02.patch, 
> YARN-5148-YARN-3368.03.patch, YARN-5148-YARN-3368.04.patch, 
> YARN-5148-YARN-3368.05.patch, YARN-5148-YARN-3368.06.patch, 
> YARN-5148.07.patch, yarn-conf.png, yarn-tools.png
>
>







[jira] [Commented] (YARN-5706) Fail to launch SLSRunner due to NPE

2016-11-18 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678299#comment-15678299
 ] 

Kai Sasaki commented on YARN-5706:
--

[~leftnoteasy] Thanks for the review! Have you already merged it, or should I 
rebase against trunk?

> Fail to launch SLSRunner due to NPE
> ---
>
> Key: YARN-5706
> URL: https://issues.apache.org/jira/browse/YARN-5706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: oct16-easy
> Attachments: YARN-5706.01.patch, YARN-5706.02.patch
>
>
> {code}
> java.lang.NullPointerException
>   at org.apache.hadoop.yarn.sls.web.SLSWebApp.<init>(SLSWebApp.java:88)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.initMetrics(SLSCapacityScheduler.java:459)
>   at 
> org.apache.hadoop.yarn.sls.scheduler.SLSCapacityScheduler.setConf(SLSCapacityScheduler.java:153)
>   at 
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> {code}
> CLASSPATH for html resource is not configured properly.
> {code}
> DEBUG: Injecting share/hadoop/tools/sls/html into CLASSPATH
> DEBUG: Rejected CLASSPATH: share/hadoop/tools/sls/html (does not exist)
> {code}
> This issue can be reproduced when doing according to the documentation 
> instruction.
> http://hadoop.apache.org/docs/current/hadoop-sls/SchedulerLoadSimulator.html
> {code}
> $ cd $HADOOP_ROOT/share/hadoop/tools/sls
> $ bin/slsrun.sh
>   --input-rumen=<TRACE_FILE1,TRACE_FILE2,...> | --input-sls=<SLS_FILE1,SLS_FILE2,...>
>   --output-dir=<SLS_SIMULATION_OUTPUT_DIRECTORY> [--nodes=<SLS_NODES_FILE>]
>   [--track-jobs=<JOBID1,JOBID2,...>] [--print-simulation]
> {code}






[jira] [Updated] (YARN-1964) Create Docker analog of the LinuxContainerExecutor in YARN

2016-11-18 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated YARN-1964:
---
Description: 
*This alpha feature has been deprecated in branch-2 and removed from trunk* 
Please see https://issues.apache.org/jira/browse/YARN-5388

Docker (https://www.docker.io/) is, increasingly, a very popular container 
technology.

In context of YARN, the support for Docker will provide a very elegant solution 
to allow applications to *package* their software into a Docker container 
(entire Linux file system incl. custom versions of perl, python etc.) and use 
it as a blueprint to launch all their YARN containers with requisite software 
environment. This provides both consistency (all YARN containers will have the 
same software environment) and isolation (no interference with whatever is 
installed on the physical machine).

  was:
Docker (https://www.docker.io/) is, increasingly, a very popular container 
technology.

In context of YARN, the support for Docker will provide a very elegant solution 
to allow applications to *package* their software into a Docker container 
(entire Linux file system incl. custom versions of perl, python etc.) and use 
it as a blueprint to launch all their YARN containers with requisite software 
environment. This provides both consistency (all YARN containers will have the 
same software environment) and isolation (no interference with whatever is 
installed on the physical machine).


> Create Docker analog of the LinuxContainerExecutor in YARN
> --
>
> Key: YARN-1964
> URL: https://issues.apache.org/jira/browse/YARN-1964
> Project: Hadoop YARN
>  Issue Type: New Feature
>Affects Versions: 2.2.0
>Reporter: Arun C Murthy
>Assignee: Abin Shahab
> Fix For: 2.6.0
>
> Attachments: YARN-1964.patch, YARN-1964.patch, YARN-1964.patch, 
> YARN-1964.patch, YARN-1964.patch, YARN-1964.patch, YARN-1964.patch, 
> YARN-1964.patch, YARN-1964.patch, YARN-1964.patch, YARN-1964.patch, 
> yarn-1964-branch-2.2.0-docker.patch, yarn-1964-branch-2.2.0-docker.patch, 
> yarn-1964-docker.patch, yarn-1964-docker.patch, yarn-1964-docker.patch, 
> yarn-1964-docker.patch, yarn-1964-docker.patch
>
>
> *This alpha feature has been deprecated in branch-2 and removed from trunk* 
> Please see https://issues.apache.org/jira/browse/YARN-5388
> Docker (https://www.docker.io/) is, increasingly, a very popular container 
> technology.
> In context of YARN, the support for Docker will provide a very elegant 
> solution to allow applications to *package* their software into a Docker 
> container (entire Linux file system incl. custom versions of perl, python 
> etc.) and use it as a blueprint to launch all their YARN containers with 
> requisite software environment. This provides both consistency (all YARN 
> containers will have the same software environment) and isolation (no 
> interference with whatever is installed on the physical machine).






[jira] [Updated] (YARN-2466) Umbrella issue for Yarn launched Docker Containers

2016-11-18 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated YARN-2466:
---
Description: 
Docker (https://www.docker.io/) is, increasingly, a very popular container 
technology.

In context of YARN, the support for Docker will provide a very elegant solution 
to allow applications to package their software into a Docker container (entire 
Linux file system incl. custom versions of perl, python etc.) and use it as a 
blueprint to launch all their YARN containers with requisite software 
environment. This provides both consistency (all YARN containers will have the 
same software environment) and isolation (no interference with whatever is 
installed on the physical machine).

In addition to software isolation mentioned above, Docker containers will 
provide resource, network, and user-namespace isolation. 

Docker provides resource isolation through cgroups, similar to 
LinuxContainerExecutor. This prevents one job from taking other jobs' 
resources (memory and CPU) on the same Hadoop cluster. 

User-namespace isolation will ensure that root in the container is mapped 
to an unprivileged user on the host. This is currently being added to Docker.

Network isolation will ensure that one user’s network traffic is completely 
isolated from another user’s network traffic. 

Last but not least, the interaction of Docker and Kerberos will have to be 
worked out. These Docker containers must work in a secure Hadoop environment.

Additional details are here: 
https://wiki.apache.org/hadoop/dineshs/IsolatingYarnAppsInDockerContainers

  was:
*This has been deprecated and removed.* Please see 
https://issues.apache.org/jira/browse/YARN-5388 .

Docker (https://www.docker.io/) is, increasingly, a very popular container 
technology.

In context of YARN, the support for Docker will provide a very elegant solution 
to allow applications to package their software into a Docker container (entire 
Linux file system incl. custom versions of perl, python etc.) and use it as a 
blueprint to launch all their YARN containers with requisite software 
environment. This provides both consistency (all YARN containers will have the 
same software environment) and isolation (no interference with whatever is 
installed on the physical machine).

In addition to software isolation mentioned above, Docker containers will 
provide resource, network, and user-namespace isolation. 

Docker provides resource isolation through cgroups, similar to 
LinuxContainerExecutor. This prevents one job from taking other jobs' 
resources (memory and CPU) on the same Hadoop cluster. 

User-namespace isolation will ensure that root in the container is mapped 
to an unprivileged user on the host. This is currently being added to Docker.

Network isolation will ensure that one user’s network traffic is completely 
isolated from another user’s network traffic. 

Last but not least, the interaction of Docker and Kerberos will have to be 
worked out. These Docker containers must work in a secure Hadoop environment.

Additional details are here: 
https://wiki.apache.org/hadoop/dineshs/IsolatingYarnAppsInDockerContainers


> Umbrella issue for Yarn launched Docker Containers
> --
>
> Key: YARN-2466
> URL: https://issues.apache.org/jira/browse/YARN-2466
> Project: Hadoop YARN
>  Issue Type: New Feature
>Affects Versions: 2.4.1
>Reporter: Abin Shahab
>Assignee: Abin Shahab
>
> Docker (https://www.docker.io/) is, increasingly, a very popular container 
> technology.
> In context of YARN, the support for Docker will provide a very elegant 
> solution to allow applications to package their software into a Docker 
> container (entire Linux file system incl. custom versions of perl, python 
> etc.) and use it as a blueprint to launch all their YARN containers with 
> requisite software environment. This provides both consistency (all YARN 
> containers will have the same software environment) and isolation (no 
> interference with whatever is installed on the physical machine).
> In addition to software isolation mentioned above, Docker containers will 
> provide resource, network, and user-namespace isolation. 
> Docker provides resource isolation through cgroups, similar to 
> LinuxContainerExecutor. This prevents one job from taking other jobs' 
> resources (memory and CPU) on the same Hadoop cluster. 
> User-namespace isolation will ensure that root in the container is mapped 
> to an unprivileged user on the host. This is currently being added to Docker.
> Network isolation will ensure that one user’s network traffic is completely 
> isolated from another user’s network traffic. 
> Last but not least, the interaction of Docker and Kerberos will have to 
> be worked out. These Docker containers must work in a secure 

[jira] [Updated] (YARN-2466) Umbrella issue for Yarn launched Docker Containers

2016-11-18 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated YARN-2466:
---
Description: 
*This has been deprecated and removed.* Please see 
https://issues.apache.org/jira/browse/YARN-5388 .

Docker (https://www.docker.io/) is, increasingly, a very popular container 
technology.

In context of YARN, the support for Docker will provide a very elegant solution 
to allow applications to package their software into a Docker container (entire 
Linux file system incl. custom versions of perl, python etc.) and use it as a 
blueprint to launch all their YARN containers with requisite software 
environment. This provides both consistency (all YARN containers will have the 
same software environment) and isolation (no interference with whatever is 
installed on the physical machine).

In addition to software isolation mentioned above, Docker containers will 
provide resource, network, and user-namespace isolation. 

Docker provides resource isolation through cgroups, similar to 
LinuxContainerExecutor. This prevents one job from taking other jobs' 
resources (memory and CPU) on the same Hadoop cluster. 

User-namespace isolation will ensure that root in the container is mapped 
to an unprivileged user on the host. This is currently being added to Docker.

Network isolation will ensure that one user’s network traffic is completely 
isolated from another user’s network traffic. 

Last but not least, the interaction of Docker and Kerberos will have to be 
worked out. These Docker containers must work in a secure Hadoop environment.

Additional details are here: 
https://wiki.apache.org/hadoop/dineshs/IsolatingYarnAppsInDockerContainers

  was:
Docker (https://www.docker.io/) is, increasingly, a very popular container 
technology.

In context of YARN, the support for Docker will provide a very elegant solution 
to allow applications to package their software into a Docker container (entire 
Linux file system incl. custom versions of perl, python etc.) and use it as a 
blueprint to launch all their YARN containers with requisite software 
environment. This provides both consistency (all YARN containers will have the 
same software environment) and isolation (no interference with whatever is 
installed on the physical machine).

In addition to software isolation mentioned above, Docker containers will 
provide resource, network, and user-namespace isolation. 

Docker provides resource isolation through cgroups, similar to 
LinuxContainerExecutor. This prevents one job from taking other jobs' 
resources (memory and CPU) on the same Hadoop cluster. 

User-namespace isolation will ensure that root in the container is mapped 
to an unprivileged user on the host. This is currently being added to Docker.

Network isolation will ensure that one user’s network traffic is completely 
isolated from another user’s network traffic. 

Last but not least, the interaction of Docker and Kerberos will have to be 
worked out. These Docker containers must work in a secure Hadoop environment.

Additional details are here: 
https://wiki.apache.org/hadoop/dineshs/IsolatingYarnAppsInDockerContainers


> Umbrella issue for Yarn launched Docker Containers
> --
>
> Key: YARN-2466
> URL: https://issues.apache.org/jira/browse/YARN-2466
> Project: Hadoop YARN
>  Issue Type: New Feature
>Affects Versions: 2.4.1
>Reporter: Abin Shahab
>Assignee: Abin Shahab
>
> *This has been deprecated and removed.* Please see 
> https://issues.apache.org/jira/browse/YARN-5388 .
> Docker (https://www.docker.io/) is, increasingly, a very popular container 
> technology.
> In context of YARN, the support for Docker will provide a very elegant 
> solution to allow applications to package their software into a Docker 
> container (entire Linux file system incl. custom versions of perl, python 
> etc.) and use it as a blueprint to launch all their YARN containers with 
> requisite software environment. This provides both consistency (all YARN 
> containers will have the same software environment) and isolation (no 
> interference with whatever is installed on the physical machine).
> In addition to software isolation mentioned above, Docker containers will 
> provide resource, network, and user-namespace isolation. 
> Docker provides resource isolation through cgroups, similar to 
> LinuxContainerExecutor. This prevents one job from taking other jobs' 
> resources (memory and CPU) on the same Hadoop cluster. 
> User-namespace isolation will ensure that root in the container is mapped 
> to an unprivileged user on the host. This is currently being added to Docker.
> Network isolation will ensure that one user’s network traffic is completely 
> isolated from another user’s network traffic. 
> Last but not least, the 

[jira] [Commented] (YARN-5792) adopt the id prefix for YARN, MR, and DS entities

2016-11-18 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678167#comment-15678167
 ] 

Sangjin Lee commented on YARN-5792:
---

The latest patch LGTM.

I'll wait until next Monday (if you don't mind) so others have a chance to 
chime in before I commit. Thanks!

> adopt the id prefix for YARN, MR, and DS entities
> -
>
> Key: YARN-5792
> URL: https://issues.apache.org/jira/browse/YARN-5792
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
> Attachments: YARN-5792-YARN-5355.01.patch, 
> YARN-5792-YARN-5355.02.patch, YARN-5792-YARN-5355.03.patch, 
> YARN-5792-YARN-5355.04.patch, YARN-5792-YARN-5355.05.patch, 
> YARN-5792-YARN-5355.06.patch
>
>
> We introduced the entity id prefix to support flexible entity sorting 
> (YARN-5715). We should adopt the id prefix for YARN entities, MR entities, 
> and DS entities to take advantage of the id prefix.






[jira] [Commented] (YARN-5911) DrainDispatcher does not drain all events on stop even if setDrainEventsOnStop is true

2016-11-18 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15678114#comment-15678114
 ] 

sandflee commented on YARN-5911:


Thanks [~varun_saxena], one minor comment:
if events are not drained, we wait at least 1s, since DrainDispatcher does not 
invoke waitForDrained.notify(). Could we use a shorter timeout?
{code}
while (!isDrained() && eventHandlingThread != null
    && eventHandlingThread.isAlive()
    && System.currentTimeMillis() < endTime) {
  waitForDrained.wait(1000);
  LOG.info("Waiting for AsyncDispatcher to drain. Thread state is :" +
      eventHandlingThread.getState());
}
{code}
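
A sketch of the suggestion (the 100ms value is illustrative): a shorter timed
wait bounds the extra stop latency when nothing calls waitForDrained.notify():

{code}
while (!isDrained() && eventHandlingThread != null
    && eventHandlingThread.isAlive()
    && System.currentTimeMillis() < endTime) {
  // poll more often so a drain that completes without a notify() is
  // observed within ~100ms instead of up to 1s
  waitForDrained.wait(100);
  LOG.info("Waiting for AsyncDispatcher to drain. Thread state is :" +
      eventHandlingThread.getState());
}
{code}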

> DrainDispatcher does not drain all events on stop even if 
> setDrainEventsOnStop is true
> --
>
> Key: YARN-5911
> URL: https://issues.apache.org/jira/browse/YARN-5911
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-5911.01.patch
>
>
> DrainDispatcher#serviceStop sets the stopped flag first before draining the 
> event queue.
> This means that the thread terminates as soon as it encounters stopped flag 
> as true and does not continue to process leftover events in queue, something 
> which it should do if setDrainEventsOnStop is set.






[jira] [Commented] (YARN-5892) Capacity Scheduler: Support user-specific minimum user limit percent

2016-11-18 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15677954#comment-15677954
 ] 

Wangda Tan commented on YARN-5892:
--

Linked this JIRA to YARN-5889. I think this is more like a special case of 
YARN-5889, but we need to make the API extensible and easier to use.

+ [~sunilg]

> Capacity Scheduler: Support user-specific minimum user limit percent
> 
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>   <value>25</value>
> </property>
> <property>
>   <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>   <value>75</value>
> </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5910) Support for multi-cluster delegation tokens

2016-11-18 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1569#comment-1569
 ] 

Allen Wittenauer commented on YARN-5910:


Related: 3.0.0-alpha1 added 'hadoop dtutil' and the hadoop.token.files 
property.  Between the two of them, it's very possible for end users to provide 
multiple DTs for multiple (and unrelated) clusters at job submission time.
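
A hedged sketch of that flow (paths, file names, and the remote URI are
illustrative):

{code}
# Fetch a delegation token from the remote cluster into a token file...
hadoop dtutil get hdfs://REMOTECLUSTER -format protobuf remote.dt

# ...then point the job at it via hadoop.token.files at submission time.
hadoop distcp -Dhadoop.token.files=remote.dt \
  hdfs://LOCALCLUSTER/user/user292/test.out \
  hdfs://REMOTECLUSTER/user/user292/test.out.result
{code}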

> Support for multi-cluster delegation tokens
> ---
>
> Key: YARN-5910
> URL: https://issues.apache.org/jira/browse/YARN-5910
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: security
>Reporter: Clay B.
>Priority: Minor
>
> As an administrator running many secure (kerberized) clusters, some which 
> have peer clusters managed by other teams, I am looking for a way to run jobs 
> which may require services running on other clusters. Particular cases where 
> this rears itself are running something as core as a distcp between two 
> kerberized clusters (e.g. {{hadoop --config /home/user292/conf/ distcp 
> hdfs://LOCALCLUSTER/user/user292/test.out 
> hdfs://REMOTECLUSTER/user/user292/test.out.result}}).
> Thanks to YARN-3021, one can run for a while, but if the delegation token for 
> the remote cluster needs renewal the job will fail[1]. One can pre-configure 
> the {{hdfs-site.xml}} loaded by the YARN RM to know of all possible HDFSes 
> available, but that requires coordination that is not always feasible, 
> especially as a cluster's peers grow into the tens of clusters or across 
> management teams. Ideally, core systems could be configured this way, but 
> jobs could also specify their own token handling and management when needed.
> [1]: Example stack trace when the RM is unaware of a remote service:
> 
> {code}
> 2016-03-23 14:59:50,528 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  application_1458441356031_3317 found existing hdfs token Kind: 
> HDFS_DELEGATION_TOKEN, Service: ha-hdfs:REMOTECLUSTER, Ident: 
> (HDFS_DELEGATION_TOKEN token
>  10927 for user292)
> 2016-03-23 14:59:50,557 WARN 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer:
>  Unable to add the application to the delegation token renewer.
> java.io.IOException: Failed to renew token: Kind: HDFS_DELEGATION_TOKEN, 
> Service: ha-hdfs:REMOTECLUSTER, Ident: (HDFS_DELEGATION_TOKEN token 10927 for 
> user292)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:427)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.access$700(DelegationTokenRenewer.java:78)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.handleDTRenewerAppSubmitEvent(DelegationTokenRenewer.java:781)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable.run(DelegationTokenRenewer.java:762)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.io.IOException: Unable to map logical nameservice URI 
> 'hdfs://REMOTECLUSTER' to a NameNode. Local configuration does not have a 
> failover proxy provider configured.
> at org.apache.hadoop.hdfs.DFSClient$Renewer.getNNProxy(DFSClient.java:1164)
> at org.apache.hadoop.hdfs.DFSClient$Renewer.renew(DFSClient.java:1128)
> at org.apache.hadoop.security.token.Token.renew(Token.java:377)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:516)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$1.run(DelegationTokenRenewer.java:513)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.renewToken(DelegationTokenRenewer.java:511)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.handleAppSubmitEvent(DelegationTokenRenewer.java:425)
> ... 6 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5886) Dynamically prioritize execution of opportunistic containers (NM queue reordering)

2016-11-18 Thread Wei Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1564#comment-1564
 ] 

Wei Chen commented on YARN-5886:


Hi,
This feature is very interesting. Microsoft also published a paper at ATC 
about this feature. Here are some of my concerns.

1. How does the local NM ContainerScheduler coordinate with the global 
scheduler, since the global scheduler will try to keep fairness and guarantee 
shares across the applications (queues)?

2. The NodeManager may not know (or be able to estimate) the runtime of a 
queued container. A false estimate (mistaking a long-running container for a 
short-running one) may cause serious results (inverted priority?).

> Dynamically prioritize execution of opportunistic containers (NM queue 
> reordering)
> --
>
> Key: YARN-5886
> URL: https://issues.apache.org/jira/browse/YARN-5886
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>
> Currently the {{ContainerScheduler}} in the NM picks the next queued 
> opportunistic container to be executed in a FIFO manner. That is, we first 
> execute containers that arrived first at the NM.
> This JIRA proposes to add pluggable queue reordering strategies at the NM 
> that will dynamically determine which opportunistic container will be 
> executed next.
> For example, we can choose to prioritize containers that belong to jobs which 
> are closer to completion, or containers that are short-running (if such 
> information is available).
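
As a concrete illustration of the pluggable idea, here is a minimal sketch of 
one possible reordering strategy (class and field names are illustrative, not 
the actual NM {{ContainerScheduler}} API):
{code}
import java.util.Comparator;

// A queued opportunistic container as seen by the reordering policy.
final class QueuedContainer {
  final long arrivalTimeMs;       // when the container reached the NM queue
  final long estimatedRuntimeMs;  // -1 if no runtime estimate is available

  QueuedContainer(long arrivalTimeMs, long estimatedRuntimeMs) {
    this.arrivalTimeMs = arrivalTimeMs;
    this.estimatedRuntimeMs = estimatedRuntimeMs;
  }
}

// Prefer short-running containers when an estimate exists; otherwise fall
// back to FIFO (arrival order), which is the current behavior.
final class ShortestRuntimeFirst implements Comparator<QueuedContainer> {
  @Override
  public int compare(QueuedContainer a, QueuedContainer b) {
    boolean aKnown = a.estimatedRuntimeMs >= 0;
    boolean bKnown = b.estimatedRuntimeMs >= 0;
    if (aKnown && bKnown) {
      return Long.compare(a.estimatedRuntimeMs, b.estimatedRuntimeMs);
    }
    if (aKnown != bKnown) {
      return aKnown ? -1 : 1;     // containers with known runtimes first
    }
    return Long.compare(a.arrivalTimeMs, b.arrivalTimeMs);
  }
}
{code}
As Wei Chen notes in the comment above, a wrong runtime estimate would invert 
the intended priority, so any such policy needs a conservative fallback.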



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5914) NodeManager will report "Error: No such image, container or task" when DockerContainerLauncher launches a container

2016-11-18 Thread Wei Chen (JIRA)
Wei Chen created YARN-5914:
--

 Summary: NodeManager will report "Error: No such image, container 
or task" when DockerContainerLauncher launches a container
 Key: YARN-5914
 URL: https://issues.apache.org/jira/browse/YARN-5914
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 2.6.0
 Environment: Ubuntu 16.04, Docker1.12.1, 
Spark-2.0.1,Spark-1.6.2,Hadoop2.6.0,Hadoop-2.7.1
Reporter: Wei Chen


Hi, all

I have tested MapReduce and Spark (1.6.2, 2.0.1) with Docker execution enabled.  
I found DockerContainerExecutor will report "Error: No such image, container or 
task" each time it launches a task container.  Then I checked the 
docker_container_executor_session.sh

and found this:

echo `/usr/bin/docker inspect --format {{.State.Pid}} 
container_1479428705108_0002_01_01` > 
/home/cwei/project/hadoop-2.7.3/yarn-temp/nm-local-dir/nmPrivate/application_1479428705108_0002/container_1479428705108_0002_01_01/container_1479428705108_0002_01_01.pid.tmp

/bin/mv -f 
/home/cwei/project/hadoop-2.7.3/yarn-temp/nm-local-dir/nmPrivate/application_1479428705108_0002/container_1479428705108_0002_01_01/container_1479428705108_0002_01_01.pid.tmp
 
/home/cwei/project/hadoop-2.7.3/yarn-temp/nm-local-dir/nmPrivate/application_1479428705108_0002/container_1479428705108_0002_01_01/container_1479428705108_0002_01_01.pid


/usr/bin/docker run --memory=1024m --memory-swap -1 -it --net=host  --name 
container_1479428705108_0002_01_01 -v 
/home/cwei/project/hadoop-2.7.3/yarn-temp/nm-local-dir:/home/cwei/project/hadoop-2.7.3/yarn-temp/nm-local-dir
 -v 
/home/cwei/project/hadoop-2.7.3/logs/userlogs:/home/cwei/project/hadoop-2.7.3/logs/userlogs
 -v 
/home/cwei/project/hadoop-2.7.3/yarn-temp/nm-local-dir/usercache/cwei/appcache/application_1479428705108_0002/container_1479428705108_0002_01_01:/home/cwei/project/hadoop-2.7.3/yarn-temp/nm-local-dir/usercache/cwei/appcache/application_1479428705108_0002/container_1479428705108_0002_01_01
 sequenceiq/hadoop-docker:2.7.1 bash 
"/home/cwei/project/hadoop-2.7.3/yarn-temp/nm-local-dir/usercache/cwei/appcache/application_1479428705108_0002/container_1479428705108_0002_01_01/launch_container.sh"


Since `/usr/bin/docker inspect --format {{.State.Pid}} 
container_1479428705108_0002_01_01` is called before the container is 
launched by `docker run ...`, it always causes this error log message.
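
For illustration, the expected ordering would be roughly as follows (a sketch 
only; the elided path and names are taken from the generated script above):
{code}
CID=container_1479428705108_0002_01_01

# Start the container first (detached), so it exists before it is inspected.
/usr/bin/docker run -d --name "$CID" sequenceiq/hadoop-docker:2.7.1 \
  bash .../launch_container.sh

# Only now can `docker inspect` resolve the container name to a PID.
/usr/bin/docker inspect --format '{{.State.Pid}}' "$CID" > "$CID.pid.tmp"
/bin/mv -f "$CID.pid.tmp" "$CID.pid"
{code}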



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5646) Documentation for scheduling of OPPORTUNISTIC containers

2016-11-18 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15677692#comment-15677692
 ] 

Konstantinos Karanasos commented on YARN-5646:
--

I understand the concern about future work. I added it on purpose, so that 
people who read the documentation can get an idea of open items (and even 
contribute to them).
But if you all think it's not suitable, I can remove it.

[~kasha], I also included the motivation for over-commitment through 
opportunistic containers, but made clear in the text that we do not yet support 
it. Once over-commitment is also available, we will update the document.

> Documentation for scheduling of OPPORTUNISTIC containers
> 
>
> Key: YARN-5646
> URL: https://issues.apache.org/jira/browse/YARN-5646
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-5646.001.patch
>
>
> This is for adding documentation regarding the scheduling of OPPORTUNISTIC 
> containers.
> It includes both the centralized (YARN-5220) and the distributed (YARN-2877) 
> scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4108) CapacityScheduler: Improve preemption to only kill containers that would satisfy the incoming request

2016-11-18 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-4108:
-
Fix Version/s: 2.8.0

Thanks [~leftnoteasy] . I also backported this to branch-2.8

> CapacityScheduler: Improve preemption to only kill containers that would 
> satisfy the incoming request
> -
>
> Key: YARN-4108
> URL: https://issues.apache.org/jira/browse/YARN-4108
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-4108-design-doc-V3.pdf, 
> YARN-4108-design-doc-v1.pdf, YARN-4108-design-doc-v2.pdf, YARN-4108.1.patch, 
> YARN-4108.10.patch, YARN-4108.11.patch, YARN-4108.2.patch, YARN-4108.3.patch, 
> YARN-4108.4.patch, YARN-4108.5.patch, YARN-4108.6.patch, YARN-4108.7.patch, 
> YARN-4108.8.patch, YARN-4108.9.patch, YARN-4108.poc.1.patch, 
> YARN-4108.poc.2-WIP.patch, YARN-4108.poc.3-WIP.patch, 
> YARN-4108.poc.4-WIP.patch
>
>
> This is a sibling JIRA for YARN-2154. We should make sure container preemption 
> is more effective.
> *Requirements:*
> 1) Can handle the case of user-limit preemption
> 2) Can handle the case of resource placement requirements, such as: hard-locality 
> (I only want to use rack-1) / node-constraints (YARN-3409) / black-list (I 
> don't want to use rack1 and host\[1-3\])
> 3) Can handle preemption within a queue: cross-user preemption (YARN-2113), 
> cross-application preemption (such as priority-based (YARN-1963) / 
> fairness-based (YARN-3319)).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5646) Documentation for scheduling of OPPORTUNISTIC containers

2016-11-18 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15677517#comment-15677517
 ] 

Arun Suresh commented on YARN-5646:
---

I like the future work part, but I don't know if we should have that section...
This is more a usage manual for end users... not sure if it should be linked 
to JIRA tickets.

> Documentation for scheduling of OPPORTUNISTIC containers
> 
>
> Key: YARN-5646
> URL: https://issues.apache.org/jira/browse/YARN-5646
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-5646.001.patch
>
>
> This is for adding documentation regarding the scheduling of OPPORTUNISTIC 
> containers.
> It includes both the centralized (YARN-5220) and the distributed (YARN-2877) 
> scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5646) Documentation for scheduling of OPPORTUNISTIC containers

2016-11-18 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15677361#comment-15677361
 ] 

Arun Suresh commented on YARN-5646:
---

[~kasha], [~leftnoteasy], [~jianhe].. do let us know what you think of the 
documentation.

> Documentation for scheduling of OPPORTUNISTIC containers
> 
>
> Key: YARN-5646
> URL: https://issues.apache.org/jira/browse/YARN-5646
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-5646.001.patch
>
>
> This is for adding documentation regarding the scheduling of OPPORTUNISTIC 
> containers.
> It includes both the centralized (YARN-5220) and the distributed (YARN-2877) 
> scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5646) Documentation for scheduling of OPPORTUNISTIC containers

2016-11-18 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15677362#comment-15677362
 ] 

Arun Suresh commented on YARN-5646:
---

[~subru] too..

> Documentation for scheduling of OPPORTUNISTIC containers
> 
>
> Key: YARN-5646
> URL: https://issues.apache.org/jira/browse/YARN-5646
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-5646.001.patch
>
>
> This is for adding documentation regarding the scheduling of OPPORTUNISTIC 
> containers.
> It includes both the centralized (YARN-5220) and the distributed (YARN-2877) 
> scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5792) adopt the id prefix for YARN, MR, and DS entities

2016-11-18 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15677355#comment-15677355
 ] 

Varun Saxena commented on YARN-5792:


Checkstyle issues are related to constructor param length and cannot be fixed.

> adopt the id prefix for YARN, MR, and DS entities
> -
>
> Key: YARN-5792
> URL: https://issues.apache.org/jira/browse/YARN-5792
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
> Attachments: YARN-5792-YARN-5355.01.patch, 
> YARN-5792-YARN-5355.02.patch, YARN-5792-YARN-5355.03.patch, 
> YARN-5792-YARN-5355.04.patch, YARN-5792-YARN-5355.05.patch, 
> YARN-5792-YARN-5355.06.patch
>
>
> We introduced the entity id prefix to support flexible entity sorting 
> (YARN-5715). We should adopt the id prefix for YARN entities, MR entities, 
> and DS entities to take advantage of the id prefix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5667) Move HBase backend code in ATS v2 into its separate module

2016-11-18 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15677340#comment-15677340
 ] 

Haibo Chen commented on YARN-5667:
--

My first version of the patch was actually created for trunk. Based on that 
experience, it should be manageable to create one patch for trunk and another for 
YARN-5355. To make things easier and move quicker, I'll break this change into 
a few pieces and create a subtask for each of them.

> Move HBase backend code in ATS v2  into its separate module
> ---
>
> Key: YARN-5667
> URL: https://issues.apache.org/jira/browse/YARN-5667
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Blocker
> Attachments: New module structure.png, part1.yarn5667.prelim.patch, 
> part2.yarn5667.prelim.patch, part3.yarn5667.prelim.patch, 
> part4.yarn5667.prelim.patch, part5.yarn5667.prelim.patch, 
> pt1.yarn5667.001.patch, pt2.yarn5667.001.patch, pt3.yarn5667.001.patch, 
> pt4.yarn5667.001.patch, pt5.yarn5667.001.patch, pt6.yarn5667.001.patch, 
> pt9.yarn5667.001.patch, yarn5667-001.tar.gz
>
>
> The HBase backend code currently lives along with the core ATS v2 code in 
> hadoop-yarn-server-timelineservice module. Because Resource Manager depends 
> on hadoop-yarn-server-timelineservice, an unnecessary dependency of the RM 
> module on HBase modules is introduced (HBase backend is pluggable, so we do 
> not need to directly pull in HBase jars). 
> In our internal effort to try ATS v2 with HBase 2.0, which depends on Hadoop 
> 3, we encountered a circular dependency during our builds between HBase 2.0 
> and Hadoop 3 artifacts.
> {code}
> [hadoop-mapreduce-client-common, hadoop-yarn-client, 
> hadoop-yarn-server-resourcemanager, hadoop-yarn-server-timelineservice, 
> hbase-server, hbase-prefix-tree, hbase-hadoop2-compat, 
> hadoop-mapreduce-client-jobclient, hadoop-mapreduce-client-common]
> {code}
> This jira proposes we move all HBase-backend-related code from 
> hadoop-yarn-server-timelineservice into its own module (possible name is 
> yarn-server-timelineservice-storage) so that core RM modules do not depend on 
> HBase modules any more.
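
For illustration, the server module list would change roughly as follows 
(a sketch only; the new module name is the tentative one from the description):
{code}
<modules>
  <module>hadoop-yarn-server-timelineservice</module>
  <!-- new: HBase backend split out so RM no longer pulls in HBase jars -->
  <module>hadoop-yarn-server-timelineservice-storage</module>
</modules>
{code}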



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Moved] (YARN-5913) Consolidate "resource" and "amResourceRequest" in ApplicationSubmissionContext

2016-11-18 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu moved MAPREDUCE-6813 to YARN-5913:
---

Component/s: (was: resourcemanager)
 resourcemanager
Key: YARN-5913  (was: MAPREDUCE-6813)
Project: Hadoop YARN  (was: Hadoop Map/Reduce)

> Consolidate "resource" and "amResourceRequest" in ApplicationSubmissionContext
> --
>
> Key: YARN-5913
> URL: https://issues.apache.org/jira/browse/YARN-5913
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Yufei Gu
>Priority: Minor
>  Labels: newbie
>
> Usage of these two variables overlaps and causes confusion. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5911) DrainDispatcher does not drain all events on stop even if setDrainEventsOnStop is true

2016-11-18 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15677090#comment-15677090
 ] 

Bibin A Chundatt commented on YARN-5911:


Thank you [~varun_saxena] for the jira.
Overall the patch looks good to me. One minor comment:
# Rename the test to {{testDrainDispatcherDrainEventsOnStop}} to make the test 
case clearer.


> DrainDispatcher does not drain all events on stop even if 
> setDrainEventsOnStop is true
> --
>
> Key: YARN-5911
> URL: https://issues.apache.org/jira/browse/YARN-5911
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-5911.01.patch
>
>
> DrainDispatcher#serviceStop sets the stopped flag first before draining the 
> event queue.
> This means that the thread terminates as soon as it encounters stopped flag 
> as true and does not continue to process leftover events in queue, something 
> which it should do if setDrainEventsOnStop is set.
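
For illustration, a minimal sketch of the race and the fix (simplified; not 
the actual {{DrainDispatcher}} code):
{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class SketchDispatcher {
  private final BlockingQueue<Runnable> eventQueue = new LinkedBlockingQueue<>();
  private volatile boolean stopped = false;
  private final boolean drainEventsOnStop = true;

  // The dispatcher thread exits as soon as it observes stopped == true,
  // regardless of how many events are still queued.
  private final Thread dispatcher = new Thread(() -> {
    while (!stopped) {
      Runnable event = eventQueue.poll();
      if (event != null) {
        event.run();
      }
    }
  });

  void serviceStart() {
    dispatcher.start();
  }

  void serviceStop() throws InterruptedException {
    // The bug: raising the stopped flag first lets the thread exit with
    // events still queued. With drainEventsOnStop set, the queue must be
    // drained BEFORE the stop flag is raised, as below.
    if (drainEventsOnStop) {
      while (!eventQueue.isEmpty()) {
        Thread.sleep(10);
      }
    }
    stopped = true;
    dispatcher.join();
  }
}
{code}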



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5859) TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource sometimes fails

2016-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676921#comment-15676921
 ] 

Hadoop QA commented on YARN-5859:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 3 new + 92 unchanged - 2 fixed = 95 total (was 94) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
36s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5859 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839563/YARN-5859.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8cc40835e5bd 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f6ffa11 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13975/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13975/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13975/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource 
> sometimes fails
> 

[jira] [Updated] (YARN-5902) yarn.scheduler.increment-allocation-mb and yarn.scheduler.increment-allocation-vcores are undocumented in yarn-default.xml

2016-11-18 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5902:
---
Summary: yarn.scheduler.increment-allocation-mb and 
yarn.scheduler.increment-allocation-vcores are undocumented in yarn-default.xml 
 (was: yarn.scheduler.increment-allocation-mb and 
yarn.scheduler.increment-allocation-vcores are undocumented)

> yarn.scheduler.increment-allocation-mb and 
> yarn.scheduler.increment-allocation-vcores are undocumented in 
> yarn-default.xml
> --
>
> Key: YARN-5902
> URL: https://issues.apache.org/jira/browse/YARN-5902
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-5902.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-4770) Auto-restart of containers should work across NM restarts.

2016-11-18 Thread Jun Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Gong resolved YARN-4770.

Resolution: Not A Bug

> Auto-restart of containers should work across NM restarts.
> --
>
> Key: YARN-4770
> URL: https://issues.apache.org/jira/browse/YARN-4770
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>
> See my comment 
> [here|https://issues.apache.org/jira/browse/YARN-3998?focusedCommentId=15133367=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15133367]
>  on YARN-3998. Need to take care of two things:
>  - The relaunch feature needs to work across NM restarts, so we should save 
> the retry-context and policy per container into the state-store and reload it 
> to continue relaunching after NM restart.
>  - We should also handle restarting of any containers that may have crashed 
> during the NM reboot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5859) TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource sometimes fails

2016-11-18 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-5859:
--
Attachment: YARN-5859.002.patch

Changed all timeouts to be 5 seconds within the test. 

> TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource 
> sometimes fails
> -
>
> Key: YARN-5859
> URL: https://issues.apache.org/jira/browse/YARN-5859
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Eric Badger
> Attachments: YARN-5859.001.patch, YARN-5859.002.patch
>
>
> Saw the following test failure:
> {noformat}
> Running 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
> Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.011 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
> testParallelDownloadAttemptsForPublicResource(org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService)
>   Time elapsed: 0.586 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testParallelDownloadAttemptsForPublicResource(TestResourceLocalizationService.java:2108)
> {noformat}
> The assert occurred at this place in the code:
> {code}
>   // Waiting for download to start.
>   Assert.assertTrue(waitForPublicDownloadToStart(spyService, 1, 200));
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-11-18 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676823#comment-15676823
 ] 

Andras Bokor commented on YARN-4994:


[~jhung],
Thanks for taking a look at it. I rebased my patch. The [~hadoopqa] -1s seem 
unrelated:
* The two JUnit failures seem unrelated and will be fixed by YARN-5728 and 
YARN-5851.
* The checkstyle warning is unrelated.
* The findbugs issue seems to be the same problem as YARN-5138.

> Use MiniYARNCluster with try-with-resources in tests
> 
>
> Key: YARN-4994
> URL: https://issues.apache.org/jira/browse/YARN-4994
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: oct16-easy
> Fix For: 2.7.0
>
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch, YARN-4994.04.patch, YARN-4994.05.patch, 
> YARN-4994.06.patch, YARN-4994.07.patch, YARN-4994.08.patch, YARN-4994.09.patch
>
>
> In tests, MiniYARNCluster is used with the following pattern:
> create a MiniYARNCluster instance in a try block and close it in a finally 
> block.
> [Try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html]
>  is preferred since Java7 instead of the pattern above.
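
For illustration, a minimal sketch of the two patterns (constructor arguments 
are illustrative; {{MiniYARNCluster}} is a {{Service}}, and {{Service}} extends 
{{Closeable}}, which is what makes try-with-resources applicable):
{code}
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;

// Old pattern: create the cluster, then stop it in a finally block.
MiniYARNCluster cluster = new MiniYARNCluster("test", 1, 1, 1);
try {
  cluster.init(new YarnConfiguration());
  cluster.start();
  // ... test body ...
} finally {
  cluster.stop();
}

// Preferred since Java 7: the cluster is closed automatically on exit.
try (MiniYARNCluster cluster2 = new MiniYARNCluster("test", 1, 1, 1)) {
  cluster2.init(new YarnConfiguration());
  cluster2.start();
  // ... test body ...
}
{code}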



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5859) TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource sometimes fails

2016-11-18 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676809#comment-15676809
 ] 

Jason Lowe commented on YARN-5859:
--

Thanks for the patch!  I noticed there are other rather low timeouts in these 
tests that weren't updated like these were, and I assume they could also fail if 
the test machine hiccups.
{code}
  // Resource Localization should fail and state is modified accordingly.
  // Also Local should be release on the LocalizedResource.
  Assert
.assertTrue(waitForResourceState(lr, rls, req,
  LocalResourceVisibility.PRIVATE, user, appId, ResourceState.FAILED,
  200));

[...]

  // Now waiting for resource download to start. Here actual will not start
  // Only the resources will be populated into pending list.
  Assert
.assertTrue(waitForPrivateDownloadToStart(rls, localizerId1, 2, 500));

[...]

  // Waiting for download to start. This should return false as new download
  // will not start
  Assert.assertFalse(waitForPublicDownloadToStart(spyService, 2, 100));

[...]

  // Waiting for download to start. This should return false as new download
  // will not start
  Assert.assertFalse(waitForPublicDownloadToStart(spyService, 1, 100));
{code}


> TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource 
> sometimes fails
> -
>
> Key: YARN-5859
> URL: https://issues.apache.org/jira/browse/YARN-5859
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>Assignee: Eric Badger
> Attachments: YARN-5859.001.patch
>
>
> Saw the following test failure:
> {noformat}
> Running 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
> Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.011 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
> testParallelDownloadAttemptsForPublicResource(org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService)
>   Time elapsed: 0.586 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testParallelDownloadAttemptsForPublicResource(TestResourceLocalizationService.java:2108)
> {noformat}
> The assert occurred at this place in the code:
> {code}
>   // Waiting for download to start.
>   Assert.assertTrue(waitForPublicDownloadToStart(spyService, 1, 200));
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4770) Auto-restart of containers should work across NM restarts.

2016-11-18 Thread Jun Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676781#comment-15676781
 ] 

Jun Gong commented on YARN-4770:


Hi [~jianhe], I just tested again and confirmed it: the container relaunches 
after an NM reboot. Closing this now.

> Auto-restart of containers should work across NM restarts.
> --
>
> Key: YARN-4770
> URL: https://issues.apache.org/jira/browse/YARN-4770
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Vinod Kumar Vavilapalli
>
> See my comment 
> [here|https://issues.apache.org/jira/browse/YARN-3998?focusedCommentId=15133367=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15133367]
>  on YARN-3998. Need to take care of two things:
>  - The relaunch feature needs to work across NM restarts, so we should save 
> the retry-context and policy per container into the state-store and reload it 
> to continue relaunching after NM restart.
>  - We should also handle restarting of any containers that may have crashed 
> during the NM reboot.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676635#comment-15676635
 ] 

Hadoop QA commented on YARN-4994:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} 
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  9m 13s{color} 
| {color:red} root generated 3 new + 688 unchanged - 3 fixed = 691 total (was 
691) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 37s{color} | {color:orange} root: The patch generated 1 new + 199 unchanged 
- 4 fixed = 200 total (was 203) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} 
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 41s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
24s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
53s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-archive-logs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestContainerManagerSecurity |
|   | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-4994 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839545/YARN-4994.09.patch |
| Optional Tests |  asflicense  compile  javac  

[jira] [Commented] (YARN-5912) [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI

2016-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676567#comment-15676567
 ] 

Hadoop QA commented on YARN-5912:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  0m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | YARN-5912 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839549/YARN-5912.001.patch |
| Optional Tests |  asflicense  |
| uname | Linux adb8313d8e20 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f6ffa11 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13974/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI
> --
>
> Key: YARN-5912
> URL: https://issues.apache.org/jira/browse/YARN-5912
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Minor
> Attachments: YARN-5912.001.patch
>
>
> Fix breadcrumb issues in yarn-node-app and yarn-node-container pages in new 
> YARN UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5912) [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI

2016-11-18 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5912:
---
Attachment: YARN-5912.001.patch

> [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI
> --
>
> Key: YARN-5912
> URL: https://issues.apache.org/jira/browse/YARN-5912
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Minor
> Attachments: YARN-5912.001.patch
>
>
> Fix breadcrumb issues in yarn-node-app and yarn-node-container pages in new 
> YARN UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5912) [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI

2016-11-18 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5912:
--
Priority: Minor  (was: Major)

> [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI
> --
>
> Key: YARN-5912
> URL: https://issues.apache.org/jira/browse/YARN-5912
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Minor
>
> Fix breadcrumb issues in yarn-node-app and yarn-node-container pages in new 
> YARN UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5888) [YARN-3368] Improve unit tests for YARN UI

2016-11-18 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5888:
--
   Priority: Minor  (was: Major)
Description: 
- Add missing test cases in new YARN UI
- Fix test cases errors in new YARN UI 

  was:
Add missing test cases in new YARN UI
Fix test cases errors in new YARN UI 

Summary: [YARN-3368] Improve unit tests for YARN UI  (was: [YARN-3368] 
Add test cases in new YARN UI)

> [YARN-3368] Improve unit tests for YARN UI
> --
>
> Key: YARN-5888
> URL: https://issues.apache.org/jira/browse/YARN-5888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Minor
> Attachments: YARN-5888.001.patch
>
>
> - Add missing test cases in new YARN UI
> - Fix test cases errors in new YARN UI 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-11-18 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated YARN-4994:
---
Attachment: YARN-4994.09.patch

> Use MiniYARNCluster with try-with-resources in tests
> 
>
> Key: YARN-4994
> URL: https://issues.apache.org/jira/browse/YARN-4994
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
>  Labels: oct16-easy
> Fix For: 2.7.0
>
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch, YARN-4994.04.patch, YARN-4994.05.patch, 
> YARN-4994.06.patch, YARN-4994.07.patch, YARN-4994.08.patch, YARN-4994.09.patch
>
>
> In tests, MiniYARNCluster is used with the following pattern:
> create a MiniYARNCluster instance in a try block and close it in a finally 
> block.
> [Try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html]
>  is preferred since Java7 instead of the pattern above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5792) adopt the id prefix for YARN, MR, and DS entities

2016-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676442#comment-15676442
 ] 

Hadoop QA commented on YARN-5792:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 10 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
12s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
24s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
36s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 8s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
12s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  2s{color} | {color:orange} root: The patch generated 14 new + 1180 
unchanged - 127 fixed = 1194 total (was 1307) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} 
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core 
generated 0 new + 2490 unchanged - 6 fixed = 2490 total (was 2496) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 47s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 39m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
22s{color} | {color:green} hadoop-yarn-applications-distributedshell in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
50s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} 

[jira] [Commented] (YARN-5888) [YARN-3368] Add test cases in new YARN UI

2016-11-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676403#comment-15676403
 ] 

Hadoop QA commented on YARN-5888:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-5888 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5888 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839542/YARN-5888.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13972/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add test cases in new YARN UI
> -
>
> Key: YARN-5888
> URL: https://issues.apache.org/jira/browse/YARN-5888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-5888.001.patch
>
>
> Add missing test cases in new YARN UI
> Fix test cases errors in new YARN UI 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5888) [YARN-3368] Add test cases in new YARN UI

2016-11-18 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5888:
---
Attachment: YARN-5888.001.patch

> [YARN-3368] Add test cases in new YARN UI
> -
>
> Key: YARN-5888
> URL: https://issues.apache.org/jira/browse/YARN-5888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
> Attachments: YARN-5888.001.patch
>
>
> Add missing test cases in new YARN UI
> Fix test cases errors in new YARN UI 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3881) Writing RM cluster-level metrics

2016-11-18 Thread Bingxue Qiu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676313#comment-15676313
 ] 

Bingxue Qiu commented on YARN-3881:
---

 Hi [~zjshen], I haven't found the totalVirtualCores / totalMB cluster 
metrics in metrics.json. Maybe it's necessary to show the waterline 
trends when the nodes change, e.g., when nodes are added or fail?

> Writing RM cluster-level metrics
> 
>
> Key: YARN-3881
> URL: https://issues.apache.org/jira/browse/YARN-3881
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
>  Labels: YARN-5355
> Attachments: metrics.json
>
>
> RM has a bunch of metrics that we may want to write into the timeline backend 
> to. I attached the metrics.json that I've crawled via 
> {{http://localhost:8088/jmx?qry=Hadoop:*}}. IMHO, we need to pay attention to 
> three groups of metrics:
> 1. QueueMetrics
> 2. JvmMetrics
> 3. ClusterMetrics
> The problem is that unlike other metrics that belong to a single application, 
> these belong to the RM or are cluster-wide. Therefore, the current write path is 
> not going to work for these metrics because they don't have the associated 
> user/flow/app context info. We need to rethink the modeling of cross-app metrics 
> and the api to handle them.
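
For reference, the metrics above can be re-crawled with a one-liner (the 
{{jq}} filter is illustrative):
{code}
curl -s 'http://localhost:8088/jmx?qry=Hadoop:*' | jq '.beans[].name'
{code}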



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5912) [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI

2016-11-18 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5912:
---
Description: 
Fix breadcrumb issues in yarn-node-app and yarn-node-container pages in new 
YARN UI.


  was:
Fix breadcrumbs issues in yarn-node-app and yarn-node-container pages in new 
YARN UI.



> [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI
> --
>
> Key: YARN-5912
> URL: https://issues.apache.org/jira/browse/YARN-5912
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>
> Fix breadcrumb issues in yarn-node-app and yarn-node-container pages in new 
> YARN UI.






[jira] [Updated] (YARN-5912) [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI

2016-11-18 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5912:
---
Description: 
Fix breadcrumbs issues in yarn-node-app and yarn-node-container pages in new 
YARN UI.


  was:
Fix breadcrumbs issues in yarn-node-app and yarn-node-container pages.



> [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI
> --
>
> Key: YARN-5912
> URL: https://issues.apache.org/jira/browse/YARN-5912
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>
> Fix breadcrumbs issues in yarn-node-app and yarn-node-container pages in new 
> YARN UI.






[jira] [Updated] (YARN-5912) [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI

2016-11-18 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5912:
---
Summary: [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI 
 (was: [YARN-3368] Fix breadcrumb issues in yarn-node page)

> [YARN-3368] Fix breadcrumb issues in yarn-node page in new YARN UI
> --
>
> Key: YARN-5912
> URL: https://issues.apache.org/jira/browse/YARN-5912
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>
> Fix breadcrumbs issues in yarn-node-app and yarn-node-container pages.






[jira] [Updated] (YARN-5912) [YARN-3368] Fix breadcrumb issues in yarn-node page

2016-11-18 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5912:
---
Description: 
Fix breadcrumbs issues in yarn-node-app and yarn-node-container pages.


  was:
Breadcrumbs in yarn-node-app and yarn-node-container pages does not work.
Fix


> [YARN-3368] Fix breadcrumb issues in yarn-node page
> ---
>
> Key: YARN-5912
> URL: https://issues.apache.org/jira/browse/YARN-5912
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>
> Fix breadcrumbs issues in yarn-node-app and yarn-node-container pages.






[jira] [Updated] (YARN-5912) [YARN-3368] Fix breadcrumb issues in yarn-node page

2016-11-18 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5912:
---
Description: 
Breadcrumbs in yarn-node-app and yarn-node-container pages does not work.
Fix

> [YARN-3368] Fix breadcrumb issues in yarn-node page
> ---
>
> Key: YARN-5912
> URL: https://issues.apache.org/jira/browse/YARN-5912
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>
> Breadcrumbs in yarn-node-app and yarn-node-container pages does not work.
> Fix






[jira] [Assigned] (YARN-5912) [YARN-3368] Fix breadcrumb issues in yarn-node page

2016-11-18 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB reassigned YARN-5912:
--

Assignee: Akhil PB

> [YARN-3368] Fix breadcrumb issues in yarn-node page
> ---
>
> Key: YARN-5912
> URL: https://issues.apache.org/jira/browse/YARN-5912
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>







[jira] [Updated] (YARN-5912) [YARN-3368] Fix breadcrumb issues in yarn-node page

2016-11-18 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5912:
---
Component/s: yarn-ui-v2

> [YARN-3368] Fix breadcrumb issues in yarn-node page
> ---
>
> Key: YARN-5912
> URL: https://issues.apache.org/jira/browse/YARN-5912
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>







[jira] [Created] (YARN-5912) [YARN-3368] Fix breadcrumb issues in yarn-node page

2016-11-18 Thread Akhil PB (JIRA)
Akhil PB created YARN-5912:
--

 Summary: [YARN-3368] Fix breadcrumb issues in yarn-node page
 Key: YARN-5912
 URL: https://issues.apache.org/jira/browse/YARN-5912
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Akhil PB









[jira] [Commented] (YARN-5548) Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart

2016-11-18 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676168#comment-15676168
 ] 

Bibin A Chundatt commented on YARN-5548:


[~varun_saxena]
Will look into the same. IIUC, we have to replace MemoryStateStore with 
MockRMMemoryStateStore.
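
For illustration, a minimal, self-contained sketch of that idea; the class and method names here are hypothetical, not the actual MockRMMemoryStateStore API. The mock store counts down a latch when the app is removed, so the test can wait on the signal deterministically instead of racing the asynchronous removal that the failing assertNull trips over.

{noformat}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class MockStoreSketch {
  // Hypothetical stand-in for a mock RM state store: it records apps
  // and signals a latch when an app is removed.
  static class SignallingStore {
    final Map<String, String> apps = new ConcurrentHashMap<>();
    final CountDownLatch removed = new CountDownLatch(1);

    void storeApp(String appId) { apps.put(appId, "RMAPP_FINISHED"); }

    void removeApp(String appId) {
      apps.remove(appId);
      removed.countDown(); // tell the waiting test the removal happened
    }

    boolean awaitRemoval(long ms) throws InterruptedException {
      return removed.await(ms, TimeUnit.MILLISECONDS);
    }
  }

  public static void main(String[] args) throws Exception {
    SignallingStore store = new SignallingStore();
    store.storeApp("application_1");

    // Stand-in for the RM removing the finished app on another thread.
    new Thread(() -> store.removeApp("application_1")).start();

    // Wait on the signal instead of sleeping, so the null check below
    // can no longer race with the asynchronous removal.
    if (!store.awaitRemoval(5000)) {
      throw new AssertionError("app was not removed in time");
    }
    System.out.println("removed: " + (store.apps.get("application_1") == null));
  }
}
{noformat}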

> Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart
> --
>
> Key: YARN-5548
> URL: https://issues.apache.org/jira/browse/YARN-5548
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy, test
> Attachments: YARN-5548.0001.patch, YARN-5548.0002.patch, 
> YARN-5548.0003.patch
>
>
> https://builds.apache.org/job/PreCommit-YARN-Build/12850/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMRestart/testFinishedAppRemovalAfterRMRestart/
> {noformat}
> Error Message
> Stacktrace
> java.lang.AssertionError: expected null, but was: application_submission_context { application_id { id: 1 cluster_timestamp: 
> 1471885197388 } application_name: "" queue: "default" priority { priority: 0 
> } am_container_spec { } cancel_tokens_when_complete: true maxAppAttempts: 2 
> resource { memory: 1024 virtual_cores: 1 } applicationType: "YARN" 
> keep_containers_across_application_attempts: false 
> attempt_failures_validity_interval: 0 am_container_resource_request { 
> priority { priority: 0 } resource_name: "*" capability { memory: 1024 
> virtual_cores: 1 } num_containers: 0 relax_locality: true 
> node_label_expression: "" execution_type_request { execution_type: GUARANTEED 
> enforce_execution_type: false } } } user: "jenkins" start_time: 1471885197417 
> application_state: RMAPP_FINISHED finish_time: 1471885197478>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testFinishedAppRemovalAfterRMRestart(TestRMRestart.java:1656)
> {noformat}






[jira] [Commented] (YARN-5548) Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart

2016-11-18 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15676162#comment-15676162
 ] 

Varun Saxena commented on YARN-5548:


Yes, the scope of this JIRA can be expanded.
[~bibinchundatt], will you be handling this?

> Random test failure TestRMRestart#testFinishedAppRemovalAfterRMRestart
> --
>
> Key: YARN-5548
> URL: https://issues.apache.org/jira/browse/YARN-5548
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-easy, test
> Attachments: YARN-5548.0001.patch, YARN-5548.0002.patch, 
> YARN-5548.0003.patch
>
>
> https://builds.apache.org/job/PreCommit-YARN-Build/12850/testReport/org.apache.hadoop.yarn.server.resourcemanager/TestRMRestart/testFinishedAppRemovalAfterRMRestart/
> {noformat}
> Error Message
> Stacktrace
> java.lang.AssertionError: expected null, but was: application_submission_context { application_id { id: 1 cluster_timestamp: 
> 1471885197388 } application_name: "" queue: "default" priority { priority: 0 
> } am_container_spec { } cancel_tokens_when_complete: true maxAppAttempts: 2 
> resource { memory: 1024 virtual_cores: 1 } applicationType: "YARN" 
> keep_containers_across_application_attempts: false 
> attempt_failures_validity_interval: 0 am_container_resource_request { 
> priority { priority: 0 } resource_name: "*" capability { memory: 1024 
> virtual_cores: 1 } num_containers: 0 relax_locality: true 
> node_label_expression: "" execution_type_request { execution_type: GUARANTEED 
> enforce_execution_type: false } } } user: "jenkins" start_time: 1471885197417 
> application_state: RMAPP_FINISHED finish_time: 1471885197478>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMRestart.testFinishedAppRemovalAfterRMRestart(TestRMRestart.java:1656)
> {noformat}


